Advanced Multi-Cloud Architecture Labs

Master complex multi-cloud scenarios with hands-on labs covering GCP, Azure, AWS, and IBM Cloud. Build enterprise-grade solutions across multiple cloud platforms.

Multi-Cloud Advanced Labs - Module 7

Enterprise-level scenarios for multi-cloud architectures, containerization, and hybrid cloud deployments, practiced through realistic console interfaces.

Lab 19: Google Anthos Multi-Cloud Kubernetes
Multi-Cloud / Expert

Lab Scenario: Enterprise Multi-Cloud Kubernetes Platform

Business Context: You are a Cloud Architect at GlobalTech Solutions, a multinational corporation running workloads across GCP, AWS, and Azure. Your CTO has mandated a unified Kubernetes management platform to reduce operational complexity and improve security posture.

Your Mission: Deploy Google Anthos to create a centralized control plane that manages Kubernetes clusters across all three cloud providers. Implement service mesh for mTLS encryption, GitOps for configuration management, and enforce security policies across the entire infrastructure.

Key Objectives:

  • Deploy a production-grade GKE cluster with 5 nodes in us-central1
  • Enable Anthos Service Mesh (Istio) for traffic management and security
  • Register external AWS EKS and Azure AKS clusters with Anthos
  • Configure GitOps with Config Management for declarative infrastructure
  • Implement policy enforcement and compliance controls
  • Validate cluster health, connectivity, and security posture

Success Criteria: All configurations must be completed correctly with no validation errors. Your cluster must pass connectivity tests, compliance checks, and security scans.

Step-by-Step Instructions

  1. Step 1: Configure GKE Cluster Basics (Complete ALL fields in Basics tab)
    Cluster name: Enter anthos-prod-cluster
    Region: Select us-central1 (Iowa)
    Zone deployment strategy: Select "Regional (highest availability)" radio button
    Number of nodes: Enter 5
    Machine type: Select n1-standard-4 (4 vCPU, 15 GB) - Recommended
    Boot disk type: Select "Balanced persistent disk (SSD)"
    Boot disk size: Enter 200 GB
    Checkboxes to enable: Check "Enable Workload Identity" AND "Enable GKE Autopilot mode"
    Enable cluster autoscaling: Check this box
    Minimum nodes: Enter 3
    Maximum nodes: Enter 10
    Network tier: Select "Premium (global load balancing)"
    Release channel: Select "Stable (enterprise production)"
    Additional checkboxes: Check "Enable network policy (Calico)"
    Pro Tip: Workload Identity is crucial for secure authentication between GKE and Google Cloud services without service account keys. GKE Autopilot automatically manages cluster infrastructure, scaling, and security patching; note that in a real GCP project, Autopilot also manages node count and machine type for you, while this lab's simulated console accepts both settings. Regional deployment provides the highest availability by spreading nodes across multiple zones.
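For comparison, the Step 1 settings roughly correspond to the following gcloud command for a Standard-mode cluster (Autopilot clusters are instead created with `gcloud container clusters create-auto`, which does not take node or machine-type flags). This is a dry-run sketch that only prints the command; the project ID is hypothetical, and note that for a regional cluster `--num-nodes` is the count per zone, so 5 here would yield 15 nodes across 3 zones.

```shell
# Dry-run sketch of the gcloud equivalent of Step 1 (Standard mode).
# Assumes an authenticated gcloud SDK; PROJECT_ID is a placeholder.
CLUSTER_NAME="anthos-prod-cluster"
REGION="us-central1"
PROJECT_ID="my-project-id"   # hypothetical project ID

# Print the command instead of running it; remove "echo" to execute.
echo gcloud container clusters create "$CLUSTER_NAME" \
  --region "$REGION" \
  --num-nodes 5 \
  --machine-type n1-standard-4 \
  --disk-type pd-balanced \
  --disk-size 200 \
  --release-channel stable \
  --enable-autoscaling --min-nodes 3 --max-nodes 10 \
  --enable-network-policy \
  --workload-pool "${PROJECT_ID}.svc.id.goog"
```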
  2. Step 2: Enable Advanced Anthos Features (Switch to Features tab - Complete ALL fields)
    Click on the "Features" tab in the console.

    Required checkboxes (MUST check these 3):
    ✓ Check "Enable Anthos Service Mesh (Istio-based)"
    ✓ Check "Enable Config Management (GitOps)"
    ✓ Check "Enable Cloud Run for Anthos (Serverless on GKE)"

    Additional optional checkboxes (recommended):
    ✓ Check "Enable Cloud Operations for GKE" - for monitoring
    ✓ Check "Enable Backup for GKE" - for disaster recovery

    External Cluster Registration section:
    AWS EKS Cluster Endpoint URL: Enter https://ABC12345.eks.us-east-1.amazonaws.com
    AWS Region: Select "us-east-1 (N. Virginia)"
    Azure AKS Cluster Name: Enter aks-east-prod
    Azure Resource Group: Enter rg-aks-production
    Connectivity mode: Select "Private (VPN/Interconnect)"
    Explanation: Anthos Service Mesh provides mTLS, traffic routing, and distributed tracing. Config Management enables GitOps workflows, syncing cluster state from Git repositories. External cluster registration allows managing AWS EKS and Azure AKS from the Anthos control plane.
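For reference, registering an external cluster with the Anthos fleet is typically done from the CLI; a dry-run sketch in which the membership name and kubeconfig context are hypothetical:

```shell
# Dry-run sketch: register an external (e.g. EKS) cluster with the fleet.
# Assumes a kubeconfig context named "eks-prod" already points at the cluster.
MEMBERSHIP_NAME="aws-eks-prod"   # hypothetical membership name
KUBE_CONTEXT="eks-prod"          # hypothetical kubeconfig context

# Print the command instead of running it; remove "echo" to execute.
# Repeat with an AKS context to register the Azure cluster.
echo gcloud container fleet memberships register "$MEMBERSHIP_NAME" \
  --context "$KUBE_CONTEXT" \
  --kubeconfig "$HOME/.kube/config"
```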
  3. Step 3: Configure Service Mesh (Switch to Networking tab - Complete ALL fields)
    Click on the "Networking" tab in the console.

    Service Mesh Configuration section:
    Istio Version: Select 1.18 (recommended)
    Enable automatic mTLS: Check this checkbox (REQUIRED for encryption)
    Ingress Gateway Mode: Select "AUTO (recommended)"
    Egress Gateway Mode: Select AUTO (automatic egress routing)
    Traffic management policy: Select "Round Robin"
    Enable distributed tracing: Check this checkbox for observability
    Note: Service Mesh provides automatic mTLS between all services, encrypting traffic without code changes. Egress AUTO mode automatically manages outbound connections to external services. Distributed tracing enables request flow visualization across microservices.
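Under the hood, the automatic-mTLS checkbox corresponds to an Istio PeerAuthentication policy; a minimal mesh-wide sketch using the standard Istio API, shown for reference:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  # Applying in the root namespace makes the policy mesh-wide
  namespace: istio-system
spec:
  mtls:
    # STRICT requires mTLS for all workload-to-workload traffic
    mode: STRICT
```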
  4. Step 4: Configure Policy & Compliance (Switch to Security tab - Complete ALL fields)
    Click on the "Security" tab in the console.

    Policy & Compliance Management section:
    Git Repository URL (Config Sync): Enter https://github.com/your-org/k8s-config
    (Use your actual GitHub/GitLab repo or this example URL)
    Git Branch: Enter main
    Sync Interval (seconds): Enter 30
    Authentication method: Select "Personal Access Token"
    Enable policy enforcement: Check this checkbox (REQUIRED)
    Enable Binary Authorization: Check this for image verification
    Policy bundle: Select "CIS Kubernetes Benchmark"
    Security: Config Sync automatically applies Kubernetes manifests from Git. 30-second sync interval ensures rapid configuration updates. Policy Controller enforces organizational policies like pod security standards, resource quotas, and network policies.
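The Git settings above map onto a Config Sync RootSync object; a sketch assuming token authentication stored in a Kubernetes Secret named git-creds (a hypothetical name):

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/your-org/k8s-config
    branch: main
    # Sync interval from Step 4
    period: 30s
    auth: token
    secretRef:
      # Hypothetical Secret holding the personal access token
      name: git-creds
```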
  5. Step 5: Review All Tabs and Verify Configuration
    Review checklist - go through each tab:

    ✓ Basics tab: Cluster name, region (us-central1), 5 nodes, n1-standard-4, Workload Identity & Autopilot enabled
    ✓ Features tab: Service Mesh, Config Management, Cloud Run all enabled; EKS & AKS configured
    ✓ Networking tab: Istio 1.18, mTLS enabled, Egress=AUTO
    ✓ Security tab: Git repo URL (https://...), sync interval=30, policy enforcement enabled
    Validation: Double-check all required fields are filled. The validation will check for exact values (us-central1, 5 nodes, n1-standard-4, version 1.18, sync 30 seconds).
  6. Step 6: Create Cluster and Validate
    After completing all fields in all four tabs:

    1. Scroll to bottom of console panel
    2. Click the blue "Create Anthos Cluster" button
    3. Review validation messages - if errors appear, fix them by revisiting the mentioned tabs
    4. Once the "Success!" message appears, the cluster dashboard will be displayed
    5. Click "Validate Configuration" button below the console to check your score
    6. Use diagnostic buttons to test connectivity, security, GitOps status, and metrics
    Testing: After successful creation, test: (1) Connectivity - verifies multi-cloud connections, (2) Security Scan - checks mTLS and policies, (3) GitOps Status - confirms Git sync, (4) View Metrics - shows cluster performance.
About Anthos: Anthos is Google Cloud's hybrid and multi-cloud platform that extends GKE to on-premises and other cloud environments, providing consistent deployment and policy management.

Lab 20: Azure Arc Hybrid Infrastructure
Hybrid Cloud / Advanced

Lab Scenario: Hybrid Cloud Management Platform

Business Context: You are the Infrastructure Lead at FinanceCore, a financial services company with on-premises datacenters, AWS workloads, and GCP resources. Regulatory requirements mandate centralized governance and monitoring of all infrastructure regardless of location.

Your Mission: Implement Azure Arc to establish a single control plane for managing servers and Kubernetes clusters across all environments. Configure policy enforcement for compliance, enable comprehensive monitoring, and implement GitOps for consistent deployments.

Key Objectives:

  • Create resource group "rg-arc-hybrid" with proper tagging
  • Register minimum 3 servers from different environments
  • Enable Arc for on-premises Kubernetes clusters
  • Deploy Arc Security Baseline policy initiative
  • Configure Log Analytics with 60-second collection interval
  • Enable VM Insights and Azure Sentinel for security

Success Criteria: All hybrid resources must be visible in Azure Portal with policy compliance above 95% and monitoring data flowing to Log Analytics.

Step-by-Step Instructions

  1. Step 1: Create Resource Group (Complete ALL fields in Basics tab)
    Ensure you are on the "Basics" tab

    Resource Group section:
    Resource group name: Enter rg-arc-hybrid (EXACT name required)
    Subscription: Select any subscription (Production/Development/Test)
    Region: Select East US

    Tags section (BOTH required):
    Environment: Enter Production
    Cost Center: Enter IT-001
    Owner: (Optional) Enter your team name, e.g., "Platform Team"
    Application: (Optional) Enter "Arc Hybrid Management"
    Best Practice: Resource groups are logical containers for Azure resources. The exact name "rg-arc-hybrid" is validated. Tags enable governance, cost tracking, and automated resource management through Azure Policy.
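The same resource group can be created with the Azure CLI; a dry-run sketch that only prints the command:

```shell
# Dry-run sketch of the Azure CLI equivalent of Step 1.
RG_NAME="rg-arc-hybrid"
LOCATION="eastus"

# Print the command instead of running it; remove "echo" to execute
# with an authenticated Azure CLI session.
echo az group create \
  --name "$RG_NAME" \
  --location "$LOCATION" \
  --tags Environment=Production CostCenter=IT-001
```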
  2. Step 2: Register Arc-Enabled Servers (Switch to Servers tab - Complete fields)
    Click on the "Servers" tab

    Arc-Enabled Servers section (enter at least 3 servers total):
    On-Premises Servers: Enter srv-onprem-01,srv-onprem-02,srv-onprem-03
    AWS EC2 Instance IDs: Enter i-0abc123def,i-0xyz789ghi (or leave blank if you entered 3+ on-prem)
    AWS Region: Select "us-east-1 (N. Virginia)" (if AWS instances provided)
    GCP Instance Names: Enter gcp-vm-01,gcp-vm-02 (or leave blank)
    GCP Project ID: Enter my-gcp-project-id (if GCP instances provided)
    Operating System Filter: Select "All Operating Systems"
    Checkboxes (optional): Check "Enable Microsoft Defender for Cloud" for security
    Explanation: Azure Arc installs the Connected Machine agent on each server, enabling Azure management capabilities. You need a minimum of 3 servers from any combination of environments (on-premises, AWS, or GCP). The agent makes only outbound HTTPS connections to Azure - no inbound ports are needed.
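On a real server, registration is performed with the Connected Machine agent's connect command, run on the target machine after installing azcmagent; a dry-run sketch with placeholder credentials:

```shell
# Dry-run sketch: connect one server to Azure Arc.
# The service principal and IDs below are placeholders you must supply.
RG_NAME="rg-arc-hybrid"
LOCATION="eastus"

# Print the command instead of running it; remove "echo" to execute.
# Only outbound HTTPS (443) to Azure is required, matching the note above.
echo azcmagent connect \
  --resource-group "$RG_NAME" \
  --location "$LOCATION" \
  --subscription-id "<subscription-id>" \
  --tenant-id "<tenant-id>" \
  --service-principal-id "<sp-app-id>" \
  --service-principal-secret "<sp-secret>"
```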
  3. Step 3: Configure Arc Kubernetes (Switch to Kubernetes tab - Complete ALL fields)
    Click on the "Kubernetes" tab

    Arc-Enabled Kubernetes Clusters section:
    On-Premises Cluster Name: Enter k8s-onprem-prod
    Cluster Location: Select "On-Premises Datacenter"
    Kubernetes Distribution: Select "Vanilla Kubernetes" (or your actual distribution)

    Required checkboxes (MUST check all 3):
    ✓ Check "Enable Azure Monitor for containers"
    ✓ Check "Enable Azure Policy"
    ✓ Check "Enable GitOps (Flux v2)"

    GitOps Configuration section (appears after checking GitOps):
    Source Control Provider: Select "GitHub"
    Repository URL: Enter https://github.com/org/k8s-manifests
    Branch: Enter main
    Path: Enter /clusters/production
    GitOps: Flux v2 automatically syncs manifests from Git to your cluster. When you check the GitOps checkbox, the GitOps Configuration section appears below - fill in all 4 fields there. This enables infrastructure-as-code for your Kubernetes workloads.
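The Arc connection and Flux v2 settings above have Azure CLI equivalents; a dry-run sketch (the configuration name gitops-prod is a hypothetical choice, and the connectedk8s and k8s-configuration CLI extensions are assumed to be installed):

```shell
# Dry-run sketch: connect the cluster to Arc, then attach a Flux v2
# GitOps configuration matching Step 3's fields.
CLUSTER="k8s-onprem-prod"
RG_NAME="rg-arc-hybrid"

# Print the commands instead of running them; remove "echo" to execute.
echo az connectedk8s connect --name "$CLUSTER" --resource-group "$RG_NAME"
echo az k8s-configuration flux create \
  --cluster-name "$CLUSTER" \
  --resource-group "$RG_NAME" \
  --cluster-type connectedClusters \
  --name gitops-prod \
  --url https://github.com/org/k8s-manifests \
  --branch main \
  --kustomization name=prod path=/clusters/production prune=true
```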
  4. Step 4: Configure Azure Policy (Switch to Policy tab - Complete ALL fields)
    Click on the "Policy" tab

    Policy Configuration section:
    Policy Initiative: Select Arc Security Baseline
    Assignment Scope: Select All Arc Resources
    Assignment Name: Enter Arc-Security-Assignment
    Enable automatic remediation: Check this checkbox (REQUIRED)
    Remediation batch size: Enter 10
    Send alerts on policy violations: Check this checkbox for notifications
    Compliance: Arc Security Baseline initiative includes 15+ policies covering encryption, identity, logging, and network security. Automatic remediation deploys fixes to non-compliant resources. Batch size controls how many resources are remediated simultaneously.
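Initiative assignment can likewise be scripted; a dry-run sketch in which the initiative ID is a placeholder to look up (for example with `az policy set-definition list`):

```shell
# Dry-run sketch: assign a policy initiative (policy set) at
# resource-group scope, matching Step 4's assignment name.
ASSIGNMENT="Arc-Security-Assignment"
RG_NAME="rg-arc-hybrid"

# Print the command instead of running it; remove "echo" to execute.
echo az policy assignment create \
  --name "$ASSIGNMENT" \
  --resource-group "$RG_NAME" \
  --policy-set-definition "<initiative-name-or-id>"
```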
  5. Step 5: Configure Monitoring (Switch to Monitoring tab - Complete ALL fields)
    Click on the "Monitoring" tab

    Monitoring & Analytics section:
    Log Analytics Workspace Name: Enter law-arc-monitoring
    Workspace SKU: Select "Pay-As-You-Go (Per GB)"
    Data Retention (days): Select "90 days (recommended)"
    Data Collection Interval (seconds): Enter 60 (EXACT value required)

    Required checkboxes (MUST check both):
    ✓ Check "Enable VM Insights (performance monitoring)"
    ✓ Check "Enable Microsoft Sentinel (SIEM/SOAR)"

    Optional checkboxes (recommended):
    ✓ Check "Enable Change Tracking and Inventory"
    ✓ Check "Enable Update Management"
    Observability: 60-second collection interval provides near real-time monitoring. VM Insights tracks CPU, memory, disk, and network with process-level detail. Sentinel uses AI to detect threats and provides automated response playbooks.
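A matching Log Analytics workspace can be created from the CLI; a dry-run sketch using the Pay-As-You-Go SKU and 90-day retention from Step 5:

```shell
# Dry-run sketch: Log Analytics workspace for Arc monitoring.
WORKSPACE="law-arc-monitoring"
RG_NAME="rg-arc-hybrid"

# Print the command instead of running it; remove "echo" to execute.
echo az monitor log-analytics workspace create \
  --resource-group "$RG_NAME" \
  --workspace-name "$WORKSPACE" \
  --sku PerGB2018 \
  --retention-time 90
```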
  6. Step 6: Create Configuration and Validate
    Final review and deployment:

    ✓ Review checklist across all tabs:
    Basics: rg-arc-hybrid, East US, Environment & Cost Center tags
    Servers: Minimum 3 servers registered
    Kubernetes: Cluster name, all 3 checkboxes, GitOps config
    Policy: Arc Security Baseline, All Arc Resources scope, remediation enabled
    Monitoring: law-arc-monitoring, 60 seconds, VM Insights & Sentinel enabled

    Deployment steps:
    1. Scroll to bottom of console
    2. Click blue "Create Arc Configuration" button
    3. Fix any validation errors by revisiting mentioned tabs
    4. Click "Validate Configuration" button below for your score
    5. Use diagnostic buttons to test Arc resources, compliance, policy status, and analytics
    Testing: Diagnostic buttons verify: (1) Test Arc Resources - shows connected servers/clusters, (2) Compliance Check - validates security score, (3) Policy Status - shows policy assignments, (4) View Analytics - displays monitoring metrics.

Lab 21: IBM Cloud Satellite Edge Deployment
Edge Computing / Advanced

Lab Scenario: Edge Computing Platform Deployment

Business Context: You are the Edge Computing Architect at SmartFactory Inc., deploying IBM Cloud services to 50+ manufacturing plants worldwide. Each location requires local compute for real-time analytics while maintaining connection to IBM Cloud for ML model training.

Your Mission: Deploy IBM Cloud Satellite to create a distributed cloud environment. Install Red Hat OpenShift for container orchestration, deploy Cloud Pak for Data for AI/ML workloads, and configure edge analytics for IoT data processing at each factory location.

Key Objectives:

  • Create Satellite location "factory-edge-01" managed from Dallas
  • Attach minimum 6 hosts (3 control plane, 3 workers)
  • Deploy OpenShift 4.13 cluster with 3 worker nodes
  • Install Cloud Pak for Data with Watson Studio
  • Configure Satellite Link endpoints for cloud connectivity
  • Deploy MQTT broker for IoT device communication

Success Criteria: Satellite location must be healthy with OpenShift running, Cloud Pak services active, and MQTT broker processing IoT messages with cloud sync enabled.

Step-by-Step Instructions

  1. Step 1: Create Satellite Location (Complete ALL fields in Location tab)
    Ensure you are on the "Location" tab

    Satellite Location Details section:
    Location name: Enter factory-edge-01 (EXACT name required)
    Managed from: Select Dallas (us-south)
    Availability Zones: Enter 3
    Location type: Select "Edge Location"
    Resource group: Select "Production"
    Concept: Satellite Location is a logical representation of your physical site (factory, datacenter, or edge). "Managed from Dallas" means the IBM Cloud Dallas region hosts the control plane that manages your edge location. 3 zones enable high availability by distributing hosts across failure domains.
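The same location can be created with the IBM Cloud CLI; a dry-run sketch assuming the Satellite CLI plug-in is installed and you are logged in:

```shell
# Dry-run sketch: create the Satellite location managed from Dallas.
LOCATION_NAME="factory-edge-01"

# Print the command instead of running it; remove "echo" to execute.
# Hosts are attached afterwards using the registration script from
# "ibmcloud sat host attach".
echo ibmcloud sat location create \
  --name "$LOCATION_NAME" \
  --managed-from dal
```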
  2. Step 2: Attach Infrastructure Hosts (Switch to Hosts tab - Complete fields)
    Click on the "Hosts" tab

    Infrastructure Hosts section (MUST provide 6+ total hosts):
    RHEL 8 Server IPs: Enter 192.168.1.10,192.168.1.11,192.168.1.12
    AWS EC2 Instance IDs: Enter i-abc123def,i-ghi789jkl,i-mno456pqr
    AWS Region for EC2 instances: Select "us-east-1"
    On-Premises Hostnames: (Optional if you have 6+ from above)
    Host assignment: Select "Automatic (recommended)"
    Automatically assign hosts: Check this checkbox
    Requirements: You need a minimum of 6 hosts in total (3 for the Satellite control plane + 3 for OpenShift workers). Each host needs 4+ vCPU, 16+ GB RAM, and 100+ GB disk. Automatic assignment distributes hosts optimally across the control plane and worker pools.
  3. Step 3: Deploy OpenShift Cluster (Switch to OpenShift tab - Complete ALL fields)
    Click on the "OpenShift" tab

    Red Hat OpenShift Cluster section:
    Cluster name: Enter ocp-satellite-prod
    OpenShift Version: Select 4.13 (recommended LTS)
    Worker pool name: Enter default-pool
    Worker Nodes: Enter 3
    Worker vCPUs per node: Enter 8
    Worker RAM per node (GB): Enter 32
    Pod subnet CIDR: Enter 10.128.0.0/14
    Service subnet CIDR: Enter 172.30.0.0/16
    Enable OpenShift monitoring: Check this checkbox
    Architecture: OpenShift 4.13 is an LTS (Long Term Support) version providing stability for production. Control plane is managed by IBM Cloud while worker nodes run on your Satellite location hosts. Pod and service subnets must not overlap with your existing network ranges.
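The non-overlap requirement for pod and service subnets can be checked up front with Python's standard ipaddress module; a small sketch in which the on-premises site range is a hypothetical example:

```python
# Verify that the OpenShift pod and service CIDRs from Step 3 do not
# overlap with each other or with an existing site network.
import ipaddress

pod_cidr = ipaddress.ip_network("10.128.0.0/14")
service_cidr = ipaddress.ip_network("172.30.0.0/16")
site_cidr = ipaddress.ip_network("192.168.1.0/24")  # hypothetical factory LAN

networks = [pod_cidr, service_cidr, site_cidr]
for i, a in enumerate(networks):
    for b in networks[i + 1:]:
        # ip_network.overlaps() is True if any address is shared
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("No CIDR overlaps detected")
```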
  4. Step 4: Install Cloud Pak for Data (Switch to Services tab - Complete fields)
    Click on the "Services" tab

    Cloud Pak for Data section:
    Cloud Pak Version: Select "4.7 (stable)" or latest
    Deployment profile: Select "Medium (production)"

    Required checkboxes (MUST check both):
    ✓ Check "Enable Watson Studio (data science)"
    ✓ Check "Enable Data Virtualization"

    Optional checkboxes (recommended):
    ✓ Check "Enable Watson Machine Learning"
    ✓ Check "Enable DataStage (ETL)"

    Storage Size (TB): Enter 1
    Storage class: Select "Rook-Ceph (recommended)"
    Use Cases: Watson Studio provides Jupyter notebooks, AutoAI for automated model creation, and model deployment capabilities. Data Virtualization enables federated SQL queries across multiple data sources without ETL. 1TB storage is minimum for Cloud Pak for Data with Watson services.
  5. Step 5: Configure Satellite Link (Switch to Link tab - Complete ALL fields)
    Click on the "Link" tab

    Satellite Link Endpoints section:
    Endpoint Type: Select Cloud Services (location to cloud)
    Destination service: Select "Cloud Object Storage"
    Endpoint name: Enter cos-edge-link
    Enable DNS resolution: Check this checkbox (REQUIRED)
    Enable TLS encryption: Check this checkbox (REQUIRED)
    TLS version: Select "TLS 1.3 (recommended)"

    Edge Applications section:
    Deploy MQTT Broker: Check this checkbox (REQUIRED)
    MQTT port: Enter 1883
    Data Sync Interval (seconds): Enter 60
    Message retention (hours): Enter 24
    Deploy edge analytics containers: Check this checkbox
    Connectivity: Link endpoints create secure tunnels from your Satellite location to IBM Cloud services without opening inbound firewall ports. TLS 1.3 provides strongest encryption. MQTT broker handles IoT device messages, and 60-second sync interval uploads data to Cloud Object Storage every minute.
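If the MQTT broker were Eclipse Mosquitto (an assumption; the lab does not name a specific broker), the port choice from Step 5 would map to a configuration along these lines:

```conf
# mosquitto.conf sketch (assumes Eclipse Mosquitto as the broker)
# Plain MQTT listener on the port chosen in Step 5
listener 1883
# Retain queued messages across broker restarts
persistence true
persistence_location /mosquitto/data/
```

Mosquitto has no direct "hours" retention directive; the 24-hour retention and 60-second cloud sync from Step 5 would be handled by the edge analytics pipeline that forwards messages to Cloud Object Storage.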
  6. Step 6: Create Satellite Location and Validate
    Final review and deployment:

    ✓ Review checklist across all tabs:
    Location: factory-edge-01, Dallas, 3 zones
    Hosts: Minimum 6 hosts from RHEL/AWS/on-premises
    OpenShift: Version 4.13, 3 workers, 8 vCPU, 32GB RAM each
    Services: Cloud Pak 4.7, Watson Studio + Data Virt, 1TB storage
    Link: Cloud Services endpoint, DNS + TLS enabled, MQTT broker, 60s sync

    Deployment steps:
    1. Scroll to bottom of console
    2. Click blue "Create Satellite Location" button
    3. Fix any validation errors by revisiting mentioned tabs
    4. Click "Validate Configuration" button below for your score
    5. Use diagnostic buttons to test Link status, edge analytics, OpenShift, and Cloud Pak
    Testing: Diagnostic buttons verify: (1) Test Link Status - checks Satellite connectivity, (2) Edge Analytics - shows MQTT throughput, (3) OpenShift Status - validates cluster health, (4) Cloud Pak Status - confirms Watson services are running.