Multi-Cloud Integration - Module 6
Master intermediate cloud architectures across GCP, Azure, AWS, and IBM Cloud. Build cross-cloud solutions and implement hybrid cloud strategies.
Intermediate-level labs focusing on multi-cloud architectures, hybrid deployments, and cross-cloud integration patterns.
Lab 16: GCP Kubernetes Engine & Cloud Run
GCP / Intermediate
Scenario: Serverless Container Orchestration
TechInnovate needs to modernize their microservices architecture using GCP's container services. Deploy a GKE cluster with auto-scaling, implement Cloud Run for serverless workloads, configure Anthos Service Mesh for service-to-service communication, and set up Cloud Build for CI/CD. Integrate with Cloud SQL and Cloud Storage for data persistence.
Learning Objectives:
GKE Management: Deploy and manage Kubernetes clusters on GCP
Service Mesh: Configure Anthos for microservices communication
CI/CD Pipeline: Set up Cloud Build automation
📋 Step-by-Step Instructions
Step 1: Create GKE Cluster
🎯 Goal: Deploy a regional GKE cluster with auto-scaling
📝 What to do:
1. Fill in the cluster name: tech-cluster
2. Select region: us-central1 (the goal calls for a regional cluster, so pick a region rather than a single zone)
3. Set number of nodes: 3
4. Select machine type: e2-standard-4
5. Enable cluster autoscaling checkbox
6. Click "Create Cluster" button
💡 Tip: GKE automatically manages the control plane for you. You only configure worker nodes.
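The console steps above map to a single gcloud command. The sketch below only prints the command (a dry run), since creating a cluster requires an authenticated gcloud session; note that in a regional cluster `--num-nodes` is the count per zone, so 3 nodes across three zones yields 9 total.

```shell
# Hypothetical gcloud equivalent of Step 1; printed rather than executed.
CLUSTER=tech-cluster
REGION=us-central1

CMD="gcloud container clusters create $CLUSTER \
  --region $REGION \
  --num-nodes 3 \
  --machine-type e2-standard-4 \
  --enable-autoscaling --min-nodes 1 --max-nodes 10"

# Dry run: print the command; run it for real only with authenticated gcloud.
echo "$CMD"
```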
Step 2: Configure Node Pool
🎯 Goal: Set up autoscaling parameters for the node pool
📝 What to do:
1. Set minimum nodes: 1
2. Set maximum nodes: 10
3. Select disk type: pd-standard
4. Set disk size: 100 GB
5. Click "Configure Node Pool" button
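The same node-pool settings can be sketched with the CLI. The pool name `standard-pool` is hypothetical, and the command is only printed here as a dry run; in a regional cluster the min/max node counts also apply per zone.

```shell
# Hypothetical CLI sketch of Step 2: a node pool with autoscaling bounds.
CMD="gcloud container node-pools create standard-pool \
  --cluster tech-cluster --region us-central1 \
  --enable-autoscaling --min-nodes 1 --max-nodes 10 \
  --disk-type pd-standard --disk-size 100"

echo "$CMD"   # dry run; requires an authenticated gcloud to apply
```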
Step 3: Deploy Cloud Run Service
🎯 Goal: Deploy a serverless container service
📝 What to do:
1. Enter service name: api-service
2. Select region: us-central1
3. Container image: gcr.io/tech-innovate/api:v1
4. Set memory: 512 MB
5. Set CPU: 1 vCPU
6. Select authentication: Allow unauthenticated
7. Click "Deploy Service" button
💡 Tip: Cloud Run automatically scales from 0 to N based on traffic.
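For reference, the Cloud Run deployment in Step 3 can be sketched as a single gcloud command, again printed as a dry run (the image path comes from this lab's scenario):

```shell
# Dry-run sketch of the Cloud Run deployment.
CMD="gcloud run deploy api-service \
  --image gcr.io/tech-innovate/api:v1 \
  --region us-central1 \
  --memory 512Mi --cpu 1 \
  --allow-unauthenticated"

echo "$CMD"
```

`--allow-unauthenticated` matches the "Allow unauthenticated" choice in the console; omit it to require IAM-authenticated invocations.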
Step 4: Configure Service Mesh
🎯 Goal: Enable Istio-based service mesh
📝 What to do:
1. Enable Anthos Service Mesh checkbox
2. Select mesh profile: asm-managed
3. Enable traffic management
4. Enable observability features
5. Click "Enable Service Mesh" button
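Enabling managed Anthos Service Mesh can be sketched with the fleet commands below; the membership name `tech-cluster` is an assumption (it must match the cluster's fleet membership), and both commands are only printed as a dry run.

```shell
# Dry-run sketch: enable managed Anthos Service Mesh on the fleet, then
# put the cluster's membership under automatic (managed) control.
CMD1="gcloud container fleet mesh enable"
CMD2="gcloud container fleet mesh update \
  --management automatic \
  --memberships tech-cluster"

echo "$CMD1"; echo "$CMD2"
```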
Step 5: Set Up Cloud Build
🎯 Goal: Create CI/CD pipeline with Cloud Build
📝 What to do:
1. Enter build trigger name: main-build
2. Select repository: github.com/tech/app
3. Branch pattern: ^main$
4. Build configuration: cloudbuild.yaml
5. Click "Create Build Trigger" button
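The trigger points at a `cloudbuild.yaml` in the repository. A minimal sketch of such a file is written below; the build/push/deploy steps and the `$SHORT_SHA` tagging are illustrative, not this lab's exact pipeline.

```shell
# Write a minimal cloudbuild.yaml sketch (image and service names taken
# from this lab; the pipeline structure itself is illustrative).
cat > cloudbuild.yaml <<'EOF'
steps:
  # Build and push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/tech-innovate/api:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/tech-innovate/api:$SHORT_SHA']
  # Deploy the new revision to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'api-service',
           '--image', 'gcr.io/tech-innovate/api:$SHORT_SHA',
           '--region', 'us-central1']
images:
  - 'gcr.io/tech-innovate/api:$SHORT_SHA'
EOF
```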
Step 6: Integrate Cloud SQL
🎯 Goal: Connect GKE to Cloud SQL database
📝 What to do:
1. Enter instance ID: tech-postgres
2. Database version: POSTGRES_14
3. Instance type: db-g1-small
4. Enable private IP checkbox
5. Enter database name: techdb
6. Click "Create Instance" button
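A CLI sketch of the Cloud SQL setup, printed as a dry run; the VPC network name `default` is an assumption, and `--no-assign-ip` is what makes the instance private-IP only.

```shell
# Dry-run sketch of Step 6: private-IP Postgres instance plus database.
CMD1="gcloud sql instances create tech-postgres \
  --database-version POSTGRES_14 \
  --tier db-g1-small \
  --region us-central1 \
  --network default --no-assign-ip"
CMD2="gcloud sql databases create techdb --instance tech-postgres"

echo "$CMD1"; echo "$CMD2"
```

From GKE, pods typically reach a private-IP instance over the shared VPC network, commonly via the Cloud SQL Auth Proxy sidecar.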
Lab 17: Azure Arc Hybrid Management
Azure / Intermediate
Scenario: Unified Hybrid Cloud Management
GlobalManufacturing has resources across on-premises data centers, Azure, and AWS. Implement Azure Arc to manage this hybrid environment centrally: enable Arc on servers across all locations, deploy Azure policies for compliance, configure Azure Monitor for unified observability, and implement GitOps for Kubernetes clusters.
Learning Objectives:
Azure Arc Setup: Enable Arc for servers and Kubernetes
Policy Management: Deploy Azure Policy across hybrid resources
GitOps Implementation: Configure Flux for K8s deployments
Unified Monitoring: Set up Azure Monitor for all resources
📋 Step-by-Step Instructions
Step 1: Onboard Servers to Azure Arc
🎯 Goal: Connect on-premises and AWS servers to Azure Arc
📝 What to do:
1. Select servers to onboard (check at least 2)
2. Select resource group: Arc-Hybrid-RG
3. Choose location: East US
4. Add tags: Environment=Production
5. Click "Onboard Servers" button
💡 Tip: Azure Arc extends Azure management to any infrastructure.
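Behind the console flow, each server runs the Connected Machine agent's `connect` command. The sketch below is a dry run with placeholder tenant and subscription IDs:

```shell
# Dry-run sketch: the azcmagent command executed on each server being
# onboarded (IDs are placeholders; Arc's generated script fills them in).
CMD="azcmagent connect \
  --resource-group Arc-Hybrid-RG \
  --location eastus \
  --tenant-id <TENANT_ID> \
  --subscription-id <SUBSCRIPTION_ID> \
  --tags Environment=Production"

echo "$CMD"
```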
Step 2: Connect Kubernetes Cluster
🎯 Goal: Register on-prem K8s cluster with Arc
📝 What to do:
1. Enter cluster name: onprem-cluster
2. Select subscription: Production-Sub
3. Resource group: Arc-Hybrid-RG
4. Location: East US
5. Click "Connect Cluster" button
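The equivalent CLI step uses the `connectedk8s` extension and a kubeconfig pointing at the on-prem cluster; printed here as a dry run:

```shell
# Dry-run sketch of Step 2: register the on-prem cluster with Azure Arc.
CMD="az connectedk8s connect \
  --name onprem-cluster \
  --resource-group Arc-Hybrid-RG \
  --location eastus"

echo "$CMD"
```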
Step 3: Assign Azure Policies
🎯 Goal: Apply compliance policies across hybrid resources
📝 What to do:
1. Select policy initiative: Azure Security Benchmark
2. Choose scope: Arc-Hybrid-RG
3. Set compliance level: Audit
4. Enable remediation task checkbox
5. Click "Assign Policy" button
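Assigning the initiative from the CLI can be sketched as below; initiatives are referenced by definition ID (shown as a placeholder), and the assignment name `asb-audit` is hypothetical.

```shell
# Dry-run sketch of Step 3: assign the security initiative at RG scope.
CMD="az policy assignment create \
  --name asb-audit \
  --scope /subscriptions/<SUB_ID>/resourceGroups/Arc-Hybrid-RG \
  --policy-set-definition <SECURITY_BENCHMARK_INITIATIVE_ID>"

echo "$CMD"
```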
Step 4: Configure GitOps
🎯 Goal: Set up Flux for automated K8s deployments
📝 What to do:
1. Enter configuration name: prod-gitops
2. Git repository URL: https://github.com/global/k8s-config
3. Branch: main
4. Path: /clusters/production
5. Sync interval: 5 minutes
6. Click "Enable GitOps" button
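The GitOps configuration maps to the `k8s-configuration flux` CLI. In this dry-run sketch, the kustomization name and `prune=true` are illustrative choices, not values from the lab UI:

```shell
# Dry-run sketch of Step 4: Flux configuration on the Arc-enabled cluster.
CMD="az k8s-configuration flux create \
  --resource-group Arc-Hybrid-RG \
  --cluster-name onprem-cluster \
  --cluster-type connectedClusters \
  --name prod-gitops \
  --url https://github.com/global/k8s-config \
  --branch main \
  --kustomization name=prod path=/clusters/production prune=true"

echo "$CMD"
```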
Step 5: Set Up Azure Monitor
🎯 Goal: Enable unified monitoring and logging
📝 What to do:
1. Create workspace name: arc-monitor-workspace
2. Select pricing tier: Per GB
3. Enable VM Insights checkbox
4. Enable Container Insights checkbox
5. Data retention: 90 days
6. Click "Create Workspace" button
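The workspace itself can be sketched with one CLI command (retention is given in days); VM Insights and Container Insights are then enabled per resource against this workspace.

```shell
# Dry-run sketch of Step 5: Log Analytics workspace with 90-day retention.
CMD="az monitor log-analytics workspace create \
  --resource-group Arc-Hybrid-RG \
  --workspace-name arc-monitor-workspace \
  --location eastus \
  --retention-time 90"

echo "$CMD"
```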
Step 6: Configure Alerts
🎯 Goal: Set up monitoring alerts
📝 What to do:
1. Alert rule name: high-cpu-alert
2. Select metric: CPU Percentage
3. Threshold: 80%
4. Evaluation frequency: 5 minutes
5. Select severity: Warning (Sev 2)
6. Add action group
7. Click "Create Alert Rule" button
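The alert rule from Step 6 can be sketched with the metrics-alert CLI; the scope and action-group IDs are placeholders, and the condition string uses Azure's `avg <metric> > <threshold>` shorthand.

```shell
# Dry-run sketch of Step 6: CPU alert at 80%, evaluated every 5 minutes.
CMD="az monitor metrics alert create \
  --name high-cpu-alert \
  --resource-group Arc-Hybrid-RG \
  --scopes <VM_RESOURCE_ID> \
  --condition 'avg Percentage CPU > 80' \
  --evaluation-frequency 5m --window-size 5m \
  --severity 2 \
  --action <ACTION_GROUP_ID>"

echo "$CMD"
```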
Step 7: Review Compliance Report
🎯 Goal: Verify compliance and download a detailed analysis report
📝 What to do:
1. Navigate to Compliance tab (click 4th tab)
2. View overall compliance score and metrics
3. Review non-compliant resources list
4. Click "Generate Compliance Report" button
5. Click the downloaded PDF link to open and analyze your compliance report
6. Review all configurations, policies, and recommendations in the PDF
💡 Tip: The PDF contains detailed analysis of all your configurations - review it carefully!
Azure Arc Management Dashboard (target end state):
Connected Servers: 3 (on-prem + AWS)
Kubernetes Clusters: 1 (Arc-enabled)
Policy Compliance: 87% (18 non-compliant resources)
GitOps Configs: Synced (prod-gitops active)
Monitor Workspace: Active (VM & Container Insights)
Alert Rules: 1 (high-cpu-alert)
Lab 18: Multi-Cloud Disaster Recovery (AWS + IBM Cloud)
Multi-Cloud / Advanced
Scenario: Cross-Cloud DR Strategy
FinanceCorp requires a robust disaster recovery solution spanning AWS (primary) and IBM Cloud (secondary). Design and implement a multi-cloud DR architecture with an RTO of 4 hours and an RPO of 1 hour. Establish cross-cloud network connectivity (for example, a site-to-site VPN between the AWS VPC and the IBM Cloud VPC), set up database replication between AWS RDS and IBM Cloud Databases, implement automated failover orchestration, and establish continuous data synchronization.
Learning Objectives:
DR Architecture: Design multi-cloud disaster recovery topology
Data Replication: Set up cross-cloud database synchronization
Step 6: Create Failover Plan
🎯 Goal: Configure automated failover orchestration
📝 What to do:
1. Failover plan name: finance-dr-plan
2. Select failover mode: Test (radio button)
3. Enable health checks checkbox
4. Health check interval: 5 minutes
5. Add failover notification email
6. Click "Create Failover Plan" button
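One common way to automate the DNS side of such a failover plan is Route 53 failover routing backed by a health check. In this dry-run sketch the domain, hosted-zone ID, and change-batch file are placeholders, and Route 53 health-check intervals are 10 or 30 seconds rather than the 5-minute interval used elsewhere in this plan:

```shell
# Dry-run sketch: health check on the primary (AWS) endpoint, plus a
# record change whose batch file defines PRIMARY (AWS) and SECONDARY
# (IBM Cloud) failover records.
CMD1="aws route53 create-health-check \
  --caller-reference finance-dr-plan-1 \
  --health-check-config Type=HTTPS,FullyQualifiedDomainName=api.example.com,Port=443,RequestInterval=30"
CMD2="aws route53 change-resource-record-sets \
  --hosted-zone-id <ZONE_ID> \
  --change-batch file://failover-records.json"

echo "$CMD1"; echo "$CMD2"
```

When the health check fails, Route 53 stops answering with the PRIMARY record and serves the SECONDARY one, redirecting traffic to the IBM Cloud endpoint.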
Step 7: Test DR Failover
🎯 Goal: Execute and validate DR test
📝 What to do:
1. Review DR dashboard status
2. Verify replication sync is current
3. Click "Initiate Test Failover" button
4. Wait for failover completion
5. Validate DR metrics
6. Click "Generate DR Report" button