Database Expert Labs

Master MongoDB document modeling, PostgreSQL query optimization, and Redis caching strategies. Write real commands, execute actual queries, and configure production-grade systems.

Database Administration - Module 8

Hands-on labs requiring you to write actual database commands and queries. No shortcuts!

Lab 22: MongoDB NoSQL Administration
NoSQL / Expert
Scenario: E-Commerce Product Catalog
ShopGlobal needs a scalable document database for its product catalog. You must write actual MongoDB commands to create the database, design document schemas, build indexes, and write aggregation pipelines. Each command is validated for correctness; you cannot proceed without proper syntax.

Learning Objectives:

  • Schema Design: Write insertOne() with all required fields
  • Indexing: Create compound indexes with correct syntax
  • Aggregation: Build multi-stage pipelines with $match, $group, $sort
  • Sharding: Configure shard keys and enable sharding

Step-by-Step Instructions

  1. Step 1: Create Database & Collection
    Switch to the target database and create the collection. (JSON schema validation can optionally be added via a validator; only the two commands below are checked.)
    Required Commands (type EXACTLY):
    1. use shopglobal
    2. db.createCollection("products")
    Tip: Database name must be shopglobal, collection must be products
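    For the JSON schema validation mentioned in this step, an optional validator can be passed to createCollection. A sketch (field rules taken from Step 2; the lab only requires the plain commands above):

    ```javascript
    use shopglobal
    db.createCollection("products", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          required: ["sku", "name", "price", "category", "inventory", "tags"],
          properties: {
            sku:       { bsonType: "string" },
            name:      { bsonType: "string" },
            price:     { bsonType: "number", minimum: 0, exclusiveMinimum: true }, // price > 0
            category:  { bsonType: "string" },
            inventory: { bsonType: "number", minimum: 0 },                         // inventory >= 0
            tags:      { bsonType: "array", items: { bsonType: "string" } }
          }
        }
      }
    })
    ```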
  2. Step 2: Insert Product Document
    Insert a product with ALL required fields using proper BSON types.
    Required Fields:
    sku: string (e.g., "PROD-001")
    name: string
    price: number (must be > 0)
    category: string
    inventory: number (must be >= 0)
    tags: array of strings
    Tip: Use db.products.insertOne({...}) with ALL 6 fields!
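    A complete insert that satisfies all six field requirements might look like this (the values are illustrative placeholders; any values meeting the constraints work):

    ```javascript
    db.products.insertOne({
      sku: "PROD-001",
      name: "Wireless Mouse",
      price: 29.99,                       // must be > 0
      category: "electronics",
      inventory: 150,                     // must be >= 0
      tags: ["wireless", "peripherals"]   // array of strings
    })
    ```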
  3. Step 3: Create Compound Index
    Create a compound index for efficient category + price queries.
    EXACT Syntax Required:
    db.products.createIndex({category: 1, price: -1}, {name: "idx_cat_price"})
    Tip: Index name MUST be idx_cat_price. Category ascending (1), price descending (-1).
  4. Step 4: Write Aggregation Pipeline
    Create a pipeline to calculate average price per category for in-stock items.
    Pipeline Requirements:
    • Stage 1: $match where inventory > 0
    • Stage 2: $group by category, calculate avgPrice using $avg
    • Stage 3: $sort by avgPrice descending (-1)
    Tip: Use _id: "$category" in the $group stage. All 3 stages are required!
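    The three required stages combine into one pipeline like this:

    ```javascript
    db.products.aggregate([
      { $match: { inventory: { $gt: 0 } } },                           // Stage 1: in-stock only
      { $group: { _id: "$category", avgPrice: { $avg: "$price" } } },  // Stage 2: average per category
      { $sort: { avgPrice: -1 } }                                      // Stage 3: highest average first
    ])
    ```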
  5. Step 5: Create Text Index
    Create a text index for full-text search on product names and tags.
    EXACT Syntax Required:
    db.products.createIndex({name: "text", tags: "text"})
    Tip: Text indexes allow $text queries for search functionality.
  6. Step 6: Enable Sharding
    Configure sharding with a hashed shard key on SKU.
    Required Commands:
    1. sh.enableSharding("shopglobal")
    2. sh.shardCollection("shopglobal.products", {sku: "hashed"})
    Tip: Hashed shard key ensures even data distribution.
Command Reference
use <database> - Switch database
db.createCollection("name") - Create collection
db.collection.insertOne({...}) - Insert document
db.collection.createIndex({...}) - Create index
db.collection.aggregate([...]) - Run pipeline

Lab 23: PostgreSQL Query Optimization
SQL / Expert
Scenario: Financial Transaction Analysis
FinTech Analytics needs optimized SQL for their transaction processing system. You must write complete SQL statements: EXPLAIN ANALYZE queries, CREATE INDEX statements, partitioned tables, and optimized CTEs. Each query is validated for syntax and required clauses.

Learning Objectives:

  • Query Analysis: Write EXPLAIN ANALYZE statements
  • Index Creation: Create composite indexes with INCLUDE clause
  • Partitioning: Design PARTITION BY RANGE tables
  • Optimization: Write CTEs with window functions

Step-by-Step Instructions

  1. Step 1: Analyze Query Performance
    Write an EXPLAIN ANALYZE query to identify performance issues.
    Required Query:
    Must include: EXPLAIN ANALYZE
    Must SELECT from: transactions table
    Must have: WHERE clause with date filter
    Must have: ORDER BY clause
    Tip: Example: EXPLAIN ANALYZE SELECT * FROM transactions WHERE created_at >= '2024-01-01' ORDER BY amount DESC;
  2. Step 2: Create Composite Index
    Create an index to eliminate sequential scans identified in Step 1.
    Required Syntax:
    • Must use: CREATE INDEX
    • Index name: idx_txn_account_date
    • Table: transactions
    • Columns: account_id, created_at DESC
    • Must have: INCLUDE (amount)
    Tip: INCLUDE creates a covering index; the query can be answered from the index alone!
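    Putting the requirements together, the full statement is:

    ```sql
    CREATE INDEX idx_txn_account_date
        ON transactions (account_id, created_at DESC)
        INCLUDE (amount);
    ```

    INCLUDE (amount) stores amount in the index leaf pages without making it part of the sort key, so queries selecting account_id, created_at, and amount can be satisfied by an index-only scan.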
  3. Step 3: Create Partitioned Table
    Design a table with range partitioning on timestamp column.
    Required Elements:
    CREATE TABLE transactions_part
    • Columns: id, account_id, amount, created_at TIMESTAMP
    • Must include: PARTITION BY RANGE (created_at)
    Tip: Partitioning enables partition pruning; only relevant partitions are scanned.
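    One possible DDL (the lab specifies only the column names and the TIMESTAMP type for created_at; the other column types here are reasonable assumptions):

    ```sql
    CREATE TABLE transactions_part (
        id         BIGINT,
        account_id BIGINT,
        amount     NUMERIC(12,2),
        created_at TIMESTAMP NOT NULL
    ) PARTITION BY RANGE (created_at);
    ```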
  4. Step 4: Create Table Partition
    Create a monthly partition for January 2024 data.
    Required Syntax:
    CREATE TABLE transactions_2024_01 PARTITION OF transactions_part
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01')
    Tip: The range is [start, end); the end date is exclusive!
  5. Step 5: Write Optimized CTE Query
    Create a Common Table Expression with window functions.
    Required Elements:
    • Must start with: WITH clause (CTE)
    • Must use: SUM() OVER (PARTITION BY
    • Must include: GROUP BY in the final query
    Tip: CTEs improve readability. Window functions avoid self-joins.
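    One query shape that meets all three requirements (the analysis question itself is up to you; per-account totals are just an example):

    ```sql
    WITH account_totals AS (
        SELECT account_id,
               amount,
               SUM(amount) OVER (PARTITION BY account_id) AS account_total
        FROM transactions
    )
    SELECT account_id,
           MAX(account_total) AS total_amount,
           COUNT(*)           AS txn_count
    FROM account_totals
    GROUP BY account_id;
    ```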
  6. Step 6: Verify Optimization with EXPLAIN
    Re-run EXPLAIN ANALYZE to confirm Index Scan instead of Seq Scan.
    Required:
    • Must use: EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
    • Query must reference the index: idx_txn_account_date
    Tip: BUFFERS shows cache hits/misses. Look for "Index Scan" in the output.
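    For example (account_id = 42 is a placeholder), a query whose columns are covered by the Step 2 index:

    ```sql
    EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
    SELECT account_id, created_at, amount
    FROM transactions
    WHERE account_id = 42
    ORDER BY created_at DESC;
    ```

    If the Step 2 index exists, the plan should report an Index Scan (or Index Only Scan) using idx_txn_account_date rather than a Seq Scan.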

Lab 24: Redis Commands & Data Structures
Cache / Expert
Scenario: Real-Time Session Management
SocialHub needs Redis for session management, caching, and real-time features. You must type actual Redis commands: SET, HSET, ZADD, PUBLISH, and configuration commands. Each command is validated for correct syntax and required parameters.

Learning Objectives:

  • String Operations: SET with EX/PX expiration options
  • Hash Operations: HSET for session data storage
  • Sorted Sets: ZADD for rate limiting and leaderboards
  • Pub/Sub: SUBSCRIBE and PUBLISH for real-time messaging

Step-by-Step Instructions

  1. Step 1: Configure Memory & Eviction
    Set Redis memory limit and eviction policy using CONFIG SET.
    Required Commands:
    1. CONFIG SET maxmemory 256mb
    2. CONFIG SET maxmemory-policy allkeys-lru
    Tip: Both commands required. allkeys-lru evicts least recently used keys when memory is full.
  2. Step 2: Create Session with TTL
    Store a user session using SET with expiration.
    Required Syntax:
    SET session:user:1001 '{"user_id":1001,"role":"admin"}' EX 3600

    • Key format: session:user:<id>
    • Must have: EX (seconds) or PX (milliseconds)
    • TTL must be: 3600 seconds (1 hour)
    Tip: Use EX for seconds, PX for milliseconds. Quote the JSON value; single quotes avoid escaping the inner double quotes.
  3. Step 3: Store User Profile in Hash
    Use HSET to store structured user profile data.
    Required Command:
    HSET user:1001:profile name "John Doe" email "john@example.com" role "admin"

    • Key format: user:<id>:profile
    • Must have ALL fields: name, email, role
    Tip: HSET accepts multiple field-value pairs in one command.
  4. Step 4: Implement Rate Limiter
    Use sorted set with timestamps for sliding window rate limiting.
    Required Commands:
    1. ZADD ratelimit:api:192.168.1.1 <timestamp> <request_id>
    2. ZREMRANGEBYSCORE ratelimit:api:192.168.1.1 0 <old_timestamp>
    3. ZCARD ratelimit:api:192.168.1.1

    • Key format: ratelimit:api:<ip>
    Tip: Score = timestamp, member = request ID. ZCARD counts requests in the window.
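    A concrete run of the three commands (timestamps and request IDs are illustrative; in practice the score is the current Unix time):

    ```shell
    # Record a request observed at time 1700000060
    ZADD ratelimit:api:192.168.1.1 1700000060 "req-123"
    # Drop entries older than the 60-second window (now - 60 = 1700000000)
    ZREMRANGEBYSCORE ratelimit:api:192.168.1.1 0 1700000000
    # Count the requests remaining in the window
    ZCARD ratelimit:api:192.168.1.1
    ```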
  5. Step 5: Setup Pub/Sub Channel
    Create a notification channel and publish a message.
    Required Commands:
    1. SUBSCRIBE notifications:user:1001
    2. PUBLISH notifications:user:1001 '{"type":"message","content":"Hello"}'

    • Channel format: notifications:user:<id>
    Tip: SUBSCRIBE blocks until a message is received. Use a separate connection for PUBLISH.
  6. Step 6: Create Leaderboard with ZADD
    Build a game leaderboard using sorted sets with scores.
    Required Commands:
    1. ZADD leaderboard:game1 1500 "player1" 2300 "player2" 1800 "player3"
    2. ZREVRANGE leaderboard:game1 0 2 WITHSCORES

    • Must add at least 3 players with scores
    • ZREVRANGE gets top players (highest scores first)
    Tip: ZREVRANGE 0 2 returns the top 3 (indices 0, 1, 2). WITHSCORES includes the scores.
Command Reference
SET key value [EX seconds] - Set string
HSET key field value [field value ...] - Set hash
ZADD key score member [score member ...] - Add to sorted set
PUBLISH channel message - Publish message
CONFIG SET param value - Set config