GenAI Expert Labs - Module 6
Multi-modal systems, security testing, and architecture design.
Build multi-modal AI pipelines, conduct security red-teaming, and design production AI architectures through hands-on labs.
Lab 16: RAG System Configuration
RAG Architecture / Expert
Scenario: Enterprise Knowledge Base RAG
LegalTech Corp needs a Retrieval-Augmented Generation system for their 50,000 legal documents. Configure all components including embeddings, vector database, chunking strategy, retrieval parameters, and LLM settings. The system must handle 1,000 queries/hour with 95%+ relevance.
Learning Objectives:
Embedding Models: Select appropriate embedding dimensions and models
Chunking Strategy: Configure optimal chunk sizes and overlap
Vector Database: Set up indexing and search parameters
Based on your configuration, calculate the following metrics:
• Total chunks: 50,000 docs × 15,000 avg tokens per doc ÷ chunk_size
• Embedding cost: total_tokens ÷ 1M × embed_price
• Monthly query cost: 1,000 queries/hr × 24 hrs × 30 days × per-query token cost
• Expected search latency: HNSW ≈ 50 ms, IVF ≈ 100 ms, Flat (brute force) ≈ 500 ms
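The metric formulas above can be sketched as a back-of-envelope calculator. The chunk size, embedding price, per-query token count, and LLM price below are illustrative assumptions, not values given by the lab:

```python
# Back-of-envelope calculator for the Lab 16 metrics.
# Defaults marked "assumed" are placeholders, not vendor prices.
def rag_metrics(
    num_docs=50_000,
    avg_tokens_per_doc=15_000,
    chunk_size=512,            # tokens per chunk (assumed)
    embed_price_per_1m=0.02,   # $/1M tokens (assumed embedding price)
    queries_per_hour=1_000,
    tokens_per_query=2_000,    # prompt + retrieved context + answer (assumed)
    llm_price_per_1m=2.50,     # blended $/1M tokens (assumed)
):
    total_tokens = num_docs * avg_tokens_per_doc
    total_chunks = total_tokens // chunk_size
    embed_cost = total_tokens / 1_000_000 * embed_price_per_1m
    monthly_query_tokens = queries_per_hour * 24 * 30 * tokens_per_query
    monthly_query_cost = monthly_query_tokens / 1_000_000 * llm_price_per_1m
    return total_chunks, embed_cost, monthly_query_cost
```

Under these assumptions the corpus is 750M tokens (~1.46M chunks), a one-time embedding run costs about $15, and query-time LLM usage runs about $3,600/month.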
Lab 17: LLM Security Red Team
Security / Critical
Scenario: AI Security Assessment
BankSecure AI deployed a customer service chatbot that handles sensitive financial queries. Conduct a red team assessment to identify vulnerabilities, craft attack vectors, and design defensive measures to harden the system.
Learning Objectives:
Attack Taxonomy: Understand prompt injection, jailbreaks, data exfiltration
Vulnerability Testing: Craft and test attack payloads
Identify 3 attack vectors, craft test payloads for each, and design corresponding defense mechanisms. Each attack must include the vulnerability type, sample payload, and mitigation strategy.
Known Attack Categories
• Direct Prompt Injection
• Indirect Prompt Injection
• Jailbreak Attempts
• Data Exfiltration
• Model Extraction
• Denial of Service
• PII Extraction
• System Prompt Leakage
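A first line of defense against several of these categories is a gateway-level input filter. The sketch below is a naive keyword heuristic, illustrative only; the pattern list is an assumption, and production systems pair filters like this with an LLM-based classifier:

```python
import re

# Naive illustrative input filter for a chatbot gateway.
# The pattern list is a small assumed sample, not a complete taxonomy.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"you are now (dan|developer mode)",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Keyword filters are trivially bypassed by paraphrasing, which is exactly why the lab asks for layered defenses rather than a single control.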
Attack Vectors (3 required)
Defense Configuration
Configure defenses for each identified vulnerability. Each defense must address the specific attack vector.
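One way to pair each attack vector with its mitigations is a simple defense matrix. The category keys follow the lab's taxonomy; the specific controls listed are assumptions for illustration:

```python
# Illustrative defense matrix mapping attack vectors to layered mitigations.
# Control names are assumed examples, not a prescribed configuration.
DEFENSES = {
    "direct_prompt_injection": [
        "input pattern filtering",
        "instruction hierarchy in system prompt",
    ],
    "data_exfiltration": [
        "output PII redaction",
        "response length limits",
    ],
    "system_prompt_leakage": [
        "canary strings in system prompt",
        "output scanning for prompt fragments",
    ],
}

def defenses_for(attack: str) -> list[str]:
    """Look up mitigations for an attack vector; unknown vectors escalate."""
    return DEFENSES.get(attack, ["escalate to security review"])
```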
Lab 18: LLM Token & Cost Calculator
Cost Analysis / Expert
Scenario: Production Chatbot Cost Estimation
TechSupport Inc. is launching an AI chatbot and needs to estimate operational costs. Using the provided traffic data and model pricing, calculate token usage and select the most cost-effective model that stays within budget.
Learning Objectives:
Model Selection: Choose optimal model within budget
System Prompts: Account for per-conversation overhead
Token Cost Calculator
📋 Task: Calculate LLM Operational Costs
Using the scenario data and model pricing below, calculate total daily tokens, select the most cost-effective model under budget, and compute the daily cost. Each answer has a single exact correct value.
Scenario Data
Your chatbot receives the following daily traffic:
• Daily conversations: 5,000
• Avg messages per conversation: 6
• Avg input tokens per message: 150
• Avg output tokens per message: 200
• System prompt tokens: 500
• Budget limit: $800/day
Model Pricing (per 1M tokens)

Model             Input    Output   Context
GPT-4 Turbo       $10.00   $30.00   128K
GPT-4o            $2.50    $10.00   128K
GPT-3.5 Turbo     $0.50    $1.50    16K
Claude 3 Sonnet   $3.00    $15.00   200K
Claude 3 Haiku    $0.25    $1.25    200K
Task 1: Calculate Total Daily Messages
How many total messages does the system process per day?
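Task 1 follows directly from the scenario data (system prompts are not messages, so they do not enter this count):

```python
# Task 1: total daily messages, straight from the scenario numbers.
daily_conversations = 5_000
messages_per_conversation = 6
total_daily_messages = daily_conversations * messages_per_conversation  # → 30,000
```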