Build RAG pipelines, configure fine-tuning workflows, and implement AI safety guardrails through hands-on exercises.
Master RAG architecture, model fine-tuning, and responsible AI practices.
Excellent RAG pipeline design!
Excellent fine-tuning configuration!
Excellent safety implementation!
Build a complete RAG (Retrieval-Augmented Generation) pipeline by placing components in the correct order and configuring all parameters properly.
Chunk overlap should be ~10-20% of chunk size to preserve context across chunk boundaries. Use text-embedding-3-small for cost efficiency or text-embedding-3-large for better accuracy.
Placing components in the wrong order (e.g., Retriever before Vector Store). Make sure chunk overlap is less than chunk size. Don't forget to select an embedding model and a vector store.
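The component order can be sketched end to end: chunk the documents, embed each chunk, load the vectors into a store, and only then retrieve. This is a minimal illustration, not a production pipeline — the `embed` function here is a toy character-frequency stand-in for a real embedding model such as text-embedding-3-small, and the "vector store" is just a Python list.

```python
# Minimal RAG pipeline sketch: chunk -> embed -> vector store -> retriever.
import math

def chunk_text(text, chunk_size=200, overlap=40):
    """Split text into chunks; overlap (~10-20% of chunk_size) preserves context."""
    assert overlap < chunk_size, "overlap must be smaller than chunk size"
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text):
    # Toy embedding: normalized letter-frequency vector.
    # A real pipeline would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, store, k=2):
    # Rank stored chunks by cosine similarity to the query, highest first.
    q = embed(query)
    scored = sorted(store, key=lambda item: -sum(a * b for a, b in zip(q, item[0])))
    return [text for _, text in scored[:k]]

# Build the pipeline in order, then query it.
docs = "Retrieval-augmented generation grounds model answers in retrieved context. " * 3
store = [(embed(c), c) for c in chunk_text(docs, chunk_size=80, overlap=12)]
top = retrieve("retrieved context", store, k=1)
```

Note that swapping the order (e.g., building the retriever before the store exists) has nothing to search against — the ordering constraint from the exercise is structural, not cosmetic.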
Configure a complete fine-tuning job for a language model, including dataset preparation, hyperparameter selection, and validation settings.
Start with a learning rate multiplier of 1.0 and adjust based on results. More epochs with smaller learning rates often yield better results than fewer epochs with higher rates.
{"messages": [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
Invalid JSONL syntax (missing commas, wrong quotes) — each line must be valid JSON. A learning rate multiplier that is too high (>2.0) can cause unstable training.
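Since each training line must be valid JSON in the chat format shown above, it helps to validate the file before submitting a job. A small sketch of such a check (the field names follow the `{"messages": [...]}` shape from the example; the role rules here are illustrative assumptions, not a specific API's requirements):

```python
# Sketch: validate chat-format JSONL training data line by line.
import json

KNOWN_ROLES = {"system", "user", "assistant"}

def validate_jsonl(lines):
    """Return a list of error strings; an empty list means the data passed."""
    errors = []
    for i, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append(f"line {i}: invalid JSON ({exc.msg})")
            continue
        messages = record.get("messages")
        if not isinstance(messages, list) or not messages:
            errors.append(f"line {i}: 'messages' must be a non-empty list")
            continue
        for m in messages:
            if m.get("role") not in KNOWN_ROLES:
                errors.append(f"line {i}: unknown role {m.get('role')!r}")
        if messages[-1].get("role") != "assistant":
            errors.append(f"line {i}: last message should be from 'assistant'")
    return errors

good = '{"messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello"}]}'
bad = '{"messages": [{"role": "user", "content": "Hi"}'  # missing closing brackets
problems = validate_jsonl([good, bad])
```

Running the validator on every line before uploading catches the missing-comma and wrong-quote errors the feedback above warns about.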
Write guardrail rules to detect and block 4 types of attacks: Prompt Injection, Jailbreak Attempts, PII Extraction, and Harmful Content requests.
Look for manipulation phrases like "ignore previous", "pretend you are", "disregard instructions". For PII attacks, detect requests for SSNs, credit card numbers, and passwords.
Rules that are too narrow miss attack variations. Always use "Block Request" for security threats. Don't forget to test each rule before moving to the next scenario.
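The four rule categories can be expressed as simple pattern matchers. This is a sketch only — the phrase lists below are illustrative seeds (real guardrails need far broader coverage, exactly the "rules too narrow" pitfall above), and the function names are assumptions, not a specific guardrail product's API.

```python
# Sketch: pattern-based guardrails for the four attack types in the exercise.
import re

GUARDRAILS = {
    "prompt_injection": re.compile(
        r"ignore (all |the )?previous|disregard (your |the )?instructions", re.I),
    "jailbreak": re.compile(
        r"pretend you are|act as if you have no (rules|restrictions)", re.I),
    "pii_extraction": re.compile(
        r"social security number|\bssn\b|credit card|password", re.I),
    "harmful_content": re.compile(
        r"how to (make|build) a (bomb|weapon)", re.I),
}

def check_request(prompt):
    """Return ('block', attack_type) if any rule matches, else ('allow', None)."""
    for attack, pattern in GUARDRAILS.items():
        if pattern.search(prompt):
            return ("block", attack)
    return ("allow", None)
```

Testing each rule in isolation, as the feedback suggests, is straightforward: feed one known-bad prompt per category and confirm the verdict is "block" with the right attack label, then confirm a benign prompt is allowed.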