Fundamentals of Large Language Models (LLMs)
- 01 - Introduction
  - 01 - Understanding grounding techniques for LLMs
  - 02 - Setting up your LLM environment
- 02 - 1. Basic LLM Hallucinations
  - 01 - What is a hallucination?
  - 02 - Hallucination examples
  - 03 - Comparing hallucinations across LLMs
  - 04 - Dangers of hallucinations
  - 05 - Challenge: Finding a hallucination
  - 06 - Solution: Finding a hallucination
- 03 - 2. Types of Hallucinations
  - 01 - Training LLMs on time-sensitive data
  - 02 - Poorly curated training data
  - 03 - Faithfulness and context
  - 04 - Ambiguous responses
  - 05 - Incorrect output structure
  - 06 - Declining to respond
  - 07 - Fine-tuning hallucinations
  - 08 - LLM sampling techniques and adjustments
  - 09 - Bad citations
  - 10 - Incomplete information extraction
- 04 - 3. Mitigating Hallucinations
  - 01 - Few-shot learning
  - 02 - Chain-of-thought reasoning
  - 03 - Structured templates
  - 04 - Retrieval-augmented generation
  - 05 - Updating LLM model versions
  - 06 - Model fine-tuning for mitigating hallucinations
  - 07 - Orchestrating workflows through model routing
  - 08 - Challenge: Automating ecommerce reviews with LLMs
  - 09 - Solution: Automating ecommerce reviews with LLMs
- 05 - 4. Detecting Hallucinations
  - 01 - Creating LLM evaluation pipelines
  - 02 - LLM self-assessment pipelines
  - 03 - Human-in-the-loop systems
  - 04 - Specialized models for hallucination detection
  - 05 - Building an evaluation dataset
  - 06 - Optimizing prompts with DSPy
  - 07 - Optimizing hallucination detection with DSPy
  - 08 - Real-world LLM user testing
  - 09 - Challenge: A more well-rounded AI trivia agent
  - 10 - Solution: A more well-rounded AI trivia agent
- 06 - 5. Hallucination Paper Review
  - 01 - Ragas evaluation paper
  - 02 - Hallucinations in large multilingual translation models
  - 03 - Do LLMs know what they don’t know?
  - 04 - Set the Clock: LLM temporal fine-tuning
  - 05 - Review of hallucination papers
- 07 - Conclusion
  - 01 - Continue your practice of grounding techniques for LLMs