Fundamentals of Large Language Models (LLMs)
Last updated: 2024-09-27
Course price: ¥2.9
Course Contents (7 chapters, 43 lectures)
01 - Introduction
02 - 1. Basic LLM Hallucinations
03 - 2. Types of Hallucinations
- 01 - Training LLMs on time-sensitive data
- 02 - Poorly curated training data
- 03 - Faithfulness and context
- 04 - Ambiguous responses
- 05 - Incorrect output structure
- 06 - Declining to respond
- 07 - Fine-tuning hallucinations
- 08 - LLM sampling techniques and adjustments
- 09 - Bad citations
- 10 - Incomplete information extraction
04 - 3. Mitigating Hallucinations
- 01 - Few-shot learning
- 02 - Chain of thought reasoning
- 03 - Structured templates
- 04 - Retrieval-augmented generation
- 05 - Updating LLM model versions
- 06 - Model fine-tuning for mitigating hallucinations
- 07 - Orchestrating workflows through model routing
- 08 - Challenge: Automating ecommerce reviews with LLMs
- 09 - Solution: Automating ecommerce reviews with LLMs
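
Chapter 3 opens with prompt-side mitigations such as few-shot learning and structured templates. The sketch below shows the basic idea in Python, applied to the chapter's ecommerce-review challenge; the call_llm stub and all example reviews are illustrative placeholders, not material from the course.

# A minimal few-shot prompt builder (all names and examples are invented
# for illustration; swap call_llm for your actual client).

FEW_SHOT_EXAMPLES = [
    ("Arrived broken and the seller never replied.", "negative"),
    ("Exactly as described, fast shipping!", "positive"),
    ("It works, but the manual is confusing.", "mixed"),
]

def build_prompt(review: str) -> str:
    # Labeled examples first, then the new input: showing the model the
    # expected output format reduces structure and ambiguity errors.
    lines = ["Classify the sentiment of each product review.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append("Review: " + text)
        lines.append("Sentiment: " + label)
        lines.append("")
    lines.append("Review: " + review)
    lines.append("Sentiment:")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real model call.
    raise NotImplementedError("replace with a real model call")

if __name__ == "__main__":
    print(build_prompt("Battery died after two days."))

The same pattern extends to the structured-template lesson: the labeled examples double as a specification of the output format.
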
05 - 4. Detecting Hallucinations
- 01 - Creating LLM evaluation pipelines
- 02 - LLM self-assessment pipelines
- 03 - Human-in-the-loop systems
- 04 - Specialized models for hallucination detection
- 05 - Building an evaluation dataset
- 06 - Optimizing prompts with DSPy
- 07 - Optimizing hallucination detection with DSPy
- 08 - Real-world LLM user testing
- 09 - Challenge: A more well-rounded AI trivia agent
- 10 - Solution: A more well-rounded AI trivia agent
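
Chapter 4 centers on evaluation pipelines that score model answers against a gold dataset and flag likely hallucinations. Below is a minimal sketch of that loop, assuming a stand-in ask_model function and invented gold answers; neither comes from the course.

# A minimal hallucination-detection evaluation loop (illustrative only).

from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str
    gold_answer: str

EVAL_SET = [
    EvalCase("In what year did the first moon landing occur?", "1969"),
    EvalCase("What is the chemical symbol for gold?", "Au"),
]

def ask_model(question: str) -> str:
    # Placeholder: replace with a real model call.
    return ""

def evaluate(cases):
    # Score the model against gold answers and print suspected hallucinations.
    correct = 0
    for case in cases:
        answer = ask_model(case.question)
        if case.gold_answer.lower() in answer.lower():
            correct += 1
        else:
            print("FLAGGED:", case.question, "->", repr(answer),
                  "(expected", repr(case.gold_answer) + ")")
    return correct / len(cases)

if __name__ == "__main__":
    print("accuracy: {:.0%}".format(evaluate(EVAL_SET)))

Exact substring matching is deliberately crude here; the DSPy lessons in this chapter cover optimizing a stronger judge in its place.
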
06 - 5. Hallucination Paper Review
07 - Conclusion