Fine-Tuning Services: Introduction to Mistral's fine-tuning API and services
Conversational AI Interface: Introduction to Le Chat, Mistral's conversational AI tool
Latest Model Releases: Newest Mistral models and their features
Fine-Tuning Process: Steps and benefits of fine-tuning models
Hackathon Winning Projects: Examples of innovative uses of fine-tuning
Hands-On Demo Introduction: Introduction to the practical demo segment
Setting Up the Demo: Instructions for setting up and running the demo notebook
Creating Initial Prompt: Steps to create and test an initial prompt
Evaluation Pipeline: Setting up and running an evaluation pipeline for model performance
Improving Model Performance: Strategies and techniques to enhance model accuracy
Fine-Tuning and Results: Creating and evaluating a fine-tuned model
Two-Step Fine-Tuning: Explanation and demonstration of the two-step fine-tuning process
Conclusion: Final thoughts
Description:
Explore techniques for reducing hallucinations in large language models through fine-tuning in this hour-long webinar from Weights & Biases. Learn how to leverage out-of-domain data to improve Mistral AI models' ability to detect factual inconsistencies. Follow along with a hands-on demonstration that covers creating initial prompts, setting up evaluation pipelines, and implementing a two-step fine-tuning process using the Factual Inconsistency Benchmark dataset and Wikipedia summaries. Discover how Weights & Biases Weave can automate model evaluation, and see examples of innovative fine-tuning applications from hackathon-winning projects. Gain insights into Mistral's latest models, fine-tuning services, and conversational AI tools to enhance natural language inference in production environments.
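The evaluation-pipeline step described above can be sketched as a simple scoring loop: run a factual-inconsistency detector over labeled (source, summary) pairs and compute accuracy. This is a minimal illustrative sketch, not the webinar's actual code — the stub detector and the names `detect_inconsistency` and `EXAMPLES` are assumptions; in the real pipeline the detector would be a prompted (or fine-tuned) Mistral model, and logging/scoring could be automated with Weights & Biases Weave.

```python
# Minimal sketch of an evaluation pipeline for factual-inconsistency
# detection. The "model" below is a stub heuristic; in practice it would
# be an LLM call whose yes/no answer is parsed into a boolean.

def detect_inconsistency(summary: str, source: str) -> bool:
    """Stub detector: flag the summary as inconsistent if it contains
    any token absent from the source text."""
    return any(tok not in source.lower() for tok in summary.lower().split())

# Tiny labeled set in the style of a factual-inconsistency benchmark:
# (source text, summary, is_inconsistent label)
EXAMPLES = [
    ("the cat sat on the mat", "the cat sat on the mat", False),
    ("the cat sat on the mat", "the dog sat on the mat", True),
]

def evaluate(examples):
    """Compare detector predictions against labels and return accuracy."""
    correct = sum(
        detect_inconsistency(summary, source) == label
        for source, summary, label in examples
    )
    return correct / len(examples)

print(evaluate(EXAMPLES))  # 1.0 on this toy set
```

Swapping the stub for a real model call keeps the rest of the loop unchanged, which is what makes it easy to re-run the same evaluation before and after each fine-tuning step and compare scores.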
Fine-tuning LLMs to Reduce Hallucination - Leveraging Out-of-Domain Data