Chapters:
1. Function Calling - Cheap, Fast, Local and Enterprise
2. Video Overview
3. Tool Use Flow Chart
4. Function preparation tips
5. Resources: trelis.com/ADVANCED-inference
6. Code walk-through for function / tool preparation
7. Prompt preparation and recursive tool use
8. GPT4o-mini tool use performance
9. Zero-shot prompting and Runpod Phi-3 endpoint setup
10. Phi-3 Mini zero-shot performance
11. Parallel function calling with Phi-3
12. Low-latency tool use with Groq - zero shot
13. Groq Llama 3 8B zero-shot tool use performance
14. Groq Llama 3 8B fine-tune performance
15. Groq Llama 3 70B fine-tune performance
16. Local Phi-3 inference on a Mac with llama.cpp
17. Final tool use tips
18. Resources
Description:
Explore advanced techniques for leveraging Large Language Model (LLM) tools in this comprehensive video tutorial. Dive into function calling methodologies for cheap, fast, local, and enterprise applications. Learn about tool use flow charts, function preparation tips, and code walk-throughs for effective tool integration. Discover how to implement prompt preparation and recursive tool use strategies. Evaluate the performance of various models, including GPT4o-mini, Phi-3 Mini, and Llama 3, in zero-shot and fine-tuned scenarios. Gain insights into parallel function calling, low-latency tool use with Groq, and local inference on a Mac using llama.cpp. Access valuable resources and receive final tool use tips to enhance your LLM tool utilization skills.
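The function calling workflow described above typically involves two pieces: a JSON-schema tool definition the model reads, and a dispatcher that routes the model's structured tool calls to local Python functions. The sketch below illustrates that pattern in the OpenAI-style format; `get_weather`, its schema, and the simulated model output are illustrative assumptions, not code from the video.

```python
import json

# Hypothetical example tool (an assumption for illustration; the video's
# actual functions are not reproduced here).
def get_weather(city: str) -> str:
    """Return a dummy weather report for the given city."""
    return f"Sunny in {city}"

# OpenAI-style tool definition: a JSON-schema description of the function
# that the model uses to decide when and how to emit a tool call.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def execute_tool_call(call: dict, registry: dict) -> str:
    """Dispatch a model-emitted tool call to the matching local function."""
    fn = registry[call["name"]]
    args = json.loads(call["arguments"])  # models return arguments as a JSON string
    return fn(**args)

# Simulated model output; a real API response would carry this structure.
call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
print(execute_tool_call(call, {"get_weather": get_weather}))
```

For recursive tool use, the result string would be appended to the conversation as a tool message and the model called again, looping until it answers without requesting another tool.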

LLM Tool Use - GPT4o-mini, Groq, and Llama.cpp

Trelis Research