Chapters:
1. On-Device LLM
2. Octopus v2 Function Calling (Apple, Google)
3. CODE: Anthropic, Cohere Function Calling
4. Implications for NVIDIA, Microsoft?
Description:
Learn about cutting-edge developments in on-device Large Language Models (LLMs) in this technical video on functional token fine-tuning and the Octopus v2 framework. Dive into Stanford University's research on efficient function calling for edge devices such as iPhones and Pixels using Gemma 2B. Examine practical code implementations of function calling across major AI platforms, including OpenAI, Anthropic's Claude 3, and Cohere's Command R+. Understand how functional tokens improve the energy efficiency of LLM function calls, and explore the broader implications for industry leaders such as NVIDIA and Microsoft. Master the technical aspects of building AI agents with improved accuracy and inference speed through hands-on demonstrations and real-world applications.
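
The function-calling code is walked through live in the video rather than published on this page. As a rough illustration of the cloud-API pattern it covers, the sketch below uses the OpenAI Python SDK's tools parameter; the Anthropic (Claude 3) and Cohere (Command R+) chat APIs accept an analogous list of tool schemas. The get_weather tool, model name, and user query are placeholders chosen for this sketch, not taken from the video.

```python
# Hypothetical example of cloud-side function calling: the model receives a tool
# schema and returns a function name plus JSON arguments instead of free text.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # any tool-capable chat model
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)  # e.g. get_weather {"city": "Berlin"}
```

The Octopus v2 approach discussed in the video instead fine-tunes a small on-device model (Gemma 2B) so that each function is represented by a single learned "functional token" that the model emits in place of a spelled-out function name, shortening both prompt and output and thereby reducing energy per call. A minimal inference sketch with Hugging Face transformers follows; the checkpoint name, prompt template, and token names (<nexa_N>, <nexa_end>) are assumptions based on the public Octopus v2 release and may differ from what the video shows.

```python
# Sketch of functional-token inference on a small model; repo name and prompt
# template are assumptions based on the public Octopus v2 release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NexaAIDev/Octopus-v2"  # assumed Hugging Face checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

query = "Take a selfie with the front camera"
prompt = (
    "Below is the query from the users, please call the correct function "
    f"and generate the parameters to call the function.\n\nQuery: {query}\n\nResponse:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# The completion should be a single functional token plus arguments, e.g.
# "<nexa_4>(camera='front')<nexa_end>", which the app maps to a device API call.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False))
```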

On-Device LLMs with Functional Token Fine-Tuning - Octopus v2 Implementation

Discover AI