LLMOps: Quantization models & Inference ONNX Generative Runtime #datascience #machinelearning
Description:
Explore the world of LLMOps in this 30-minute video on model quantization and inference with the ONNX Generative Runtime. Learn how to install the ONNX runtime with GPU support and run inference with a generative model, specifically Phi-3-mini-4k quantized to int4. Then dive into the process of converting the original Phi-3-mini-128k into an int4 quantized version using the ONNX runtime. An accompanying notebook on GitHub lets you follow along and gain hands-on experience in this area of data science and machine learning.
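For orientation, here is a minimal sketch of the inference step, assuming the onnxruntime-genai Python package installed with GPU support (pip install onnxruntime-genai-cuda) and a local folder holding the int4 ONNX export of Phi-3-mini-4k. The model path and prompt are placeholders, and the generation-loop API has shifted between releases of the package, so treat this as illustrative; the video's notebook remains the authoritative reference.

    import onnxruntime_genai as og

    # Folder with the int4 ONNX export of Phi-3-mini-4k (hypothetical local path)
    model = og.Model("./phi3-mini-4k-instruct-onnx/cuda-int4")
    tokenizer = og.Tokenizer(model)
    stream = tokenizer.create_stream()

    # Wrap the user question in the Phi-3 chat template
    prompt = "<|user|>\nWhat is int4 quantization?<|end|>\n<|assistant|>"
    input_tokens = tokenizer.encode(prompt)

    params = og.GeneratorParams(model)
    params.set_search_options(max_length=256)
    params.input_ids = input_tokens  # older-release API; newer releases use append_tokens

    # Token-by-token generation loop, streaming decoded text as it arrives
    generator = og.Generator(model, params)
    while not generator.is_done():
        generator.compute_logits()
        generator.generate_next_token()
        print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)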
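The conversion itself is typically done with the model builder that ships inside onnxruntime-genai, which downloads a Hugging Face checkpoint and writes out a quantized ONNX model. A plausible invocation for the 128k variant (the output directory here is an assumption) would be:

    python -m onnxruntime_genai.models.builder \
        -m microsoft/Phi-3-mini-128k-instruct \
        -o ./phi3-mini-128k-int4 \
        -p int4 \
        -e cuda

Here -p selects the target precision (int4) and -e the execution provider (cuda for GPU); the exact flags may differ between versions of the builder.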

The Machine Learning Engineer