Description:
Explore a comprehensive 1-hour 27-minute live Python demonstration focused on Generative Artificial Intelligence. Dive into the world of hosting Large Language Models (LLMs) and Large Multimodal Models (LMMs) on a personal laptop, specifically a MacBook Pro M3 Max with 48 GB RAM. Witness real-time image processing and prompt responses, with the server achieving speeds of 130+ tokens/second. Discover the HuggingChat platform, line-by-line coding with Hugging Face's BLIP and a Gradio UI, and a LangChain agent executor demo evaluated using LangSmith. Learn about cost determination through OpenAI's Platform and Colab. Explore the setup of Open WebUI with Ollama, Docker, and Portainer for local access to open-source generative AI models. Compare the output quality and speed of multiple LLMs and multimodal models, including moondream 1.8b, llava 7b, and llava 13b. Examine code-generation tasks using deepseek-coder-v2:16b and Codestral 22B, with a focus on Hugging Face's pipeline for high-quality photorealistic image generation. Witness a live demonstration of cross-device functionality as an iPhone 15 Pro interacts with the laptop server to generate accurate responses to image prompts.
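The Ollama-based local serving described above exposes a simple HTTP API on the laptop, which is what lets another device (or another script) query the hosted models. A minimal Python sketch of that interaction, assuming Ollama's default endpoint at `http://localhost:11434`; the model name and prompt are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model, prompt):
    # Request body for Ollama's /api/generate; stream=False returns one JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model, prompt):
    # POST the prompt to the local Ollama server and return the generated text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server with the model already pulled):
# print(ask("llava:7b", "Describe the image in one sentence."))
```

Because the server listens on the local network, the same request can be issued from a phone's browser or app pointed at the laptop's IP address, which is what the iPhone demo in the video relies on.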
Live Python Demos - Generative Artificial Intelligence