Chapters:
1. Introduction
2. AI's Rapid Evolution
3. AI Infrastructure
4. Trends
5. Power Usage
6. Silicon Diversity
7. Training
8. Fine-Tuning
9. RAG
10. David McIntyre
11. Rob for Fabric
12. AI-Optimized Ethernet Example
13. Wire It Differently
14. Cost per Bit
15. Summary
16. Q&A
17. Endpoint Inference
18. Edge Inference
19. Question of the Day
20. Conclusion
Description:
Join a detailed webinar featuring experts from NVIDIA, Intel, and Dell who explore the often-overlooked technical and infrastructure costs of implementing generative AI technologies. Delve into crucial enterprise considerations including scalability challenges, computational demands of Large Language Model inferencing, fabric requirements, and sustainability impacts from increased power consumption and cooling needs. Learn practical strategies for cost optimization by comparing on-premises versus cloud deployments, and discover how to leverage pre-trained models for specific market domains. Through comprehensive discussions on AI infrastructure trends, silicon diversity, training methodologies, and both endpoint and edge inference, gain valuable insights into managing and reducing the environmental and financial impact of AI implementations.

Addressing the Hidden Infrastructure and Sustainability Costs of AI in Enterprise

SNIA Video