AI for Good - Detecting Harmful Content at Scale

MLOps.community

Chapters:
1. Matar's preferred coffee
2. Takeaways
3. The talk that stood out
4. Online hate speech challenges
5. Evaluate harmful media API
6. Content moderation: AI models
7. Optimizing speed and accuracy
8. Cultural reference AI training
9. Functional tests
10. Continuous adaptation of AI
11. AI detection concerns
12. Fine-tuned vs off-the-shelf
13. Monitoring transformer model hallucinations
14. Auditing process ensures accuracy
15. Testing strategies for ML
16. Modeling hate speech deployment
17. Improving production code quality
18. Finding balance in moderation
19. Model's expertise: cultural sensitivity
20. Wrap up

Description:
Explore the challenges and solutions of detecting harmful content at scale in this 51-minute podcast episode featuring Matar Haller, VP of Data & AI at ActiveFence. Dive into the complexities of online platform abuse, including brand and legal risks, user experience impacts, and the blurred line between online and offline harm. Learn about AI-driven content moderation, optimizing speed and accuracy, cultural sensitivity in AI training, and continuous adaptation to evolving threats. Discover strategies for testing and deploying machine learning models, monitoring hallucinations in transformer models, and balancing moderation efforts. Gain insights into improving production code quality and addressing AI detection concerns in the ever-changing landscape of online content moderation.
