Chapters:
1. Introduction
2. Problem
3. Pipeline
4. Metrics
5. Sampling
6. Crowdsourcing
7. Aggregation
8. Crowdkit
9. N squared
10. Questions
11. What about crowdsourcing
12. Accuracy vs data set size
13. Expert manual labeling
14. Conclusion
15. Resources
16. Q&A
17. Q&A Questions
18. Reshare
19. Leakage
20. Andreas
21. Sampling after sampling
22. Sampling algorithm
23. Pairwise judgments
24. Chat window
25. Upcoming events
Description:
Explore data labeling techniques for search relevance evaluation in this 54-minute conference talk by Evgeniya Sukhodolskaya, data evangelist and senior ML manager at Toloka. Dive into the ranking problem, commonly used ranking quality metrics, and human-in-the-loop approaches for obtaining relevance judgments at scale. Learn best practices and potential pitfalls in building evaluation pipelines for information retrieval. Gain insights on topics such as sampling, crowdsourcing, aggregation, and the Crowdkit N squared method. Participate in a Q&A session covering accuracy vs. data set size, expert manual labeling, and sampling algorithms. Access valuable resources and discover upcoming events in the field of search relevance evaluation.
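The ranking quality metrics mentioned above typically include NDCG (normalized discounted cumulative gain), which scores a ranked list of graded relevance judgments against the ideal ordering. As a rough illustration (not taken from the talk itself), a minimal NDCG computation looks like:

```python
from math import log2

def dcg(relevances):
    # Discounted cumulative gain: each graded relevance score is
    # discounted by the log of its rank position (1-indexed).
    return sum(rel / log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    # Normalize by the DCG of the ideal (descending) ordering,
    # so a perfectly sorted ranking scores 1.0.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments for the top 5 results of one query (hypothetical data).
ranked = [3, 2, 3, 0, 1]
print(round(ndcg(ranked), 4))
```

Here the third result outscores the second, so NDCG falls slightly below 1.0; the judgments themselves are what the crowdsourcing pipeline in the talk is meant to produce.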
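The aggregation step mentioned in the description combines redundant judgments from several crowd annotators into one label per task; Crowd-Kit provides ready-made models for this. As a hedged sketch of the simplest such approach, plain majority voting written in pure Python (not Crowd-Kit's actual API), over hypothetical query-document tasks:

```python
from collections import Counter, defaultdict

def majority_vote(judgments):
    # Aggregate (task, worker, label) tuples into one label per task
    # by taking the most frequent label among that task's judgments.
    by_task = defaultdict(list)
    for task, _worker, label in judgments:
        by_task[task].append(label)
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in by_task.items()}

# Three workers judge the same (query, document) pair; two agree.
votes = [
    ("q1-doc7", "w1", "relevant"),
    ("q1-doc7", "w2", "relevant"),
    ("q1-doc7", "w3", "irrelevant"),
]
print(majority_vote(votes))  # {'q1-doc7': 'relevant'}
```

Majority voting ignores per-worker skill; models such as Dawid-Skene, which the talk's Crowdkit section covers in more depth, weight workers by estimated reliability instead.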

Data Labeling for Search Relevance Evaluation

OpenSource Connections