1. Introduction
2. Feature Vectors in the Iris Data Set
3. Good Pet Data Set
4. Possible Decision Trees
5. Interpreting Models
6. Building a Decision Tree in MLlib
7. Evaluating a Decision Tree
8. Better Than Random Guessing?
9. Decisions Should Make Lower Impurity Subsets
10. Tuning Hyperparameters
11. How to Create a Crowd?
12. Trees See Subsets of Examples
13. Or Subsets of Features
14. Diversity of Opinion
15. Random Decision Forests
Description:
Explore the world of Random Decision Forests on Apache Spark in this 53-minute conference talk from GOTO Amsterdam 2015. Dive into machine learning concepts as Sean Owen, Director of Data Science at Cloudera, guides you through feature vectors, decision trees, and model interpretation. Learn how to build and evaluate decision trees using MLlib, understand the importance of impurity reduction in decision-making, and discover techniques for tuning hyperparameters. Delve into advanced topics such as creating diverse opinions through subsets of examples and features, culminating in an exploration of Random Decision Forests. Gain practical insights into Apache Spark's capabilities for data scientists, including its distributed nature, REPL environment, and Python APIs alongside native Scala support.

A Taste of Random Decision Forests on Apache Spark

GOTO Conferences
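
The talk builds a decision tree in MLlib, checks whether it beats random guessing, and then grows it into a forest by letting each tree see only a subset of the examples and features. As a rough companion, here is a minimal sketch of those steps against Spark's RDD-based MLlib API of that era; it is not the speaker's code, and the data path, class count, and hyperparameter values (numTrees, maxDepth, impurity) are illustrative assumptions.

```scala
// A minimal sketch, assuming Spark's RDD-based MLlib API (Spark 1.x era).
// The file path and hyperparameter values below are illustrative only.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.tree.{DecisionTree, RandomForest}
import org.apache.spark.mllib.util.MLUtils

object RandomDecisionForestSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdf-sketch").setMaster("local[*]"))

    // Hypothetical LIBSVM file of labeled feature vectors (e.g. the Iris data set).
    val data = MLUtils.loadLibSVMFile(sc, "data/iris.libsvm")
    val Array(train, test) = data.randomSplit(Array(0.9, 0.1), seed = 42L)

    // A single decision tree: each split picks the feature and threshold that
    // most reduce the impurity (here Gini) of the resulting subsets.
    val tree = DecisionTree.trainClassifier(
      train, numClasses = 3, categoricalFeaturesInfo = Map[Int, Int](),
      impurity = "gini", maxDepth = 4, maxBins = 32)

    // A "crowd" of trees: each tree trains on a random sample of the examples
    // and considers a random subset of features at each split, so the trees
    // disagree in useful ways and their combined vote tends to beat any one tree.
    val forest = RandomForest.trainClassifier(
      train, numClasses = 3, categoricalFeaturesInfo = Map[Int, Int](),
      numTrees = 20, featureSubsetStrategy = "auto",
      impurity = "gini", maxDepth = 4, maxBins = 32, seed = 42)

    // Accuracy on held-out data: is the model better than random guessing?
    def accuracy(predict: Vector => Double): Double =
      test.filter(lp => predict(lp.features) == lp.label).count().toDouble / test.count()

    println(s"Decision tree accuracy: ${accuracy(v => tree.predict(v))}")
    println(s"Random forest accuracy: ${accuracy(v => forest.predict(v))}")

    sc.stop()
  }
}
```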