1. Intro
2. Cox Automotive
3. KPMG Lighthouse
4. What is this talk about?
5. What do we mean by 'Data Pipeline'?
6. Who is in a data team?
7. What do we need to think about when building a pipeline?
8. What about the business logic?
9. What about deployments?
10. What are the main challenges?
11. How were we dealing with the main challenges?
12. Could we make better use of the skills in the team?
13. What tools and frameworks would we need to provide?
14. How would we design a Data Engineering framework?
15. How would we like to manage deployments?
16. Simpler data ingestion
17. Simpler business logic development
18. Simpler environment management
19. Simpler deployments
Description:
Explore best practices for building and deploying data pipelines in Apache Spark in this 41-minute conference talk by Vicky Avison from Databricks. Learn about key considerations such as performance, idempotency, reproducibility, and tackling the small file problem when constructing data pipelines. Discover a common Data Engineering toolkit that separates production concerns from business logic, enabling non-Data-Engineers to define pipelines efficiently. Examine Waimak, an open-source library for Apache Spark, which streamlines the transition from prototype to production. Gain insights into new approaches and best practices for deploying data pipelines, an often overlooked aspect of Data Engineering. Understand the composition of data teams, challenges in pipeline development, and strategies for leveraging team skills effectively. Explore tools, frameworks, and design principles for creating a robust Data Engineering framework, along with simplified methods for data ingestion, business logic development, environment management, and deployments.
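The "small file problem" mentioned above refers to distributed jobs writing many tiny output files, which slows down later reads. Spark pipelines typically address it by coalescing or repartitioning before writing. As a rough standalone illustration of the compaction idea (plain Python, not the talk's code or Waimak's API; file names and sizes here are invented for the sketch):

```python
import os
import tempfile

def compact_small_files(src_dir, dst_dir, target_bytes):
    """Merge many small files into fewer files of roughly target_bytes each.
    This mirrors what a Spark job does when it coalesces partitions before
    writing, to avoid producing thousands of tiny output files."""
    os.makedirs(dst_dir, exist_ok=True)
    batch, batch_size, out_idx = [], 0, 0
    for name in sorted(os.listdir(src_dir)):
        path = os.path.join(src_dir, name)
        size = os.path.getsize(path)
        # Flush the current batch once adding another file would exceed the target.
        if batch and batch_size + size > target_bytes:
            _write_batch(batch, dst_dir, out_idx)
            out_idx += 1
            batch, batch_size = [], 0
        batch.append(path)
        batch_size += size
    if batch:
        _write_batch(batch, dst_dir, out_idx)
        out_idx += 1
    return out_idx  # number of compacted files written

def _write_batch(paths, dst_dir, idx):
    # Concatenate a batch of small files into one larger output file.
    with open(os.path.join(dst_dir, f"part-{idx:05d}.txt"), "wb") as out:
        for p in paths:
            with open(p, "rb") as f:
                out.write(f.read())

# Demo: 100 files of 100 bytes each, compacted toward ~2 KB outputs.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(src, f"small-{i:03d}.txt"), "wb") as f:
        f.write(b"x" * 100)
n = compact_small_files(src, dst, target_bytes=2048)
print(n)  # far fewer output files than the original 100
```

In Spark itself the equivalent lever is choosing the number of output partitions (e.g. with `coalesce` or `repartition`) rather than merging files by hand; the sketch just shows why fewer, larger files are the goal.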

Best Practices for Building and Deploying Data Pipelines in Apache Spark

Databricks