Chapters:
1. Introduction
2. System Architecture
3. Creating resource groups on Azure
4. Setting up the medallion architecture storage account
5. Setting up Azure Data Factory
6. Azure Key Vault setup for secrets
7. Azure database with automatic data population
8. Azure Data Factory pipeline orchestration
9. Setting up Databricks
10. Azure Databricks Secret Scope and Key Vault
11. Verifying Databricks - Key Vault - Secret Scope Integration
12. Azure Data Factory - Databricks Integration
13. DBT Setup
14. DBT Configuration with Azure Databricks
15. DBT Snapshots with Azure Databricks and ADLS Gen2
16. DBT Data Marts with Azure Databricks and ADLS Gen2
17. DBT Documentation
18. Outro
Description:
Embark on a comprehensive end-to-end data engineering project in this nearly two-hour video tutorial. Learn to build robust data pipelines using Apache Spark, Azure Databricks, and Data Build Tool (DBT) with Azure as the cloud provider. Follow along as the instructor guides you through data ingestion into a lakehouse, data integration with Azure Data Factory, and data transformation using Databricks and DBT. Gain hands-on experience setting up resource groups, implementing a medallion architecture, configuring Azure Key Vault for secure secret management, and orchestrating data pipelines. Explore the integration of Azure Databricks with Key Vault and Data Factory, and dive into DBT setup, configuration, and advanced features like snapshots and data marts. By the end of this tutorial, you'll have a solid understanding of modern data engineering practices and be equipped to build scalable, efficient data pipelines in the cloud.
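
As a rough illustration of the "DBT Configuration with Azure Databricks" step (not taken from the video), a dbt-databricks connection is typically declared in profiles.yml. Every project name, catalog, schema, host, and HTTP path below is a placeholder, and the access token is read from an environment variable rather than hard-coded; the course's Key Vault and Databricks secret scope setup is one way to manage such secrets.

    # ~/.dbt/profiles.yml -- minimal sketch for the dbt-databricks adapter; all names are placeholders
    medallion_project:
      target: dev
      outputs:
        dev:
          type: databricks
          catalog: hive_metastore                            # or a Unity Catalog name, if enabled
          schema: gold                                       # hypothetical target schema in the medallion layout
          host: adb-1234567890123456.7.azuredatabricks.net   # placeholder workspace URL
          http_path: /sql/1.0/warehouses/abc123def456        # placeholder SQL warehouse path
          token: "{{ env_var('DATABRICKS_TOKEN') }}"         # keep the token out of source control

Running dbt debug against a profile like this verifies the connection before building models, snapshots, or data marts.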

Building Robust Data Pipelines for Modern Data Engineering - End-to-End Project

CodeWithYu