1. Introduction
2. How big is a big database
3. What will we do
4. Components
5. Connection Pool
6. PgBouncer
7. Configuration
8. Data Moving
9. Data Replication
10. Processing Sample
11. Optimizing
12. Benchmark
13. Business Intelligence Queries
14. Getting Data from Other Databases
15. Data in Memory
Description:
Explore techniques for managing and processing massive datasets with Python and PostgreSQL in this 57-minute EuroPython 2013 conference talk. Delve into the components of large-scale data warehousing, including connection pooling, data replication, and optimization strategies. Learn how to configure and use tools such as PgBouncer for efficient connection handling. Discover methods for moving data, processing samples, optimizing performance, and running benchmarks. Gain insights into executing business intelligence queries and retrieving data from other database sources. Understand how keeping data in memory can speed up processing in huge data warehouse environments.
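As a rough illustration of the connection pooling and streaming ideas the description mentions, here is a minimal Python sketch (not taken from the talk) using psycopg2. It assumes a PgBouncer instance listening on localhost:6432 in front of the database; the host, credentials, "warehouse" database, and "events" table are all hypothetical.

# A minimal sketch, not from the talk: pooled connections routed
# through PgBouncer, plus a server-side (named) cursor so a huge
# result set is streamed in batches instead of loaded into memory.
import psycopg2
from psycopg2 import pool

conn_pool = pool.SimpleConnectionPool(
    minconn=1,
    maxconn=5,          # keep the client pool small; PgBouncer
                        # multiplexes the real server connections
    host="localhost",
    port=6432,          # PgBouncer's default listen port
    dbname="warehouse", # hypothetical database name
    user="etl",
    password="secret",
)

def process(row):
    """Placeholder for real per-row work."""
    pass

conn = conn_pool.getconn()
try:
    # A named cursor stays on the server: rows arrive in
    # itersize-sized batches, so client memory use stays flat
    # regardless of the table's size.
    with conn.cursor(name="big_scan") as cur:
        cur.itersize = 10_000
        cur.execute("SELECT id, payload FROM events")  # hypothetical table
        for row in cur:
            process(row)
    conn.commit()
finally:
    conn_pool.putconn(conn)

Note that named cursors must live inside a single transaction, which also keeps this pattern compatible with PgBouncer's transaction pooling mode.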

Python and PostgreSQL for Huge Data Warehouses

EuroPython Conference