DATA ENGINEERING GUIDE

Data Warehousing & Modeling

From dimensional modeling fundamentals to modern cloud warehouses. Learn to design schemas that power fast analytics and reliable business intelligence.

9 Sections
45 min read
Beginner to Advanced
Contents
What is a Data Warehouse?
Dimensional Modeling
Star vs Snowflake
Warehouse Layers
Cloud Warehouses
Optimization
Slowly Changing Dims
Lakes vs Lakehouses
Key Takeaways
01

What is a Data Warehouse?

Understanding the central repository that powers enterprise analytics and business intelligence.

The Central Analytical Repository

A data warehouse is a centralized repository specifically designed and optimized for analytical queries. Unlike operational databases that handle day-to-day transactions (placing orders, updating user profiles, processing payments), a data warehouse consolidates data from multiple sources into a single, consistent store optimized for reading, aggregating, and analyzing large volumes of historical data.

The concept was pioneered in the late 1980s and early 1990s by Bill Inmon and Ralph Kimball, who recognized that operational systems were poorly suited for analytical workloads. Operational databases are designed for speed and concurrency on small, frequent transactions. Analytical queries, on the other hand, scan millions or billions of rows, compute aggregations, and join many tables together. Running these heavy queries against a production database would grind operations to a halt.

A data warehouse solves this by creating a separate environment where data is restructured, cleaned, and organized for fast analytical access. It stores historical data spanning months or years, enabling trend analysis, forecasting, and data-driven decision-making across the entire organization.

OLTP vs OLAP

The fundamental distinction in database systems is between OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing). Understanding this difference is the foundation of data warehousing.

OLTP (Operational)

Optimized for transactions (INSERT, UPDATE, DELETE)

Many small queries per second

Normalized schema (3NF)

Current data only

Examples: MySQL, PostgreSQL, Oracle

Use case: Running the business (orders, users)

OLAP (Analytical)

Optimized for aggregations (SUM, AVG, GROUP BY)

Few large, complex queries

Denormalized schema (Star/Snowflake)

Historical data (years of history)

Examples: Snowflake, BigQuery, Redshift

Use case: Analyzing the business (reports, dashboards)
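The contrast is easy to see in code. The sketch below uses Python's built-in sqlite3 purely as a stand-in (real OLTP and OLAP workloads run on separate engines), with an invented orders table: an OLTP-style point update touches one row via its primary key, while an OLAP-style aggregation scans the whole table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "alice", 20.0), (2, "bob", 35.0), (3, "alice", 15.0)])

# OLTP-style: touch exactly one row via its primary key
cur.execute("UPDATE orders SET amount = 25.0 WHERE order_id = 1")

# OLAP-style: scan every row and aggregate
cur.execute("SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer")
print(cur.fetchall())  # [('alice', 40.0), ('bob', 35.0)]
```

In production these two workloads are separated precisely so that the heavy scans can never slow down the point updates.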

Inmon vs Kimball

Bill Inmon (the "Father of Data Warehousing") advocates a top-down approach: build a centralized, enterprise-wide data warehouse in third normal form (3NF), then create departmental data marts from it. Ralph Kimball advocates a bottom-up approach: build dimensional data marts (star schemas) first, then integrate them into a conformed warehouse using shared dimensions. In practice, most modern organizations adopt a hybrid approach, combining elements of both methodologies. Kimball's dimensional modeling is the dominant paradigm for the presentation layer, while Inmon's philosophy influences the integration layer.

02

Dimensional Modeling

The art and science of organizing data into facts and dimensions for fast, intuitive analytics.

Facts and Dimensions

Dimensional modeling is a data modeling technique optimized for querying and reporting. Developed by Ralph Kimball, it organizes data into two fundamental types: fact tables that store measurable business events, and dimension tables that provide the descriptive context around those events.

Think of it this way: fact tables answer "how much?" or "how many?", while dimension tables answer "who?", "what?", "where?", and "when?". Together, they form the backbone of every data warehouse, enabling analysts to slice, dice, drill down, and roll up data across any combination of business dimensions.

The beauty of dimensional modeling is its simplicity. Unlike highly normalized schemas that require dozens of joins, a dimensional model typically requires only one join per dimension. This predictable structure makes queries fast, intuitive for business users, and easy to maintain over time.

Core Building Blocks

Fact Tables

Contain measurable events -- sales, clicks, transactions. Numeric columns (amount, quantity, revenue). Foreign keys to dimension tables. Typically very large with millions or billions of rows. Each row represents a single business event at a specific grain.

Dimension Tables

Contain descriptive context -- who, what, where, when. Text columns (name, category, region, status). Relatively small with thousands or millions of rows. Provide the labels, hierarchies, and attributes that analysts use to filter and group facts.

A Concrete Example: Retail Sales

Consider a retail company that wants to analyze its sales. Here is how we would model the data dimensionally:

1

Fact: Sales

order_id, customer_key, product_key, date_key, store_key, quantity, amount, discount

2

Dim: Customer

customer_key, name, email, segment, city, country, join_date

3

Dim: Product

product_key, name, category, subcategory, brand, price

4

Dim: Date

date_key, date, day_name, month, quarter, year, is_weekend, is_holiday

5

Dim: Store

store_key, store_name, city, state, country, region, manager
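A minimal sketch of this model, using sqlite3 as a stand-in warehouse (the tables are trimmed to a few columns and the rows are invented for illustration), shows the characteristic query shape: one join per dimension, then group by any combination of dimension attributes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimensions: small tables of descriptive context
cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, quarter TEXT)")

# Fact: one row per order line item (the grain), with a FK to each dimension
cur.execute("CREATE TABLE fact_sales (product_key INTEGER, date_key INTEGER, quantity INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "Laptop", "Electronics"), (2, "Desk", "Furniture")])
cur.executemany("INSERT INTO dim_date VALUES (?,?,?)",
                [(20250101, 2025, "Q1"), (20250401, 2025, "Q2")])
cur.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                [(1, 20250101, 2, 2000.0), (2, 20250101, 1, 300.0), (1, 20250401, 1, 1000.0)])

# Slice and dice: one join per dimension, group by any dimension attributes
cur.execute("""
    SELECT p.category, d.quarter, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_date    d ON f.date_key    = d.date_key
    GROUP BY p.category, d.quarter
    ORDER BY p.category, d.quarter""")
print(cur.fetchall())
# [('Electronics', 'Q1', 2000.0), ('Electronics', 'Q2', 1000.0), ('Furniture', 'Q1', 300.0)]
```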

Define the Grain First

The grain of a fact table is the most critical decision in dimensional modeling. It defines what one row represents. For a sales fact, the grain might be "one line item per order per product." Always define the grain FIRST. If you get the grain wrong, everything built on top of it will be wrong. A finer grain (more detail) is almost always better -- you can always aggregate up, but you cannot drill down below your grain.
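A quick illustration of why the finer grain is safer (sqlite3 stand-in, invented rows): line-item rows roll up to order totals with a simple GROUP BY, but had we stored only order totals, the per-product detail could never be recovered.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Grain: one row per order line item per product
cur.execute("CREATE TABLE fact_sales (order_id INTEGER, product TEXT, amount REAL)")
cur.executemany("INSERT INTO fact_sales VALUES (?,?,?)",
                [(1, "Laptop", 1000.0), (1, "Mouse", 25.0), (2, "Desk", 300.0)])

# Aggregating UP from line items to orders is always possible...
cur.execute("SELECT order_id, SUM(amount) FROM fact_sales GROUP BY order_id ORDER BY order_id")
print(cur.fetchall())  # [(1, 1025.0), (2, 300.0)]
# ...but drilling DOWN below the stored grain is impossible:
# an order-level table could never be split back into products.
```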

03

Star vs Snowflake Schema

Two approaches to organizing dimensions -- and why one is almost always the better choice.

Schema Comparison

The two primary schema patterns for dimensional models are the Star Schema and the Snowflake Schema. They differ in how dimension tables are structured -- flat and denormalized versus normalized into sub-dimensions.

Star Schema

Fact table at center, dimensions directly connected

Dimensions are denormalized (flat tables)

Fewer joins = faster queries

More storage (duplicated data in dimensions)

Simpler to understand and query

Best for: Industry standard for BI/analytics

Snowflake Schema

Fact table at center, dimensions normalized into sub-dimensions

Less storage (no duplication)

More joins = slightly slower queries

More complex to query and maintain

Better for very large dimension tables

Best for: When storage cost matters

Star Schema Visual

In a star schema, the fact table sits at the center with dimension tables radiating outward like the points of a star. Each dimension connects directly to the fact table with a single join.

fact_sales: order_id, customer_key, product_key, date_key, store_key, quantity, amount, discount

dim_customer: customer_key (PK), name, email, city, phone, segment
dim_product: product_key (PK), name, category, price, brand
dim_date: date_key (PK), full_date, day_of_week, month, quarter, year
dim_store: store_key (PK), store_name, city, region, type

The fact table carries one foreign key per dimension: customer_key, product_key, date_key, and store_key.

Star Schema Wins in Practice

In practice, most organizations use a Star Schema. The performance benefits of fewer joins almost always outweigh the storage savings of snowflaking. Cloud warehouses like Snowflake and BigQuery have cheap storage but expensive compute -- making Star Schema the clear winner. Snowflaking adds query complexity, confuses business users, and slows down BI tools. Reserve snowflaking only for extremely large dimensions (millions of rows) where the storage savings are significant.

04

Data Warehouse Layers

Organizing your warehouse into staging, integration, and presentation layers for reliability and clarity.

The Medallion Architecture

A well-designed data warehouse is organized into distinct layers, each serving a specific purpose. This layered approach -- often called the Medallion Architecture (Bronze, Silver, Gold) -- ensures data quality improves as it moves through the pipeline, and that each layer serves the right audience.

Staging

Bronze -- Raw Data Landing Zone

Temporary storage. Exact copy from source systems. Truncate-and-reload or append patterns. No transformations applied. Preserves the raw truth for auditability and reprocessing.

Integration

Silver -- Cleaned & Conformed

Data is deduplicated, validated, and standardized. Business keys are applied. Data from multiple sources is integrated. History is tracked using slowly changing dimensions. This is the "single source of truth."

Presentation

Gold -- Business-Ready

Star schemas, aggregated facts, and KPI tables live here. This is what analysts and dashboards query. Optimized for speed and ease of use. Business logic is applied and documented.
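The three layers can be sketched as a toy sqlite3 pipeline (table names and data invented; real pipelines use ETL/ELT tooling rather than CREATE TABLE AS): duplicated, messy rows land in Bronze, Silver deduplicates and standardizes, Gold aggregates for dashboards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Bronze: raw landing zone, exact copy of source (duplicate load and messy casing included)
cur.execute("CREATE TABLE bronze_orders (order_id INTEGER, city TEXT, amount REAL)")
cur.executemany("INSERT INTO bronze_orders VALUES (?,?,?)",
                [(1, "karachi", 100.0),
                 (1, "karachi", 100.0),   # accidental duplicate load
                 (2, "LAHORE", 50.0)])

# Silver: deduplicated and standardized -- the "single source of truth"
cur.execute("""CREATE TABLE silver_orders AS
    SELECT DISTINCT order_id, UPPER(city) AS city, amount FROM bronze_orders""")

# Gold: business-ready aggregate that dashboards query directly
cur.execute("""CREATE TABLE gold_sales_by_city AS
    SELECT city, SUM(amount) AS total FROM silver_orders GROUP BY city""")

print(cur.execute("SELECT city, total FROM gold_sales_by_city ORDER BY city").fetchall())
# [('KARACHI', 100.0), ('LAHORE', 50.0)]
```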


The Data Vault Approach

Data Vault is an alternative modeling methodology designed for the integration layer. Created by Dan Linstedt, it provides maximum flexibility, auditability, and scalability for enterprise data warehouses. It separates structure from context using three core components:

1

Hubs

Hubs contain business keys -- the core identifiers for business entities. A customer hub holds customer IDs, a product hub holds product codes. Hubs are immutable: once a business key is inserted, it never changes. They represent the core "things" in your business.

2

Links

Links capture the relationships between hubs. A customer-order link connects customers to their orders. An order-product link connects orders to products. Links are also immutable and model many-to-many relationships naturally. They represent the "transactions" or "interactions" in your business.

3

Satellites

Satellites hold the descriptive attributes and their history. They attach to hubs or links and track changes over time. A customer satellite might store name, email, and address with timestamps. When an attribute changes, a new satellite row is added -- full history is preserved automatically.
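A minimal hub-plus-satellite sketch (sqlite3 stand-in; a real Data Vault would also include link tables, hashed keys, and record-source columns, all invented here for illustration): the hub row is written once, and each attribute change simply appends a satellite row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hub: immutable business keys -- written once, never updated
cur.execute("CREATE TABLE hub_customer (customer_hk INTEGER PRIMARY KEY, customer_id TEXT, load_date TEXT)")

# Satellite: descriptive attributes with full history, attached to the hub
cur.execute("CREATE TABLE sat_customer (customer_hk INTEGER, name TEXT, city TEXT, load_date TEXT)")

cur.execute("INSERT INTO hub_customer VALUES (1, 'C-42', '2023-01-15')")

# An attribute change means a NEW satellite row; nothing is updated in place
cur.executemany("INSERT INTO sat_customer VALUES (?,?,?,?)",
                [(1, "Ali Ahmed", "Karachi", "2023-01-15"),
                 (1, "Ali Ahmed", "Lahore", "2025-03-01")])

# Latest state: the most recent satellite row for the hub key
cur.execute("""SELECT city FROM sat_customer
    WHERE customer_hk = 1 ORDER BY load_date DESC LIMIT 1""")
print(cur.fetchone())  # ('Lahore',)
```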

Choosing the Right Approach

Star Schema is best for presentation layers -- it is fast, intuitive, and works perfectly with BI tools. Data Vault is best for integration layers when you need full auditability, flexibility, and the ability to accommodate source system changes without remodeling. Many modern warehouses use Data Vault in the Silver layer and Star Schema in the Gold layer, combining the strengths of both approaches.

05

Modern Cloud Warehouses

The platforms powering modern analytics -- from serverless to lakehouse architectures.

Cloud Data Warehouse Ecosystem

The cloud has transformed data warehousing. Gone are the days of provisioning hardware, managing servers, and capacity planning months in advance. Modern cloud warehouses offer elastic scaling, pay-per-use pricing, and near-zero maintenance. Here are the leading platforms:

Snowflake

cloud

Separates compute and storage. Auto-scaling virtual warehouses. Near-zero maintenance. Leader in cloud data warehousing.

Google BigQuery

cloud

Serverless analytics. Pay per query scanned. Excellent for ad-hoc analysis. Scales to petabytes effortlessly.

Amazon Redshift

cloud

AWS-native data warehouse. Columnar storage. Provisioned or serverless modes. Excellent AWS ecosystem integration.

Databricks SQL

cloud

Lakehouse approach built on Delta Lake. Unified analytics and ML workflows. SQL warehouse for BI workloads.

Azure Synapse

cloud

Microsoft's unified analytics service. Deep Power BI integration. Combines SQL pools with Apache Spark.

ClickHouse

storage

Open-source columnar database. Blazing fast aggregations. Self-hosted or managed cloud. Great for real-time analytics.

Key Cloud Warehouse Concepts

Understanding these architectural concepts is essential for working with any modern cloud data warehouse:

Separation of Compute & Storage

Scale each independently. Pay for what you use. Store petabytes cheaply in object storage and spin up compute only when running queries. No over-provisioning. This is the single most important architectural innovation in modern warehousing.

Columnar Storage

Store data by column instead of by row. Only read the columns your query needs. Achieve 5-10x compression because similar values compress well. Dramatically faster for aggregation queries that touch a few columns across millions of rows.

MPP Architecture

Massively Parallel Processing distributes query execution across many compute nodes simultaneously. Each node processes a slice of the data in parallel. Enables horizontal scaling -- add more nodes to handle bigger data or more concurrent queries.
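Columnar layouts compress well because runs of similar values sit next to each other. This toy Python sketch shows run-length encoding on an invented status column, including an aggregation computed directly on the encoded form without decompressing:

```python
from itertools import groupby

# A column of similar values, stored contiguously as a columnar engine would
status_column = ["shipped"] * 5 + ["pending"] * 3 + ["shipped"] * 2

# Run-length encoding: store (value, run_length) pairs instead of raw values
encoded = [(value, len(list(run))) for value, run in groupby(status_column)]
print(encoded)  # [('shipped', 5), ('pending', 3), ('shipped', 2)]

# Aggregate directly on the encoded form -- no decompression needed
shipped_count = sum(n for value, n in encoded if value == "shipped")
print(shipped_count)  # 7
```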

06

Data Partitioning & Optimization

Techniques to make your warehouse queries run orders of magnitude faster.

Optimization Techniques

Even with the power of modern cloud warehouses, poorly optimized tables and queries can be painfully slow and expensive. These techniques can reduce query times from minutes to seconds and cut your compute costs dramatically:

1

Partitioning

Split large tables by date, region, or category. When a query filters on the partition column, the engine skips irrelevant partitions entirely. Example: PARTITION BY (order_date). A query for January data only scans January's partition, not the entire table.

2

Clustering / Sort Keys

Order data within partitions by frequently filtered or joined columns. Co-locates related rows on disk. Enables faster range scans and more efficient pruning. In Snowflake this is called "clustering keys"; in Redshift, "sort keys."

3

Materialized Views

Pre-computed query results stored as tables. Refresh periodically or incrementally. Trade storage space for query speed. Perfect for dashboards with known, repeated queries. Some platforms auto-route queries to materialized views.

4

Columnar Compression

Dictionary encoding, run-length encoding, delta encoding, and other techniques achieve 5-10x compression ratios. Compressed data means less I/O, faster scans, and lower storage costs. Modern warehouses handle this automatically.

5

Query Caching

Cache frequent query results in memory. Identical queries return instantly without re-scanning data. Most cloud warehouses cache results for 24 hours. Saves compute costs on repeated dashboard refreshes.

Optimization in Practice

Here is an example of creating a partitioned and clustered table, followed by a query that benefits from partition pruning:

optimized_table.sql
-- Create a partitioned, clustered table
CREATE TABLE warehouse.fact_orders (
    order_id     BIGINT,
    customer_key INT,
    product_key  INT,
    order_date   DATE,
    amount       DECIMAL(12,2),
    quantity     INT
)
PARTITION BY (order_date)
CLUSTER BY (customer_key);

-- Query only scans relevant partitions
SELECT customer_key, SUM(amount)
FROM warehouse.fact_orders
WHERE order_date BETWEEN '2025-01-01' AND '2025-12-31'
GROUP BY customer_key;

The 80/20 Rule of Optimization

80% of performance issues come from 20% of queries. Profile your slowest and most expensive queries first. Partitioning alone can reduce scan time by 90% or more. Before adding complex optimizations, check the basics: Are you selecting only the columns you need? Are you filtering on partition keys? Are you avoiding SELECT *? Start simple, measure, then optimize.

07

Slowly Changing Dimensions (SCD)

How to handle dimension attributes that change over time -- preserving history without breaking your model.

The Problem of Changing Data

Dimension data is not static. Customers move to new cities, products get reclassified, employees change departments. When these changes happen, how do you update your dimension tables? Do you overwrite the old value and lose history? Or do you keep both the old and new values? The answer depends on your business requirements, and the solution is Slowly Changing Dimensions (SCD).

SCDs are a set of well-defined patterns for handling changes to dimension attributes. The three most common types address different trade-offs between simplicity, storage, and historical accuracy. Choosing the right SCD type for each attribute is a critical design decision that affects how analysts can query and interpret historical data.

SCD Types

Type 1 (Overwrite)

Simply update the existing value in place. No history is kept. The simplest approach. Use for corrections (fixing a typo in customer name) or attributes where history does not matter (e.g., updating a phone number).

Type 2 (Add Row)

Add a new row with the new value and mark the old row as inactive. Full history is preserved. The most common and most important SCD type. Use for any attribute where historical accuracy matters (e.g., customer city, product category).

Type 3 (Add Column)

Add a "previous" column alongside the current column. Limited history (only one prior change is tracked). Rarely used in practice. Useful when you only need to compare current versus previous value (e.g., previous_region vs current_region).

SCD Type 2 in Detail

Type 2 is the gold standard for tracking historical changes. Here is how it works step by step:

1

Change Detected

Customer moves from Karachi to Lahore. The ETL process detects that the city attribute has changed between the source system and the warehouse.

2

Expire Old Row

Old row: customer_key=101, city=Karachi, valid_from=2020-01-01, valid_to=2025-03-01, is_current=false. The valid_to date is set to the change date and is_current is flipped to false.

3

Insert New Row

New row: customer_key=102, city=Lahore, valid_from=2025-03-01, valid_to=9999-12-31, is_current=true. The new row receives a fresh surrogate key (the business key, customer_id, stays the same) and captures the current state with an open-ended valid_to date.

4

Historical Queries

To query what city a customer was in at any point in time, JOIN on the valid_from and valid_to date range. This is called a "point-in-time" join and is essential for accurate historical reporting.

5

Current Queries

For current state queries, simply filter on is_current=true. This gives you the latest version of every dimension row without needing to worry about date ranges.
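Steps 4 and 5 can be sketched against a tiny version of the dimension (sqlite3 stand-in; the keys, cities, and dates mirror the example above but are otherwise invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE dim_customer (
    customer_key INTEGER, customer_id TEXT, city TEXT,
    valid_from TEXT, valid_to TEXT, is_current INTEGER)""")
cur.executemany("INSERT INTO dim_customer VALUES (?,?,?,?,?,?)",
                [(101, "C-42", "Karachi", "2020-01-01", "2025-03-01", 0),
                 (102, "C-42", "Lahore", "2025-03-01", "9999-12-31", 1)])

# Point-in-time join: which city was the customer in on 2024-06-15?
cur.execute("""SELECT city FROM dim_customer
    WHERE customer_id = 'C-42'
      AND valid_from <= '2024-06-15' AND '2024-06-15' < valid_to""")
print(cur.fetchone())  # ('Karachi',)

# Current state: simply filter on the is_current flag
cur.execute("SELECT city FROM dim_customer WHERE customer_id = 'C-42' AND is_current = 1")
print(cur.fetchone())  # ('Lahore',)
```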

SCD Type 2 Implementation

Here is a two-step pattern that implements SCD Type 2 logic. The MERGE expires changed rows (its INSERT branch only fires for customers not yet in the dimension), and a follow-up INSERT adds the new versions for the customers just expired:

scd_type2_merge.sql
-- Step 1: expire the current row when a tracked attribute changes;
-- the INSERT branch handles brand-new customers only
MERGE INTO dim_customer AS target
USING staging_customer AS source
    ON target.customer_id = source.customer_id
   AND target.is_current = TRUE
WHEN MATCHED AND target.city != source.city THEN
    UPDATE SET valid_to = CURRENT_DATE, is_current = FALSE
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, city, valid_from, valid_to, is_current)
    VALUES (source.customer_id, source.name, source.city,
            CURRENT_DATE, '9999-12-31', TRUE);

-- Step 2: insert the new version for each row expired above
-- (without this, changed customers would be left with no current row)
INSERT INTO dim_customer (customer_id, name, city, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.city, CURRENT_DATE, '9999-12-31', TRUE
FROM staging_customer s
JOIN dim_customer d
    ON d.customer_id = s.customer_id
   AND d.valid_to = CURRENT_DATE
   AND d.is_current = FALSE;

SCD Type 2: Worked Example

Consider the dimension before a change, holding a single current row:

customer_key | customer_id | name      | city    | valid_from | valid_to   | is_current
1001         | C-42        | Ali Ahmed | Karachi | 2023-01-15 | 9999-12-31 | TRUE

Change detected: customer "Ali Ahmed" moves from Karachi to Lahore. SCD Type 2 processing expires row 1001 (valid_to is set to the change date and is_current flips to FALSE), then inserts a new row with a fresh surrogate key, city Lahore, and an open-ended valid_to of 9999-12-31.
08

Data Lakes vs Lakehouses

The evolution from warehouses to lakes to lakehouses -- and why lakehouses are the future.

Three Paradigms Compared

The data architecture landscape has evolved through three major paradigms. Each addresses different limitations of its predecessor, and understanding their trade-offs is essential for making sound architectural decisions.

Data Warehouse

Structured data only

Schema-on-write (define before load)

Expensive storage, fast queries

SQL-first approach

Strong governance and ACID transactions

Best for: BI and reporting

Data Lake

Any data format (structured + unstructured)

Schema-on-read (define when querying)

Cheap storage (object stores like S3)

File-based processing

Weak governance, risk of "data swamp"

Best for: ML and exploration

Data Lakehouse

Best of both worlds

Open formats (Parquet, Delta, Iceberg)

ACID transactions on object storage

SQL + ML in one platform

Strong governance + lineage

Best for: The future of data architecture

Architecture Comparison

Here is how data flows through each architecture pattern:

Data Warehouse: structured data only → schema gate (schema-on-write) → columnar storage → SQL engine. Only structured data enters. Strong schema enforcement. Optimized for SQL analytics.

Data Lake: all data (CSV, JSON, images, logs, video, ...) → object storage (S3 / ADLS / GCS) with no schema enforcement → various engines (Spark, Presto/Trino, ML/Python, Hive SQL). All data formats accepted. No schema gate. Risk of becoming a "data swamp" without governance.

Data Lakehouse: all data (structured + unstructured) → open table format (Delta / Iceberg / Hudi) providing ACID transactions and schema enforcement → unified engine (SQL/BI, ML/AI, streaming). Best of both worlds: open formats + ACID transactions + a unified SQL and ML engine.

Lakehouse Table Formats

The Lakehouse paradigm is enabled by open table formats that bring warehouse-like capabilities to data lake storage. These formats add ACID transactions, schema enforcement, and time travel to cheap object storage like S3, ADLS, or GCS:

Delta Lake

storage

Open-source storage layer by Databricks. ACID transactions on S3/ADLS. Time travel for querying historical snapshots. Schema enforcement and evolution.

Apache Iceberg

storage

Open table format for huge analytic datasets. Hidden partitioning eliminates manual partition management. Schema evolution without rewriting data. Broad engine support.

Apache Hudi

storage

Hadoop Upserts Deletes and Incrementals. Efficient upserts on data lakes. Incremental processing pipelines. Record-level insert, update, and delete operations.
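The mechanism behind features like time travel can be modeled in a toy way: the table format never mutates data in place; it appends immutable snapshots and lets readers pick a version. This Python sketch is purely conceptual (real formats store deltas and metadata files, not full copies of the table):

```python
# Toy model of versioned snapshots: every commit appends, nothing mutates
snapshots = []

def commit(rows):
    snapshots.append(list(rows))
    return len(snapshots) - 1  # the new version number

def read(version=None):
    # None = latest; an integer = historical read "as of" that version
    return snapshots[-1] if version is None else snapshots[version]

v0 = commit([("C-42", "Karachi")])
v1 = commit([("C-42", "Lahore")])
print(read())    # [('C-42', 'Lahore')]
print(read(v0))  # [('C-42', 'Karachi')]
```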

The Lakehouse Convergence

The industry is converging on the Lakehouse pattern. Databricks, Snowflake, Google, and Amazon are all building Lakehouse capabilities. Apache Iceberg has emerged as the leading open table format, with broad support across engines. If you are starting fresh, strongly consider a Lakehouse architecture -- you get the cost efficiency of a data lake with the reliability and performance of a data warehouse, all on open formats that prevent vendor lock-in.

09

Key Takeaways

The essential principles of data warehousing and modeling -- your foundation for building reliable analytics.

Summary

Data warehousing and dimensional modeling are foundational skills for any data engineer, analytics engineer, or BI professional. Here are the key principles to carry forward:

1. OLAP, Not OLTP

A Data Warehouse is optimized for analytical queries -- aggregations, joins, and historical analysis. It is fundamentally different from operational databases. Never run heavy analytics against your production OLTP systems.

2. Facts + Dimensions

Dimensional modeling (Fact Tables + Dimension Tables) is the foundation of good warehouse design. Facts hold the measurable events, dimensions provide the context. Define the grain first, then build everything around it.

3. Star Schema Wins

Star Schema beats Snowflake Schema for most use cases. Fewer joins mean faster queries, simpler SQL, and happier analysts. Cloud storage is cheap; compute is expensive. Optimize for fewer joins.

4. Layered Architecture

Warehouse layers (Staging, Integration, Presentation) keep data organized and reliable. Bronze for raw data, Silver for cleaned and conformed data, Gold for business-ready analytics. Each layer serves a different purpose and audience.

5. Cloud-Native Scaling

Cloud warehouses separate compute and storage -- scale each independently and pay only for what you use. Columnar storage and MPP architecture enable fast aggregations across billions of rows.

6. SCD Type 2 for History

SCD Type 2 is the gold standard for tracking history in dimensions. It preserves full change history with valid_from, valid_to, and is_current columns. Get this right and your historical reporting will always be accurate.

7. Lakehouses Are the Future

Lakehouses combine the best of data lakes (cheap storage, open formats, ML support) and data warehouses (ACID transactions, governance, fast SQL). With open table formats like Apache Iceberg and Delta Lake, you get warehouse reliability on lake economics.
