Wordsmith introduction section #2
Merged

hhhizzz merged 2 commits into hhhizzz:lm-pipeline-blog (Dec 7, 2025)
Conversation
Preview URL: https://alamb.github.io/arrow-site

If the preview URL doesn't work, you may have forgotten to configure your fork repository for preview.
alamb commented Dec 5, 2025
- This article dives into the decisions and pitfalls of Late Materialization in `arrow-rs` (the engine powering DataFusion). We'll see how a humble file reader has evolved into something with the complex logic of a query engine—effectively becoming a **tiny query engine** in its own right.
+ This article dives into the decisions and pitfalls of implementing Late Materialization in the [Apache Parquet] reader from [`arrow-rs`] (the reader powering [Apache DataFusion] among other projects). We'll see how a seemingly humble file reader requires complex logic to evaluate predicates—effectively becoming a **tiny query engine** in its own right.
Author
I added some links and reworded this slightly to provide broader context
  ## 1. Why Late Materialization?

- Columnar reads are a constant battle between **I/O bandwidth** and **CPU decode costs**. While skipping data is generally good, the act of skipping itself carries a computational cost. The goal in `arrow-rs` is **pipeline-style late materialization**: evaluate predicates first, then access projected columns, keeping the pipeline tight at the page level to ensure minimal reads and minimal decode work.
+ Columnar reads are a constant battle between **I/O bandwidth** and **CPU decode costs**. While skipping data is generally good, the act of skipping itself carries a computational cost. The goal of the Parquet reader in `arrow-rs` is **pipeline-style late materialization**: evaluate predicates first, then access projected columns. For predicates that filter many rows, materializing after evaluation minimizes reads and decode work.
Author
I tried to make the benefits a bit clearer
- 1. Read column `A`, build a `RowSelection` (a sparse mask), and obtain the initial set of surviving rows.
+ 1. Read column `A` and evaluate `A > 10` to build a `RowSelection` (a sparse mask) representing the initial set of surviving rows.
  2. Use that `RowSelection` to read column `B`, decoding and filtering on the fly to make the selection even sparser.
Author
I added the predicate evaluation explicitly into this example as I think that was easier to follow
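The two steps above can be sketched in plain Rust. Note this is a simplified illustration, not the actual `arrow-rs` API: the `RowSelection` struct below is a hypothetical stand-in (the real one tracks ranges of rows to skip/select), and the predicates `A > 10` and `B % 2 == 0` are just example filters.

```rust
// Illustrative sketch of pipeline-style late materialization over two
// in-memory "columns". `RowSelection` here is a simplified stand-in for
// the concept in the parquet crate, not its real definition.

/// A sparse mask: the indices of rows that survive filtering so far.
#[derive(Debug, PartialEq)]
struct RowSelection {
    selected: Vec<usize>,
}

impl RowSelection {
    /// Step 1: scan a full column and keep rows passing the predicate.
    fn from_column<T, P: Fn(&T) -> bool>(col: &[T], pred: P) -> Self {
        let selected = col
            .iter()
            .enumerate()
            .filter(|&(_, v)| pred(v))
            .map(|(i, _)| i)
            .collect();
        RowSelection { selected }
    }

    /// Step 2: decode only the already-selected rows of another column,
    /// filtering on the fly so the selection gets sparser.
    fn refine<T, P: Fn(&T) -> bool>(&self, col: &[T], pred: P) -> Self {
        let selected = self
            .selected
            .iter()
            .copied()
            .filter(|&i| pred(&col[i]))
            .collect();
        RowSelection { selected }
    }
}

fn main() {
    let col_a = vec![5i64, 12, 30, 7, 42];
    let col_b = vec![1i64, 2, 3, 4, 6];

    // 1. Evaluate `A > 10` to build the initial selection (rows 1, 2, 4).
    let sel = RowSelection::from_column(&col_a, |a| *a > 10);

    // 2. Use it to read column B, filtering `B % 2 == 0` on the fly
    //    without decoding the unselected rows (leaves rows 1, 4).
    let sel = sel.refine(&col_b, |b| *b % 2 == 0);

    // Only now materialize the projected column for surviving rows.
    let out: Vec<i64> = sel.selected.iter().map(|&i| col_b[i]).collect();
    println!("{:?}", out); // prints [2, 6]
}
```

The key property being illustrated: column `B` is never fully decoded. Only the rows that survived the predicate on `A` are touched, which is what makes late materialization pay off when predicates are selective.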
Here are some proposed "wordsmithing" changes to the introduction section of the blog post.

I'll comment inline with the rationale.