fix(rust/sedona-spatial-join): prevent filter pushdown past KNN joins #611
Merged
Kontinuation merged 5 commits on Feb 18, 2026
Conversation
KNN joins have different semantics than regular spatial joins — pushing filters to the object (build) side changes which objects are the k nearest neighbors, producing incorrect results. Add KnnJoinEarlyRewrite optimizer rule that converts KNN joins to SpatialJoinPlanNode extension nodes before DataFusion's PushDownFilter rule runs, since extension nodes naturally block filter pushdown via prevent_predicate_push_down_columns(). Rule ordering: MergeSpatialProjectionIntoJoin → KnnJoinEarlyRewrite → PushDownFilter → ... → SpatialJoinLogicalRewrite (for non-KNN joins). Closes apache#605
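A toy, self-contained sketch of why that pushdown is incorrect (made-up 1-D data and a simplified `knn` helper, not SedonaDB's implementation): filtering the object side first changes which objects are the k nearest neighbors.

```rust
// objects: (id, position); distance to the query point is |position - query|.
fn knn(objects: &[(i32, f64)], query: f64, k: usize) -> Vec<i32> {
    let mut sorted: Vec<_> = objects.to_vec();
    sorted.sort_by(|a, b| {
        (a.1 - query).abs().partial_cmp(&(b.1 - query).abs()).unwrap()
    });
    sorted.into_iter().take(k).map(|(id, _)| id).collect()
}

fn main() {
    let objects = [(1, 0.5), (2, 1.0), (3, 5.0)];
    let query = 0.0;
    let keep = |id: &i32| *id != 1; // a filter on the object side, e.g. id <> 1

    // Correct: find the k nearest neighbors first, then apply the filter.
    let correct: Vec<i32> = knn(&objects, query, 2).into_iter().filter(keep).collect();

    // Wrong: push the filter below the join, then find the k nearest neighbors.
    let filtered: Vec<(i32, f64)> =
        objects.iter().copied().filter(|(id, _)| keep(id)).collect();
    let pushed = knn(&filtered, query, 2);

    assert_eq!(correct, vec![2]);   // object 1 was a true neighbor, then filtered out
    assert_eq!(pushed, vec![2, 3]); // object 3 wrongly promoted into the top-2
    assert_ne!(correct, pushed);
    println!("correct = {correct:?}, pushed = {pushed:?}");
}
```

The filter may still be applied *after* the KNN join; it just must not shrink the candidate set the neighbors are chosen from.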
cde2b7f to d6bc0d7
…properly with extension node
Kontinuation
commented
Feb 17, 2026
Comment on lines +109 to +116
```rust
fn necessary_children_exprs(&self, _output_columns: &[usize]) -> Option<Vec<Vec<usize>>> {
    // Request all columns from both children. This ensures the optimizer
    // recurses into children while preserving all columns needed by the
    // join filter and output schema.
    let left_indices: Vec<usize> = (0..self.left.schema().fields().len()).collect();
    let right_indices: Vec<usize> = (0..self.right.schema().fields().len()).collect();
    Some(vec![left_indices, right_indices])
}
```
Member
Author
This is for working around a bug in DataFusion. I'll submit a patch later.
Contributor
Pull request overview
Adds an early logical optimizer rewrite to ensure KNN joins are converted into SpatialJoinPlanNode extension nodes before DataFusion’s PushDownFilter runs, preventing incorrect filter pushdown onto the KNN build side.
Changes:
- Insert `MergeSpatialFilterIntoJoin` + new `KnnJoinEarlyRewrite` before `PushDownFilter`, and run `SpatialJoinLogicalRewrite` after it for non-KNN joins.
- Remove `is_spatial_predicate` and update predicate-name collection tests accordingly.
- Add integration tests asserting object-side filter pushdown is blocked for KNN joins but allowed for non-KNN spatial joins; add `necessary_children_exprs` to the extension node.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| rust/sedona-spatial-join/src/planner/optimizer.rs | Adds KnnJoinEarlyRewrite, reorders optimizer rules around PushDownFilter, refactors join→extension rewrite helper. |
| rust/sedona-spatial-join/src/planner/logical_plan_node.rs | Adds necessary_children_exprs implementation for SpatialJoinPlanNode. |
| rust/sedona-spatial-join/src/planner/spatial_expr_utils.rs | Removes is_spatial_predicate and updates tests to validate collect_spatial_predicate_names. |
| rust/sedona-spatial-join/tests/spatial_join_integration.rs | Expands KNN filter correctness coverage and adds physical-plan assertions for filter pushdown behavior. |
e8c0c85 to b4381c3
github-merge-queue Bot pushed a commit to apache/datafusion that referenced this pull request on Mar 6, 2026
…nTableProvider::scan (#20393)

## Which issue does this PR close?

N/A (newly discovered bug). This was originally found in apache/sedona-db when working on a custom plan node: apache/sedona-db#611 (comment)

## Rationale for this change

`ForeignTableProvider::scan()` converts a `None` projection (meaning "return all columns") into an empty `RVec<usize>` before passing it across the FFI boundary. On the receiving side, `scan_fn_wrapper` always wraps the received `RVec` in `Some(...)`, passing `Some(&vec![])` to the inner `TableProvider::scan()`. This means "project zero columns" — the exact opposite of the intended "project all columns."

The root cause is that the `FFI_TableProvider::scan` function signature uses `RVec<usize>` for the projections parameter. Since `RVec<usize>` cannot represent `None`, the `None` vs. empty-vec distinction is lost at the FFI boundary.

## What changes are included in this PR?

Three coordinated changes in `datafusion/ffi/src/table_provider.rs`:

1. **FFI struct definition**: Changed the `scan` function pointer signature from `RVec<usize>` to `ROption<RVec<usize>>` for the projections parameter, matching how `limit` already uses `ROption<usize>` for the same `None`-vs-value distinction.
2. **Receiver side** (`scan_fn_wrapper`): Converts `ROption<RVec<usize>>` via `.into_option().map(...)` and passes `projections.as_ref()` to the inner provider, preserving `None` semantics.
3. **Sender side** (`ForeignTableProvider::scan`): Converts `Option<&Vec<usize>>` to `ROption<RVec<usize>>` via `.into()` instead of using `unwrap_or_default()`.

Plus a new unit test `test_scan_with_none_projection_returns_all_columns` that directly exercises the FFI round-trip with `projection=None` and verifies all 3 columns are returned. Also fixed the existing `test_aggregation` test to set `library_marker_id = mock_foreign_marker_id` so it actually exercises the FFI path instead of taking the local bypass.

## How are these changes tested?

- New test `test_scan_with_none_projection_returns_all_columns`: creates a 3-column MemTable, wraps it through FFI with the foreign marker set, calls `scan(None)`, and asserts 3 columns are returned (previously returned 0).

## Are these changes safe?

This is a **breaking FFI ABI change** to the `FFI_TableProvider::scan` function pointer signature. However:

- The `abi_stable` crate's `#[derive(StableAbi)]` generates layout checks at dylib load time, so mismatched dylibs will be caught at load rather than causing silent corruption.
- All existing providers construct `FFI_TableProvider` via `::new()` or `::new_with_ffi_codec()`, which internally wire up `scan_fn_wrapper` — nobody constructs the `scan` function pointer manually.
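The projection bug can be reproduced outside the FFI layer with plain `Option`/`Vec` stand-ins (hypothetical `across_lossy_boundary` and `project` helpers here, not the real DataFusion API): a representation that cannot express `None` silently turns "all columns" into "zero columns".

```rust
// Sender: the wire type (like RVec<usize>) cannot represent None, so None
// becomes an empty vec. Receiver: always rewraps in Some(...), so the None
// is never recovered on the other side.
fn across_lossy_boundary(projection: Option<Vec<usize>>) -> Option<Vec<usize>> {
    let wire: Vec<usize> = projection.unwrap_or_default();
    Some(wire)
}

// None means "all columns"; Some(indices) selects a subset.
fn project(columns: &[&str], projection: Option<&Vec<usize>>) -> Vec<String> {
    match projection {
        None => columns.iter().map(|c| c.to_string()).collect(),
        Some(idx) => idx.iter().map(|&i| columns[i].to_string()).collect(),
    }
}

fn main() {
    let columns = ["a", "b", "c"];
    let received = across_lossy_boundary(None);
    // "Return all columns" arrives as "return zero columns".
    assert_eq!(project(&columns, received.as_ref()), Vec::<String>::new());
    // Preserving None end-to-end (as ROption<RVec<usize>> does) keeps the intent:
    assert_eq!(project(&columns, None), vec!["a", "b", "c"]);
    println!("lossy boundary returned {} columns", project(&columns, received.as_ref()).len());
}
```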
alamb pushed a commit to alamb/datafusion that referenced this pull request on Mar 12, 2026
…nTableProvider::scan (apache#20393)
de-bgunter pushed a commit to de-bgunter/datafusion that referenced this pull request on Mar 24, 2026
…nTableProvider::scan (apache#20393)
Summary
KNN joins have different semantics than regular spatial joins — pushing filters to the object (build) side changes which objects are the k nearest neighbors, producing incorrect results. DataFusion's builtin `PushDownFilter` optimizer rule doesn't know this and incorrectly pushes filters through KNN joins.

This PR adds a `KnnJoinEarlyRewrite` optimizer rule that converts KNN joins to `SpatialJoinPlanNode` extension nodes before DataFusion's `PushDownFilter` rule runs. Extension nodes naturally block filter pushdown via `prevent_predicate_push_down_columns()` returning all columns.

Changes
- `KnnJoinEarlyRewrite` optimizer rule — handles two patterns:
  - `Join(filter=ST_KNN(...))` — when the ON clause has only the spatial predicate
  - `Filter(ST_KNN(...), Join(on=[...]))` — when the ON clause also has equi-join conditions (DataFusion's SQL planner separates these)
- `MergeSpatialProjectionIntoJoin` and `KnnJoinEarlyRewrite` are inserted before `PushDownFilter`, while `SpatialJoinLogicalRewrite` (for non-KNN joins) remains after it, so non-KNN joins still benefit from filter pushdown
- `SpatialJoinLogicalRewrite` — skips KNN joins (already handled by the early rewrite)

Rule ordering

`MergeSpatialProjectionIntoJoin` → `KnnJoinEarlyRewrite` → `PushDownFilter` → ... → `SpatialJoinLogicalRewrite` (for non-KNN joins)
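The two plan shapes the rewrite must recognize can be sketched with simplified stand-in types (a hypothetical `Plan`/`Expr` enum pair, not DataFusion's `LogicalPlan` API):

```rust
#[derive(Debug, Clone)]
enum Plan {
    Scan(&'static str),
    Join { filter: Option<Expr>, left: Box<Plan>, right: Box<Plan> },
    Filter { predicate: Expr, input: Box<Plan> },
}

#[derive(Debug, Clone)]
enum Expr {
    Knn,                       // stands in for ST_KNN(L.geom, R.geom, k)
    And(Box<Expr>, Box<Expr>), // conjunction
    Other(&'static str),       // any non-spatial predicate, e.g. L.id = R.id
}

fn contains_knn(e: &Expr) -> bool {
    match e {
        Expr::Knn => true,
        Expr::And(a, b) => contains_knn(a) || contains_knn(b),
        Expr::Other(_) => false,
    }
}

/// True if this node is one of the two shapes the early rewrite
/// would convert into an extension node.
fn is_knn_shape(p: &Plan) -> bool {
    match p {
        // Pattern 1: Join(filter = ST_KNN(...))
        Plan::Join { filter: Some(f), .. } if contains_knn(f) => true,
        // Pattern 2: Filter(ST_KNN(...), Join(on = [...]))
        Plan::Filter { predicate, input } => {
            contains_knn(predicate) && matches!(**input, Plan::Join { .. })
        }
        _ => false,
    }
}

fn main() {
    let scan = |n| Box::new(Plan::Scan(n));
    let p1 = Plan::Join { filter: Some(Expr::Knn), left: scan("l"), right: scan("r") };
    let p2 = Plan::Filter {
        predicate: Expr::Knn,
        input: Box::new(Plan::Join { filter: None, left: scan("l"), right: scan("r") }),
    };
    assert!(is_knn_shape(&p1));
    assert!(is_knn_shape(&p2));
    println!("both KNN shapes detected");
}
```

In the real rule the matched node is replaced by a `SpatialJoinPlanNode` extension node rather than merely detected; the sketch only shows the shape matching.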
Follow-ups
- `ST_KNN` is required to appear first in the chain of AND expressions. For instance, `ST_KNN(L.geom, R.geom, 5) AND L.id = R.id` has the same semantics as `L.id = R.id AND ST_KNN(L.geom, R.geom, 5)`. This seems to be unnatural. An optimization rule does not seem to be a good place to enforce this, so we leave it to future patches that work on the SQL parser and ASTs.

TODO
- The `prevent_predicate_push_down_columns` method seems to do the trick. I'll experiment with it. Hopefully we can implement query-side filter pushdown easily. UPDATE: No. It is a terrible idea. There's no shortcut. We have to implement the optimization rule ourselves.
Closes #605