
Conversation

@sryza sryza commented Oct 6, 2025

What changes were proposed in this pull request?

  • In DefineDataset, pulls out all the properties that are table-specific into their own sub-message.
  • In DefineFlow, pulls out the relation property into its own sub-message. (Both changes are sketched below.)
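
For illustration, here's a rough sketch of the resulting shape. The `oneof details` / `WriteRelationFlowDetails` names appear in the review discussion further down; the `TableDetails` message name and the field names/numbers used here are only assumptions, not the actual definitions:

```
message DefineDataset {
  // ... properties shared by tables and views (name, comment, etc.) ...

  // Table-specific properties grouped into their own sub-message
  // (message/field name and number are illustrative assumptions).
  optional TableDetails table_details = 10;
}

message DefineFlow {
  // ... other flow properties ...
  optional SourceCodeLocation source_code_location = 6;

  // The relation the flow writes to its target table, pulled into its own
  // sub-message so other kinds of flows can be added to this oneof later.
  oneof details {
    WriteRelationFlowDetails relation_flow_details = 7;
  }
}
```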

Why are the changes needed?

The DefineDataset and DefineFlow Spark Connect protos are moshpits of properties that could be refactored into a more coherent structure:

  • In DefineDataset, there is a set of properties that are only relevant to tables (not views).
  • In DefineFlow, the relation property refers to flows that write the results of a relation to a target table. In the future, we may want to introduce additional flow types that mutate the target table in different ways.

Does this PR introduce any user-facing change?

No, these protos haven't been shipped yet.

How was this patch tested?

Updated existing tests.

Was this patch authored or co-authored using generative AI tooling?

@SCHJonathan SCHJonathan (Contributor) left a comment

mostly LGTM with one minor comment

  optional SourceCodeLocation source_code_location = 6;

  oneof details {
    WriteRelationFlowDetails relation_flow_details = 7;

@SCHJonathan (Contributor) commented on these lines:

Open to discussion: wdyt if we define an ImplicitFlowDetails that only has relation, and a StandaloneFlowDetails that has flow_name, relation, and potentially an optional bool once in the future?

@sryza (Contributor Author) replied:

The way I had been thinking about it, the "details" are conceptually independent from whether the flow is implicitly specified as part of the dataset definition. The details describe the computation that happens during flow execution.

Even though our current APIs don't permit it, if we add new flow types in the future, I think we might want the option to specify them either...

  • As part of the statement that defines the table
  • As a standalone flow

E.g.

CREATE STREAMING TABLE t1
AS MERGE ...

or

CREATE STREAMING TABLE t1

CREATE FLOW f1
AS MERGE INTO t1 ...

And it would be good to only need to add one MergeIntoFlowDetails message in this case.
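
To illustrate (a hypothetical sketch; MergeIntoFlowDetails is not part of this PR), keeping the details orthogonal to how the flow is declared means a future MERGE flow would add a single entry to the existing oneof, reachable from both the table-definition and standalone-flow paths:

```
oneof details {
  WriteRelationFlowDetails relation_flow_details = 7;
  // Hypothetical future addition, shared by implicit and standalone declarations
  MergeIntoFlowDetails merge_into_flow_details = 8;
}
```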

@SCHJonathan (Contributor) replied:

> The details describe the computation that happens during flow execution.

Agreed, and the direction I’m proposing is aligned with that philosophy. The idea is to categorize flow definitions with a finer granularity while keeping the schema extensible for future flow types (e.g., MergeIntoFlow). This approach preserves the semantic separation between implicit and standalone flows while allowing us to evolve the protocol incrementally as new flow behaviors emerge.

message DefineFlow {
  ...

  oneof details {
    // Can be planned as a CompleteFlow, StreamingFlow, ExternalSinkFlow, or ForeachBatchSinkFlow,
    // depending on the dataset it writes to
    ImplicitFlow implicit_flow = 1;
    // Can either be an AppendFlow or an UpdateFlow
    StandaloneFlow standalone_flow = 2;
    AutoCdcFlow auto_cdc_flow = 3;
    AutoCdcFromSnapshotFlow auto_cdc_from_snapshot_flow = 4;
    google.protobuf.Any extension = 999;
  }

  // An implicit flow only has the unresolved logical plan; it doesn't have a
  // user-specified flow name, since the name is inferred from the dataset it writes to.
  message ImplicitFlow {
    optional spark.connect.Relation relation = 1;
  }

  message StandaloneFlow {
    optional spark.connect.Relation relation = 1;
    // User-specified flow name for a standalone flow
    optional string name = 2;
    // Whether it is a one-time operation
    optional bool once = 3;
    // Can either be Append or Update
    optional string output_mode = 4;
  }
}

@sryza sryza (Contributor Author) commented Oct 7, 2025

What I mean is that I want to avoid something like...

oneof details {
  StandaloneWriteRelationFlowDetails standalone_write_relation = 1;
  ImplicitWriteRelationFlowDetails implicit_write_relation = 2;
  StandaloneMergeIntoFlowDetails standalone_merge_into = 3;
  ImplicitMergeIntoFlowDetails implicit_merge_into = 4;
  ...
}

@dongjoon-hyun dongjoon-hyun (Member) left a comment

+1, LGTM from my side. Thank you, @sryza and all.

@dongjoon-hyun (Member)

Could you resolve the conflicts, @sryza?

@sryza sryza force-pushed the table-flow-details branch from 9cef21b to a3c61f7 on October 8, 2025 at 17:03
@dongjoon-hyun (Member)

Thank you for rebasing.

@apache apache deleted a comment from SCHJonathan Oct 8, 2025
@apache apache deleted a comment from SCHJonathan Oct 8, 2025
@sryza sryza force-pushed the table-flow-details branch from a3c61f7 to db20995 on October 8, 2025 at 20:09
@dongjoon-hyun dongjoon-hyun (Member) commented Oct 8, 2025

Oh, @sryza. In general, we don't delete ASF community assets on an author's behalf, even though we have super-power permissions in GitHub. It's like the ASF mailing list, which we use as an archive for community communication. If something is inappropriate for the community, it's better to ask the authors to delete those comments themselves.

[Screenshot attached: 2025-10-08 at 13:45:26]

@sryza sryza (Contributor Author) commented Oct 8, 2025

@dongjoon-hyun my bad! In this case, I talked to @SCHJonathan before deleting. In the future, I'll leave it to others to manage their own comments and avoid manually doing anything like this myself

@dongjoon-hyun (Member)

Ya, I figured that was exactly the case. No problem at all. I just wanted to leave a note to clarify, because those kinds of records can be misleading in the community. 😄

@sryza sryza (Contributor Author) commented Oct 8, 2025

Fixed the conflicts – now merging to master

@sryza sryza closed this in 22e24df Oct 8, 2025
@dongjoon-hyun (Member)

Thank you!

dongjoon-hyun added a commit to apache/spark-connect-swift that referenced this pull request Oct 27, 2025
…th `4.1.0-preview3` RC1

### What changes were proposed in this pull request?

This PR aims to update Spark Connect-generated Swift source code with Apache Spark `4.1.0-preview3` RC1.

### Why are the changes needed?

There are many changes between Apache Spark 4.1.0-preview2 and preview3.

- apache/spark#52685
- apache/spark#52613
- apache/spark#52553
- apache/spark#52532
- apache/spark#52517
- apache/spark#52514
- apache/spark#52487
- apache/spark#52328
- apache/spark#52200
- apache/spark#52154
- apache/spark#51344

To use the latest bug fixes and new messages when developing new features for `4.1.0-preview3`.

```
$ git clone -b v4.1.0-preview3 https://github.com/apache/spark.git
$ cd spark/sql/connect/common/src/main/protobuf/
$ protoc --swift_out=. spark/connect/*.proto
$ protoc --grpc-swift_out=. spark/connect/*.proto

# Remove empty gRPC files
$ cd spark/connect
$ grep 'This file contained no services' * | awk -F: '{print $1}' | xargs rm
```

### Does this PR introduce _any_ user-facing change?

Pass the CIs.

### How was this patch tested?

Pass the CIs. I manually tested with `Apache Spark 4.1.0-preview3` (with the two SDP ignored tests).

```
$ swift test --no-parallel
...
✔ Test run with 203 tests in 21 suites passed after 19.088 seconds.
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #252 from dongjoon-hyun/SPARK-54043.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request Nov 25, 2025
…oup related properties and future-proof

### What changes were proposed in this pull request?

- In `DefineDataset`, pulls out all the properties that are table-specific into their own sub-message.
- In `DefineFlow`, pulls out the `relation` property into its own sub-message.

### Why are the changes needed?
The DefineDataset and DefineFlow Spark Connect protos are moshpits of properties that could be refactored into a more coherent structure:

- In `DefineDataset`, there is a set of properties that are only relevant to tables (not views).
- In `DefineFlow`, the `relation` property refers to flows that write the results of a relation to a target table. In the future, we may want to introduce additional flow types that mutate the target table in different ways.

### Does this PR introduce _any_ user-facing change?

No, these protos haven't been shipped yet.

### How was this patch tested?

Updated existing tests.

### Was this patch authored or co-authored using generative AI tooling?

Closes apache#52532 from sryza/table-flow-details.

Lead-authored-by: Sandy Ryza <[email protected]>
Co-authored-by: Sandy Ryza <[email protected]>
Signed-off-by: Sandy Ryza <[email protected]>