[HUDI-3997] Add 0.11.0 release notes #5466
Conversation
xushiyan commented on Apr 29, 2022

vinothchandar left a comment
Looks great to read it all together
nsivabalan left a comment
> In 0.11.0, a new `hudi-utilities-slim-bundle` is added to exclude dependencies that could cause conflicts and
> compatibility issues with other frameworks such as Spark.
>
> - `hudi-utilities-slim-bundle` works with Spark 3.1 and 2.4.
@xushiyan we should explicitly call out that using slim bundle requires also adding spark-bundle into the mix
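For illustration, a minimal sketch of what pairing the two bundles on the Spark classpath could look like; the jar paths and exact artifact names below are placeholders, not taken from the release notes:

```scala
// Sketch: when using hudi-utilities-slim-bundle, the matching hudi-spark-bundle must also
// be on the classpath. Jar paths and artifact names here are placeholders (assumed).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hudi-slim-bundle-example")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.jars",
    "/path/to/hudi-utilities-slim-bundle_2.12-0.11.0.jar," +
    "/path/to/hudi-spark3.1-bundle_2.12-0.11.0.jar")
  .getOrCreate()
```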
> ### Write Commit Callback for Pulsar
>
> Hudi users can use `org.apache.hudi.callback.HoodieWriteCommitCallback` to invoke callback function upon successful
> commits. In 0.11.0, we add`HoodieWriteCommitPulsarCallback` in addition to the existing HTTP callback and Kafka
nit: missing space
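For illustration, a hedged sketch of wiring the new callback into a Spark write. The generic callback switches (`hoodie.write.commit.callback.on` and `hoodie.write.commit.callback.class`) are standard Hudi write configs; the callback's package and the Pulsar-specific keys below are assumptions for illustration only:

```scala
import org.apache.spark.sql.SaveMode
import spark.implicits._

// Toy DataFrame standing in for real data.
val df = Seq(("id-1", "2022-04-29", "part-a", 1L)).toDF("uuid", "date", "partitionpath", "ts")

df.write.format("hudi")
  .option("hoodie.table.name", "callback_demo")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
  .option("hoodie.datasource.write.precombine.field", "ts")
  // Turn on write-commit callbacks and point them at the Pulsar implementation.
  .option("hoodie.write.commit.callback.on", "true")
  .option("hoodie.write.commit.callback.class",
    "org.apache.hudi.utilities.callback.pulsar.HoodieWriteCommitPulsarCallback") // package is an assumption
  // Pulsar-specific keys below are assumptions, not confirmed by the release notes.
  .option("hoodie.write.commit.callback.pulsar.broker.service.url", "pulsar://localhost:6650")
  .option("hoodie.write.commit.callback.pulsar.topic", "hudi-commits")
  .mode(SaveMode.Append)
  .save("/tmp/hudi/callback_demo")
```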
> with [BigQuery integration](/docs/next/gcp_bigquery).
> - For Spark readers that rely on extracting physical partition path,
>   set `hoodie.datasource.read.extract.partition.values.from.path=true` to stay compatible with existing behaviors.
> - Default index type for Spark was change from `BLOOM`
typo: changed
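For illustration, a short sketch of the two compatibility knobs mentioned above; the read option is quoted directly from the notes, while `hoodie.index.type` is the standard key one would use to pin the index back to `BLOOM`, shown commented out as an assumption about usage:

```scala
// Spark reader: keep extracting partition values from the physical path, as before 0.11.0.
val readDf = spark.read.format("hudi")
  .option("hoodie.datasource.read.extract.partition.values.from.path", "true")
  .load("/tmp/hudi/callback_demo")

// Spark writer: pin the index type explicitly if the new default is not desired
// (key name is the standard Hudi config; BLOOM was the previous default per the notes).
//   .option("hoodie.index.type", "BLOOM")
```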