
Conversation

Contributor

@zhuanshenbsj1 zhuanshenbsj1 commented Dec 7, 2022

Change Logs

In some scenarios, such as offline clustering or online synchronous clustering running in parallel, a later clustering plan can complete before an earlier one. If the later plan belongs to the next partition and cleaning has already caught up with it, the incremental cleaning mode (getPartitionPathsForIncrementalCleaning) ignores the earlier plans, which eventually leads to duplicate data.

partitions: day=2022-12-06/hour=10/minute_per_10=0 , day=2022-12-06/hour=10/minute_per_10=1

Cleaning catches up with the clustering plan that belongs to partition day=2022-12-06/hour=10/minute_per_10=1.
log: (screenshot)
instants: (screenshot)

The files in the earlier clustering plan, which belongs to partition 2022-12-06/hour=10/minute_per_10=0, will never be cleaned, and they cause duplicate data once the instants are archived.
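To make the failure mode concrete, below is a minimal, self-contained sketch. It is not Hudi's actual CleanPlanner code; the timestamps, partition names, and boundary value are made up. It only models the idea that incremental cleaning selects partitions by requested instant time, so an earlier-requested plan that completes later can slip under the boundary and never be cleaned.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// A minimal, self-contained model (NOT Hudi's actual CleanPlanner code) of how
// incremental cleaning keyed off requested instant times can permanently skip
// the partition of an earlier-requested clustering plan that completed later.
public class IncrementalCleanMiss {

  // Timestamps and partition names are made up for illustration.
  record ReplaceCommit(String requestedTime, String completionTime, String partition) {}

  public static void main(String[] args) {
    // Plan A: requested first (earlier partition) but finished last.
    ReplaceCommit planA = new ReplaceCommit("20221206101000", "20221206102500",
        "day=2022-12-06/hour=10/minute_per_10=0");
    // Plan B: requested later (next partition) but finished first.
    ReplaceCommit planB = new ReplaceCommit("20221206101500", "20221206101800",
        "day=2022-12-06/hour=10/minute_per_10=1");

    // Suppose the previous clean ran right after plan B completed and recorded
    // plan B's requested time as the boundary it has already covered.
    String lastCleanBoundary = "20221206101500";

    // The next incremental clean only considers instants whose requested time
    // is newer than the boundary, regardless of when they actually completed.
    Set<String> partitionsToClean = List.of(planA, planB).stream()
        .filter(c -> c.requestedTime().compareTo(lastCleanBoundary) > 0)
        .map(ReplaceCommit::partition)
        .collect(Collectors.toSet());

    // Plan A's requested time is older than the boundary, so its partition
    // (minute_per_10=0) never shows up again and its replaced files are never
    // cleaned, even though plan A completed after the boundary was recorded.
    System.out.println("Partitions picked for cleaning: " + partitionsToClean); // -> []
  }
}
```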

Impact

Describe any public API or user-facing feature change or any performance impact.

Risk level (write none, low, medium or high below)

If medium or high, explain what verification was done to mitigate the risks.

Documentation Update

Describe any necessary documentation update if there is any new feature, config, or user-facing change

  • The config description must be updated if new configs are added or the default value of a config is changed
  • Any new feature or user-facing change requires updating the Hudi website. Please create a Jira ticket, attach the ticket number here and follow the instructions to make changes to the website.

Contributor's checklist

  • Read through contributor's guide
  • Change Logs and Impact were stated clearly
  • Adequate tests were added if applicable
  • CI passed

Collaborator

hudi-bot commented Dec 8, 2022

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-runs the last Azure build

@danny0405
Contributor

Somehow I got the point of why the dataset duplication occurs. It is not because of the out-of-order execution of clustering, but because the fs view with clustering instants relies on the replace commit metadata to composite the file snapshots with the replaced file handles. If we archive a clustering instant that has not been cleaned yet, the replace commit metadata is gone and the duplication happens.

One golden rule for clustering archival is: we can only archive a clustering commit once we are sure its replaced instants have been cleaned successfully. That may be very hard, I guess, because one clustering commit can replace multiple normal commits. A better way is to look up the archived timeline when there are clustering instants on the timeline.

WDYT, @nsivabalan :)
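As a rough illustration of the rule described above, the following hypothetical guard skips archiving a replacecommit until every file group it replaced has been cleaned. The types and helper methods (ArchivalCandidate, isFileGroupCleaned, replacedFileGroupIds) are assumptions for the sketch, not Hudi's actual archival API.

```java
import java.util.List;

// A rough sketch (not actual Hudi code) of the "golden rule" above: skip
// archiving a replacecommit until the file groups it replaced have been
// cleaned. All types and helpers here are hypothetical.
class ClusteringArchivalGuard {

  // Candidate instant considered for archiving.
  record ArchivalCandidate(String instantTime, String action, List<String> replacedFileGroupIds) {}

  boolean safeToArchive(ArchivalCandidate candidate) {
    if (!"replacecommit".equals(candidate.action())) {
      return true; // non-clustering instants follow the usual archival rules
    }
    // Archive the replacecommit only if every file group it replaced has
    // already been removed by the cleaner; otherwise the file-system view
    // still needs this metadata to filter out the replaced (duplicate) files.
    return candidate.replacedFileGroupIds().stream().allMatch(this::isFileGroupCleaned);
  }

  // Hypothetical lookup; in practice this would consult the clean metadata
  // or check whether the replaced base files still exist on storage.
  boolean isFileGroupCleaned(String fileGroupId) {
    return false; // placeholder
  }
}
```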

@danny0405 danny0405 added writer-core priority:blocker Production down; release blocker area:table-service Table services labels Dec 8, 2022
@zhuanshenbsj1
Contributor Author


How about adding a check before archiving: every instant to be archived must have a completion time earlier than the start time of the latest cleaning instant.
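A minimal sketch of that check might look like the following, assuming the completion time of the candidate instant and the start time of the latest clean can be read from the timeline. The class, method, and parameter names are illustrative only, not existing Hudi APIs.

```java
// A minimal sketch of the proposed check: only archive instants that completed
// before the latest clean started. Names here are illustrative assumptions.
class ArchiveBeforeCleanCheck {

  boolean canArchive(String instantCompletionTime, String latestCleanStartTime) {
    // Only archive instants that were already fully completed before the
    // latest clean started; anything newer may still carry replace metadata
    // that the cleaner has not acted on yet.
    return instantCompletionTime.compareTo(latestCleanStartTime) < 0;
  }
}
```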

