[HUDI-4083] Fix the flink application fails to start due to uncompleted archiving deletion #5556
Conversation
@danny0405 : please rope in anyone who could assist in reviewing. For now, assigned the PR to you.
@hudi-bot run azure
LOG.info("Deleting instants " + archivedInstants);
boolean success = true;
List<String> instantFiles = archivedInstants.stream().map(archivedInstant ->
    new Path(metaClient.getMetaPath(), archivedInstant.getFileName())
The archivedInstants seem to be sorted already, because the instants from the timeline are already sorted by instant timestamp and state. And while in
hudi/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/HoodieTimelineArchiver.java
Line 465 in b10ca7e
Map<Pair<String, String>, List<HoodieInstant>> groupByTsAction = rawActiveTimeline.getInstants()
we group by the instant pairs, that should still preserve the sequence within a single instant:
each instant is sorted by state (requested -> inflight -> complete),
so in theory the files are also cleaned in this sequence.
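The requested -> inflight -> complete ordering described above can be sketched with a small comparator. This is not Hudi's actual code; the `stateRank` and `timestamp` helpers are hypothetical simplifications that only look at filename suffixes:

```java
import java.util.*;

public class InstantOrdering {
    // Hypothetical rank mirroring the requested -> inflight -> completed state order.
    static int stateRank(String fileName) {
        if (fileName.endsWith(".requested")) return 0;
        if (fileName.endsWith(".inflight")) return 1;
        return 2; // completed, e.g. "20220513170202566.commit"
    }

    // Instant timestamp is the filename prefix before the first dot.
    static String timestamp(String fileName) {
        int dot = fileName.indexOf('.');
        return dot < 0 ? fileName : fileName.substring(0, dot);
    }

    // Sort by timestamp first, then by state, like the active timeline does.
    public static List<String> sortLikeTimeline(List<String> files) {
        List<String> sorted = new ArrayList<>(files);
        sorted.sort(Comparator.comparing(InstantOrdering::timestamp)
                              .thenComparingInt(InstantOrdering::stateRank));
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sortLikeTimeline(Arrays.asList(
            "20220513170202566.commit",
            "20220513170152006.inflight",
            "20220513170152006.commit.requested",
            "20220513170202566.commit.requested")));
    }
}
```

With this ordering, each instant's completed file sorts after its requested and inflight files, which is why sequential deletion would be safe.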
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes, that's right. But I'm not sure deleteFilesParallelize deletes in order. It uses FSUtils.parallelizeFilesProcess to delete the files, and that depends on the implementation in each engine.
Lines 145 to 147 in b10ca7e
public <I, K, V> Map<K, V> mapToPair(List<I> data, SerializablePairFunction<I, K, V> func, Integer parallelism) {
  return data.stream().parallel().map(throwingMapToPairWrapper(func)).collect(Collectors.toMap(Pair::getLeft, Pair::getRight));
}
For example, in the flink engine context it uses a parallel stream to delete the files.
This is my understanding, please ping me if I am wrong.
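The concern above can be illustrated in plain Java, independent of Hudi: `data.stream().parallel()` gives no guarantee about the order in which elements are processed, only that all of them are. A minimal sketch (the `processOrder` helper is invented for illustration):

```java
import java.util.*;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelDeleteOrder {
    // Records the order in which "deletions" are actually executed
    // by a parallel stream; this order is not guaranteed to match the input.
    public static List<Integer> processOrder(List<Integer> files) {
        ConcurrentLinkedQueue<Integer> executed = new ConcurrentLinkedQueue<>();
        files.stream().parallel().forEach(executed::add); // no ordering guarantee
        return new ArrayList<>(executed);
    }

    public static void main(String[] args) {
        List<Integer> files = IntStream.range(0, 100).boxed().collect(Collectors.toList());
        List<Integer> order = processOrder(files);
        // Every file is processed exactly once, but possibly out of order.
        System.out.println(order.size() + " files processed");
    }
}
```

So even if archivedInstants arrives sorted, a parallel deletion can interleave the requested/inflight/commit files of one instant arbitrarily.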
Yes, the parallel deletion may put the files out of order.
But there is still one point of confusion to resolve here:
the archiver only archives old/completed instants on the timeline, while the flink writer only checks the latest inflight instant, which should not have been archived yet. So why is the flink client affected here?
I had this happen recently. The Flink application's JobManager was force-killed because of memory pressure,
and it left behind these instant files:
- 20220513170152006.commit.requested
- 20220513170152006.inflight
- 20220513170202566.commit
- 20220513170202566.commit.requested
- 20220513170202566.inflight
In this case, the Flink application will execute this code:
hudi/hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/meta/CkpMetadata.java
Lines 97 to 102 in 7fb436d
public void bootstrap(HoodieTableMetaClient metaClient) throws IOException {
  fs.delete(path, true);
  fs.mkdirs(path);
  metaClient.getActiveTimeline().getCommitsTimeline().filterPendingExcludingCompaction()
      .lastInstant().ifPresent(instant -> startInstant(instant.getTimestamp()));
}
It will scan for the last pending instant and create an in-flight file under .hoodie/.aux/ckp_meta (20220513170152006 in this case).
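The failure mode described above can be sketched with a simplified pending-instant lookup. This is not Hudi's real timeline logic (which covers more actions and states); `lastPendingInstant` is a hypothetical helper that treats an instant as pending when it has a requested or inflight marker but no completed `.commit` file:

```java
import java.util.*;
import java.util.stream.Collectors;

public class PendingInstants {
    // Simplified: an instant is "pending" if it has a .requested or .inflight
    // marker file but no completed ".commit" file for the same timestamp.
    public static Optional<String> lastPendingInstant(List<String> metaFiles) {
        Set<String> completed = metaFiles.stream()
            .filter(f -> f.endsWith(".commit"))
            .map(f -> f.substring(0, f.indexOf('.')))
            .collect(Collectors.toSet());
        return metaFiles.stream()
            .filter(f -> f.endsWith(".requested") || f.endsWith(".inflight"))
            .map(f -> f.substring(0, f.indexOf('.')))
            .filter(ts -> !completed.contains(ts))
            .max(Comparator.naturalOrder());
    }

    public static void main(String[] args) {
        // The leftover files from the interrupted archiver:
        List<String> leftovers = Arrays.asList(
            "20220513170152006.commit.requested",
            "20220513170152006.inflight",
            "20220513170202566.commit",
            "20220513170202566.commit.requested",
            "20220513170202566.inflight");
        // 20220513170152006 lost only its .commit file, so bootstrap
        // wrongly treats it as the latest pending instant.
        System.out.println(lastPendingInstant(leftovers)); // Optional[20220513170152006]
    }
}
```

Because the archiver deleted the `.commit` file of 20220513170152006 before its marker files, the older instant looks pending again even though a newer completed commit exists.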
The final result is an exception:
Caused by: org.apache.hudi.exception.HoodieRollbackException: Found commits after time :20220513170152006, please rollback greater commits first
Lines 179 to 183 in 7fb436d
if (!HoodieHeartbeatClient.heartbeatExists(table.getMetaClient().getFs(),
    config.getBasePath(), instantTimeToRollback)) {
  throw new HoodieRollbackException(
      "Found commits after time :" + instantTimeToRollback + ", please rollback greater commits first");
}
Cool, the logic to fetch the pending instant is problematic, but the error trace you pasted mainly comes from how we collect the instants to roll back in:
hudi/hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java
Line 1084 in 7fb436d
private HoodieTimeline getInflightTimelineExcludeCompactionAndClustering(HoodieTableMetaClient metaClient) {
We do not consider that there may be inflight instants on the timeline that were previously completed but became inflight again because of the archiving, cc @nsivabalan :)
I have a patch to fix the pending instant time fetch but it does not solve your problem :)
fix.patch.zip
@nsivabalan :)
Close because HUDI-4145 already resolved it.
I will close this PR because other PRs have solved similar problems.
What is the purpose of the pull request
Suppose a flink application crashes while archiving; it leaves behind some instant files that should have been deleted.
If the commit file is deleted but the in-flight file is left, then when the flink application restarts, it will scan for the last pending instant, pick up the stale one, and throw an exception.
So we need to delete the non-commit files first, and then delete the commit file.
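The ordering rule above can be sketched as a deletion-order helper. This is an illustrative simplification, not the PR's actual patch; `deletionOrder` is a hypothetical helper that simply pushes the completed `.commit` file to the end:

```java
import java.util.*;

public class ArchiveDeleteOrder {
    // Order the files of an archived instant so the completed ".commit"
    // file is removed last. If the process crashes midway, the survivors
    // never look like a fresh pending instant (marker without its commit).
    public static List<String> deletionOrder(List<String> instantFiles) {
        List<String> ordered = new ArrayList<>(instantFiles);
        // Stable sort: non-commit files keep their relative order, commit files go last.
        ordered.sort(Comparator.comparingInt(f -> f.endsWith(".commit") ? 1 : 0));
        return ordered;
    }

    public static void main(String[] args) {
        System.out.println(deletionOrder(Arrays.asList(
            "20220513170152006.commit",
            "20220513170152006.commit.requested",
            "20220513170152006.inflight")));
    }
}
```

Deleting in this order makes the crash window safe: a partially deleted instant is either fully pending (all markers still present) or fully gone, never a commit-less ghost.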
Verify this pull request
(Please pick either of the following options)
This pull request is a trivial rework / code cleanup without any test coverage.
(or)
This pull request is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Committer checklist
Has a corresponding JIRA in PR title & commit
Commit message is descriptive of the change
CI is green
Necessary doc changes done or have another open PR
For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.