
Conversation

@wxplovecc (Contributor) commented Mar 11, 2022


What is the purpose of the pull request

This pull request keeps the deduplicateRecords method in FlinkWriteHelper from returning records out of order.


Verify this pull request

(Please pick one of the following options)

  • This pull request is a trivial rework / code cleanup without any test coverage.
  • This pull request is already covered by existing tests, such as (please describe tests).
  • This change added tests and can be verified as follows (for example: added integration tests for end-to-end, added HoodieClientWriteTest to verify the change, or manually verified the change by running a job locally).

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

if (hasInsert) {
  recordList.get(0).getCurrentLocation().setInstantTime("I");
}
return recordList;
Contributor

At line 114 we already reset the location, so each record list under the same key should keep the same instant time type after reduction; why is this set needed?

Member

I wrote a test locally and found that the order of the list changed after reduction: <id1, id2> became <id2, id1> somehow, so it's not related to a single record.
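The reordering can be reproduced with plain JDK collections. Below is a minimal sketch, not Hudi code: the class name, the method, and the string keys are made up for illustration. The default Collectors.groupingBy collects into a HashMap, whose iteration order is unspecified, so grouped values can come back in any order; supplying a LinkedHashMap factory keeps the first-seen key order deterministic.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingOrderDemo {

  // Groups keys while preserving first-seen order by backing the
  // grouping with a LinkedHashMap instead of the default HashMap.
  public static List<String> groupKeysPreservingOrder(List<String> recordKeys) {
    Map<String, List<String>> grouped = recordKeys.stream()
        .collect(Collectors.groupingBy(
            k -> k,               // stand-in for a record-key extractor
            LinkedHashMap::new,   // the default HashMap gives no order guarantee
            Collectors.toList()));
    return new ArrayList<>(grouped.keySet());
  }
}
```

With a plain HashMap the same grouping may iterate in a different order than the input, which is exactly the `<id1, id2>` to `<id2, id1>` flip observed in the test.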

Contributor

Yes, Map::values does not guarantee the sequence. The state-index-based writer has no problem because it assigns the instant markers "I" and "U" based on the buckets of the last checkpoint, and reuses those buckets within one checkpoint.

This fix is necessary to make it more robust.
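The idea of the fix can be sketched with stand-in types (the Record class, field names, and deduplicate method below are hypothetical, not the actual Hudi classes): capture whether the bucket was tagged as an insert bucket before reduction, then re-tag the first record afterwards, since the reduced list may come back in any order.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DedupSketch {

  // Stand-in for HoodieRecord: just a key and an instant-time marker.
  static class Record {
    final String key;
    String instantTime; // "I" marks an insert bucket, "U" an update bucket
    Record(String key, String instantTime) {
      this.key = key;
      this.instantTime = instantTime;
    }
  }

  public static List<Record> deduplicate(List<Record> records) {
    // Remember the bucket type before reduction; the reduced list may
    // come back reordered, so the flag cannot be read afterwards.
    final boolean isInsertBucket = "I".equals(records.get(0).instantTime);

    // Keep the last record per key. HashMap.values() makes no ordering
    // promise, which is exactly what made the original code fragile.
    Map<String, Record> byKey = new HashMap<>();
    for (Record r : records) {
      byKey.put(r.key, r);
    }
    List<Record> deduped = new ArrayList<>(byKey.values());

    if (isInsertBucket) {
      // Restore the marker on the first record so downstream code that
      // inspects records.get(0) still sees the correct bucket type.
      deduped.get(0).instantTime = "I";
    }
    return deduped;
  }
}
```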

@hudi-bot (Collaborator)

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build

@garyli1019 garyli1019 self-assigned this Mar 14, 2022
@nsivabalan nsivabalan added engine:flink Flink integration priority:high Significant impact; potential bugs labels Mar 15, 2022
@garyli1019 (Member) left a comment

@wxplovecc thanks for your contribution! I can reproduce this bug. Left some minor comments. We should merge this before the next release.

@Override
public List<HoodieRecord<T>> deduplicateRecords(
    List<HoodieRecord<T>> records, HoodieIndex<?, ?> index, int parallelism) {
  final boolean hasInsert = records.get(0).getCurrentLocation().getInstantTime().equals("I");
Member

How about renaming this to isInsertBucket and adding a comment to explain why we need it?

Contributor

The keyedRecords computation can be made more efficient:

Map<Object, List<HoodieRecord<T>>> keyedRecords = records.stream()
    .collect(Collectors.groupingBy(record -> record.getKey().getRecordKey()));

JobClient client = execEnv.executeAsync(execEnv.getStreamGraph());
if (client.getJobStatus().get() != JobStatus.FAILED) {
  try {
    TimeUnit.SECONDS.sleep(20); // wait long enough for the compaction to finish
Member

Is this sleep still needed if we test for COW?


@ParameterizedTest
@ValueSource(strings = {"BUCKET"})
public void testCopyOnWriteBucketIndex(String indexType) throws Exception {
Member

Can we use this test for the COW table? Include the state index as well.

@garyli1019 garyli1019 added priority:blocker Production down; release blocker and removed priority:high Significant impact; potential bugs labels Mar 20, 2022
@danny0405 danny0405 closed this in 26e5d2e Mar 21, 2022
vingov pushed a commit to vingov/hudi that referenced this pull request Apr 3, 2022
…eption

Actually, the method FlinkWriteHelper#deduplicateRecords does not guarantee the record sequence, but there is an implicit constraint: all the records in one bucket should have the same bucket type (the instant time here). The BucketStreamWriteFunction breaks this rule and fails to comply with the constraint.

close apache#5018
stayrascal pushed a commit to stayrascal/hudi that referenced this pull request Apr 12, 2022

Labels

engine:flink Flink integration priority:blocker Production down; release blocker


5 participants