Fix comment in RFC46 #6745
```diff
@@ -93,11 +93,11 @@ public HoodieConcatHandle(HoodieWriteConfig config, String instantTime, HoodieTa
    */
   @Override
   public void write(HoodieRecord oldRecord) {
-    String key = oldRecord.getRecordKey(keyGeneratorOpt);
-    Schema schema = useWriterSchemaForCompaction ? tableSchemaWithMetaFields : tableSchema;
```
Contributor

Why does this condition change?

Contributor

@wzx140 would need to clarify this one

Contributor (Author)

Previously both places used `useWriterSchemaForCompaction ? tableSchemaWithMetaFields : tableSchema`. The `oldRecord` here must have been read from the table, so whether it carries meta fields depends on `config.populateMetaFields()`, hence `oldSchema = config.populateMetaFields() ? tableSchemaWithMetaFields : tableSchema`. A `newRecord` taken from `keyToNewRecords` must be an incoming or compaction record, so its schema is `useWriterSchemaForCompaction ? tableSchemaWithMetaFields : tableSchema`.
```diff
+    Schema oldSchema = config.populateMetaFields() ? tableSchemaWithMetaFields : tableSchema;
+    String key = oldRecord.getRecordKey(oldSchema, keyGeneratorOpt);
     try {
       // NOTE: We're enforcing preservation of the record metadata to keep existing semantic
-      writeToFile(new HoodieKey(key, partitionPath), oldRecord, schema, config.getPayloadConfig().getProps(), true);
+      writeToFile(new HoodieKey(key, partitionPath), oldRecord, oldSchema, config.getPayloadConfig().getProps(), true);
     } catch (IOException | RuntimeException e) {
       String errMsg = String.format("Failed to write old record into new file for key %s from old file %s to new file %s with writerSchema %s",
           key, getOldFilePath(), newFilePath, writeSchemaWithMetaFields.toString(true));
```
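To make the schema-selection reasoning in the comment above concrete, here is a small hypothetical helper, not Hudi code; it only restates the distinction using the field and config names from the diff and comment:

```java
import org.apache.avro.Schema;

// Hypothetical helper, not Hudi code: it restates the schema-selection
// reasoning from the review comment above, using the same names.
final class SchemaChoiceSketch {

  // oldRecord was read back from the table, so whether it carries the
  // _hoodie_* meta fields depends on the table-level populateMetaFields config.
  static Schema schemaForOldRecord(boolean populateMetaFields,
                                   Schema tableSchema,
                                   Schema tableSchemaWithMetaFields) {
    return populateMetaFields ? tableSchemaWithMetaFields : tableSchema;
  }

  // A record from keyToNewRecords is incoming data or a compaction record,
  // so its schema follows the writer-side flag instead.
  static Schema schemaForNewRecord(boolean useWriterSchemaForCompaction,
                                   Schema tableSchema,
                                   Schema tableSchemaWithMetaFields) {
    return useWriterSchemaForCompaction ? tableSchemaWithMetaFields : tableSchema;
  }
}
```

In short, the old record's layout is a property of the table (`populateMetaFields`), while the new record's layout is a property of the write path (`useWriterSchemaForCompaction`).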
```diff
@@ -109,14 +109,15 @@ public void write(HoodieRecord oldRecord) {
   @Override
   protected void writeIncomingRecords() throws IOException {
+    Schema schema = useWriterSchemaForCompaction ? tableSchemaWithMetaFields : tableSchema;
     while (recordItr.hasNext()) {
       HoodieRecord<T> record = recordItr.next();
       if (needsUpdateLocation()) {
         record.unseal();
         record.setNewLocation(new HoodieRecordLocation(instantTime, fileId));
         record.seal();
       }
-      writeInsertRecord(record);
+      writeInsertRecord(record, schema);
     }
   }
 }
```
Why do we need to update meta values if we're not populating them?

If `config.populateMetaFields()` is false, then `metadataValues` is empty, and `hoodieRecord.updateMetadataValues` will do nothing.
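As a rough illustration of why that is safe, here is a simplified, hypothetical sketch using plain Java maps instead of Hudi's actual HoodieRecord and MetadataValues types: when meta fields are not populated, the collected values are empty and the update step has nothing to write.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification, not the actual HoodieRecord/MetadataValues API:
// it only illustrates why the meta-value update is a no-op when meta fields
// are not populated.
final class MetaValuesSketch {

  // With populateMetaFields = false, no meta-field values are collected.
  static Map<String, String> collectMetadataValues(boolean populateMetaFields,
                                                   String recordKey,
                                                   String partitionPath) {
    if (!populateMetaFields) {
      return Collections.emptyMap();
    }
    Map<String, String> values = new HashMap<>();
    values.put("_hoodie_record_key", recordKey);
    values.put("_hoodie_partition_path", partitionPath);
    return values;
  }

  // An "update metadata values" step over an empty map writes nothing.
  static void updateMetadataValues(Map<String, Object> recordFields,
                                   Map<String, String> metadataValues) {
    metadataValues.forEach(recordFields::put);
  }
}
```

Calling `updateMetadataValues(fields, collectMetadataValues(false, key, path))` leaves `fields` untouched, which is the behaviour the reply above relies on.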