[HUDI-920] Support Incremental query for MOR table #1938
Conversation
@vinothchandar This PR is functionally working. This is able to replace the existing …
Force-pushed from 3caed01 to 44f571f
we need to guard this with a flag. query types are fundamental to design. I prefer not to overload them
This option was intended to distinguish the case where the user wants to run an incremental query on the MOR table but only over the parquet files. It isn't really necessary, because users can just define the timestamp range to achieve the same goal. So we can just use the tableType: if COW, use the old way; if MOR, use the new relation. We don't need an extra option anymore imo.
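For context, a minimal sketch of what the incremental read looks like from the user side when only the instant-time range drives the query; the option keys are the usual Hudi Spark datasource ones, while the path and instant times are purely illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical path and instant times; only the option keys are expected to match the datasource.
val spark = SparkSession.builder().appName("hudi-incremental-sketch").getOrCreate()
val incDF = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", "20200801000000")
  .option("hoodie.datasource.read.end.instanttime", "20200802000000")
  .load("/tmp/hudi_trips_mor")
incDF.show(false)
```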
this does not seem to be indented correctly?
nice catch :)
should this be left to how the user configures the sparkSession/sqlContext instead?
This is the tricky part of this PR. We need to force a filter on _hoodie_commit_time, like what we have in https://github.com/apache/hudi/blob/master/hudi-spark/src/main/scala/org/apache/hudi/IncrementalRelation.scala#L165.
Currently, I couldn't find an elegant way to force the filter regardless of what query the user is running. Spark's optimizer will sometimes skip scanning the file and produce incorrect results (e.g. for count()). If we add a filter in the RDD, we get involved in an extra ser/deser step. The elegant way might be to hook into Spark planning, but at this point I have no idea how to do that. The easiest way would be to ask the user to add a .filter() when loading the dataset, but we definitely don't want to do that :)
My approach here is to force Spark to always scan the base file and always apply the filter:
- For the base file, force the pushdown filter and avoid using the default ParquetFileFormat reader (baseFileIterator), which will not scan the file when the user calls df.count() and will produce incorrect results.
- For the log file, there is no need to filter based on commit time.
So we need to make sure the pushdown filter is on in every run if we keep this approach; otherwise the result will be incorrect.
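To make the point concrete, here is a minimal sketch of the commit-time range predicate applied on top of the merged rows; mergedDF, the path, and the instant bounds are illustrative, not the PR's actual code path:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("commit-time-filter-sketch").getOrCreate()
val beginInstantTime = "20200801000000" // illustrative lower bound
val endInstantTime   = "20200802000000" // illustrative upper bound
val mergedDF = spark.read.format("hudi").load("/tmp/hudi_trips_mor")

// _hoodie_commit_time is the Hudi metadata column the range filter keys on; applying it on the
// DataFrame (and as a pushdown filter on the base file) keeps results correct even for count().
val incrementalRows = mergedDF.filter(
  col("_hoodie_commit_time") >= beginInstantTime &&
  col("_hoodie_commit_time") <= endInstantTime)
```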
IIUC, your approach here is to find all the file groups impacted in the commit range, then pull the affected records from the latest file slice? The mergeOnReadRDD will handle the merging of such a file slice, with the commit filters applied.
Let me know @garyli1019 if I am understanding this correctly; it will help me review the code.
In the meantime, can we add a separate flag/option to turn on, e.g. new.incremental.relation=true or something, to control this and push the changes?
Correct, that's what I was trying to do here.
Sure, keeping only one incremental query type makes sense to me.
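As a rough outline of the flow being confirmed here (only the timeline calls are real Hudi APIs; the surrounding names and overall wiring are illustrative, not the PR's implementation):

```scala
import scala.collection.JavaConverters._
import org.apache.hudi.common.table.HoodieTableMetaClient
import org.apache.hudi.common.table.timeline.HoodieInstant

// Completed commits falling in the requested incremental range.
def commitsInRange(metaClient: HoodieTableMetaClient,
                   beginInstantTime: String,
                   endInstantTime: String): List[HoodieInstant] =
  metaClient.getActiveTimeline.getCommitsTimeline.filterCompletedInstants()
    .findInstantsInRange(beginInstantTime, endInstantTime)
    .getInstants.iterator().asScala.toList

// For each commit in range: resolve the touched file groups, take the latest file slice of each,
// and hand (base file + log files) to the MergeOnReadRDD, which merges records and applies the
// _hoodie_commit_time filter.
```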
@garyli1019 one issue we need to make sure we handle is when the file group is pending compaction. When that happens, the base file is not present in the latest slice, but stitched together from the previous slice. @bhasudha is handling this in the Hive PR; worth taking a look and ensuring consistency.
I think we can share the implementation of this part between Hive and Spark. I will move this part to a util class.
Just to make sure: this is only possible for the base file, right? The latest log file will always be on the latest file slice?
@vinothchandar getting back to this PR... Now I am a little confused: when we pull from a pending compaction commit, why should we care about the base file that was not in the range? We just read the log file, right? From a user perspective, I don't expect to read commits earlier than the commit I defined as the starting point.
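For reference on the pending-compaction case discussed above, a hedged sketch of obtaining complete, merged file slices through Hudi's file-system view; the input names are illustrative and the choice of timeline passed to the view is an assumption:

```scala
import scala.collection.JavaConverters._
import org.apache.hadoop.fs.FileStatus
import org.apache.hudi.common.model.FileSlice
import org.apache.hudi.common.table.HoodieTableMetaClient
import org.apache.hudi.common.table.view.HoodieTableFileSystemView

// When a file group is under pending compaction, its latest slice has no base file yet;
// the merged view stitches it together with the previous slice so the reader still sees one.
def latestMergedSlices(metaClient: HoodieTableMetaClient,
                       partitionFiles: Array[FileStatus],
                       partitionPath: String,
                       latestInstant: String): List[FileSlice] = {
  val fsView = new HoodieTableFileSystemView(
    metaClient,
    metaClient.getActiveTimeline.getCommitsTimeline.filterCompletedInstants(),
    partitionFiles)
  fsView.getLatestMergedFileSlicesBeforeOrOn(partitionPath, latestInstant)
    .iterator().asScala.toList
}
```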
Force-pushed from 9e275c2 to eb251d8
Copied and changed this method from #1817; will apply this change to that PR after it is merged.
Ready for review. cc: @vinothchandar @bhasudha
Found a bug: https://issues.apache.org/jira/browse/HUDI-1434
Codecov Report
@@ Coverage Diff @@
## master #1938 +/- ##
============================================
+ Coverage 50.28% 50.50% +0.22%
- Complexity 2991 3014 +23
============================================
Files 410 416 +6
Lines 18406 18603 +197
Branches 1885 1909 +24
============================================
+ Hits 9256 9396 +140
- Misses 8392 8438 +46
- Partials 758 769 +11
@garyli1019 any hope of this PR being completed in the next day or so? :)
@vinothchandar The current version should work with the newer-version commit file, but I haven't added the option to merge with the parquet file as we discussed in the meeting. Would you do a round of review?
Sounds good. Will do a review, morning time PST.
seems related?
Hmm, the test passed locally for me. The last time we were working on the MOR data source, we saw the same thing happen: Travis and local runs producing different results.
Force-pushed from 518917a to 452be51
Typically, if the test depends on some aspect of the generated data, it can pass locally and fail in CI, because the random number seed used for data generation is different. Otherwise it should pass, especially for checks on counts etc. I have not seen it being flaky for other reasons.
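If determinism is needed, a tiny sketch of pinning the seed so locally generated test data matches CI; the seed value and generator here are illustrative, not Hudi's test data generator API:

```scala
import scala.util.Random

// Fixing the seed makes the generated keys identical across local and CI runs.
val rand = new Random(42L)
val recordKeys = Seq.fill(100)(rand.nextInt(1000)).map(i => s"key_$i")
```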
vinothchandar left a comment
I made a high-level pass; looks like a good start to me. I would have liked us to skip the data blocks in the logs if they don't fall in the commit range, rather than do the merge fully and filter at the higher level. The hudi log storage layout has enough intelligence/information for us to push this further down. But this is a good start, from which we can improve.
Please fix the smaller changes, and once CI is stable/non-flaky we can land this.
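As a conceptual sketch of the suggested log-block skipping (not what this PR implements), a predicate like the following could prune data blocks whose instant time falls outside the commit range; where the instant time is read from and the exact boundary semantics are assumptions:

```scala
// blockInstantTime would come from the log block header; the bounds come from the incremental query.
def shouldReadLogBlock(blockInstantTime: String,
                       beginInstantTime: String,
                       endInstantTime: String): Boolean =
  blockInstantTime >= beginInstantTime && blockInstantTime <= endInstantTime
```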
* @return HashMap<partitionPath, HashMap<fileName, FileStatus>>
* @throws IOException
*/
public static HashMap<String, HashMap<String, FileStatus>> listStatusForAffectedPartitions(
can we just have this in hudi-spark for now? that's the only module that needs to call this.
This can be shared by other engines later. If we move this to hudi-spark we would need to switch to Scala code, then move it back later when supporting other engines. We could save some effort if we leave it here?
public static HashMap<String, HashMap<String, FileStatus>> listStatusForAffectedPartitions(
    Path basePath, List<HoodieInstant> commitsToCheck, HoodieTimeline timeline) throws IOException {
  // Extract files touched by these commits.
  // TODO This might need to be done in parallel like listStatus parallelism ?
Can we redo this such that it can use the metadata table for obtaining the listing? You can see how this is done in HoodieParquetInputFormat.
Are you referring to RFC-15, which has not landed yet? The current implementation of HoodieParquetInputFormat lists all files of the affected partitions and then does the filtering later.
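For reference, a hedged sketch of deriving the touched files per partition from the commit metadata on the timeline (roughly what a listing-by-commits helper does); the method name and inputs are illustrative:

```scala
import scala.collection.JavaConverters._
import org.apache.hadoop.fs.Path
import org.apache.hudi.common.model.HoodieCommitMetadata
import org.apache.hudi.common.table.timeline.{HoodieInstant, HoodieTimeline}

// Map of partitionPath -> file paths written by the given commits, read from commit metadata
// rather than from a full file listing.
def filesTouchedByCommits(basePath: Path,
                          commitsToCheck: Seq[HoodieInstant],
                          timeline: HoodieTimeline): Map[String, Seq[Path]] =
  commitsToCheck.flatMap { instant =>
    val metadata = HoodieCommitMetadata.fromBytes(
      timeline.getInstantDetails(instant).get(), classOf[HoodieCommitMetadata])
    metadata.getPartitionToWriteStats.asScala.toSeq.map { case (partition, stats) =>
      partition -> stats.asScala.map(stat => new Path(basePath, stat.getPath)).toSeq
    }
  }.groupBy(_._1).mapValues(_.flatMap(_._2)).toMap
```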
Thanks, will update this PR before Friday night PST.
@vinothchandar addressed comments. You are right, the inconsistency was caused by the test data generator. The CI was happy after I changed from …
Thanks @garyli1019! You are awesome.
@garyli1019 looks like the spark3 build broke. FYI, I will try to fix this.
#2425 will fix the build
Seems like this test is flaky. :( @garyli1019
@vinothchandar This was the compaction consistency question I mentioned in the sync meeting. There are actually 2 new insert records supposed to be here, but we triggered inline compaction that compacted 150 records. It is also possible that the compaction was triggered earlier. This might be fixed by #2428.
[Squashed commit message from an internal mirror, listing upstream Apache Hudi commits cherry-picked alongside this change, including [HUDI-920] Support Incremental query for MOR table (apache#1938). Differential Revision: https://code.uberinternal.com/D5347141]
What is the purpose of the pull request
Support incremental query for MOR table
https://issues.apache.org/jira/browse/HUDI-920
Brief change log
Verify this pull request
This change added tests and can be verified as follows:
Committer checklist
Has a corresponding JIRA in PR title & commit
Commit message is descriptive of the change
CI is green
Necessary doc changes done or have another open PR
For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.