
Conversation

@danny0405 (Contributor) commented Dec 10, 2020

What is the purpose of the pull request

The option 'hoodie.bloom.index.prune.by.ranges' defaults to true, so this
change is expected to be an improvement.

Brief change log

  • Some refactoring to SparkHoodieBloomIndex

Verify this pull request

Existing tests

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@garyli1019 (Member) left a comment

@danny0405 Thanks for your contribution, good catch! The first compute is probably being cached, but this change definitely looks better.
Waiting for another reviewer to double-check. @nsivabalan, mind taking a look?

@yanghua (Contributor) left a comment

+1 too, just left one minor comment.

Contributor

Maybe we can move fileGroupToComparisons onto this line, or break hoodieTable onto the next line. Wouldn't that be better?

Contributor Author

Thanks, agree ~

@vinothchandar (Member) left a comment

Thanks for the PR @danny0405!

This makes the code easier to read, but I'm unsure of the intended effect. I have actually thought more about how to avoid this, and it seems hard: we do need some evaluation of how much pruning is going to happen in order to size the parallelism, and since that is very workload specific, it does need an evaluation.

The best idea I could come up with was to see if we can do some sampling, say 10% of the records, to estimate.

Something like below.

// we will just try exploding the input and then count to determine comparisons
// FIX(vc): Only do sampling here and extrapolate?
fileToComparisons = explodeRecordRDDWithFileComparisons(partitionToFileInfo, partitionRecordKeyPairRDD.sample(false, 0.1))
    .mapToPair(t -> t).countByKey();
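
If sampling were adopted, the sampled counts would need scaling back up by the sample fraction before sizing the parallelism. A rough sketch of that extrapolation (hypothetical, not existing Hudi code; it reuses the names above and assumes explodeRecordRDDWithFileComparisons yields (fileId, HoodieKey) pairs, as the mapToPair suggests):

import java.util.Map;
import java.util.stream.Collectors;

// Sketch only: partitionToFileInfo, partitionRecordKeyPairRDD and
// explodeRecordRDDWithFileComparisons(...) are assumed in scope, as above.
final double fraction = 0.1;
Map<String, Long> sampledCounts =
    explodeRecordRDDWithFileComparisons(partitionToFileInfo,
            partitionRecordKeyPairRDD.sample(false, fraction)) // sample without replacement
        .mapToPair(t -> t)
        .countByKey();

// countByKey() is exact for the sample; divide by the fraction to extrapolate
// an estimated comparison count per file group.
Map<String, Long> fileToComparisons = sampledCounts.entrySet().stream()
    .collect(Collectors.toMap(Map.Entry::getKey,
        e -> (long) Math.ceil(e.getValue() / fraction)));

Since the estimate would only feed parallelism sizing, a small over- or under-count should be tolerable.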

Member

nit: wondering how checkstyle is happy with the indentation here. :)

Contributor Author

Thanks, I just noticed that Hoodie uses whitespace for indentation.

Member

Do you actually see from the Spark UI that it's not computed twice? I ask because fileComparisonsRDD is not cached, so even though it is declared only once, at runtime Spark will lazily recompute fileComparisonsRDD once for each action that uses it.
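
To see the recomputation concretely, here is a minimal standalone Spark demo (unrelated to the Hudi code) that counts how many times the lambda behind an uncached RDD actually runs:

import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.util.LongAccumulator;

public class LazyRecomputeDemo {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .master("local[2]").appName("lazy-recompute").getOrCreate();
    JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());
    LongAccumulator evals = jsc.sc().longAccumulator("evals");

    // Declared once, like fileComparisonsRDD, but never persisted.
    JavaRDD<Integer> derived = jsc.parallelize(Arrays.asList(1, 2, 3, 4))
        .map(x -> { evals.add(1); return x * 2; });

    derived.count();   // first action: the lambda runs 4 times
    derived.collect(); // second action: 4 more, since nothing was cached
    // Prints 8, not 4 (accumulators in transformations are not exactly-once,
    // but with no task retries this demo is deterministic).
    System.out.println(evals.value());
    spark.stop();
  }
}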

Contributor Author

I haven't checked the Spark UI yet, I just did a simple analysis of the data-writing process. For each batch of records to write, SparkHoodieBloomIndex.lookupIndex is expected to be invoked once, so fileComparisonsRDD should be evaluated only once. Is there another invocation of SparkHoodieBloomIndex.lookupIndex? Maybe I missed something.

Member

So Spark does lazy evaluation of an RDD. If the RDD is not persisted to disk/cache, Spark simply recomputes it. In this case, fileComparisonsRDD would be computed twice at runtime: the method explodeRecordRDDWithFileComparisons() will only be called once, but in practice it does nothing except "define" the RDD it returns.

In contrast, notice this code at the start of tagLocation:

// Step 0: cache the input record RDD
if (config.getBloomIndexUseCaching()) {
  recordRDD.persist(SparkMemoryUtils.getBloomIndexInputStorageLevel(config.getProps()));
}

This caches the incoming recordRDD, and thus however many times the RDD is used in the indexing DAG, Spark will not go back to the previous stage. If we did not have the .persist() here, then every time an RDD derived from this recordRDD is needed for a Spark action, Spark would keep reading from the source and computing the recordRDD again.

Apologies if you knew all this already. :) I am failing to see how this won't happen, but at least this explanation should convey my concern.
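
For contrast, a sketch of how a persist() guard changes the demo above; the storage level is illustrative, whereas Hudi reads its actual level via SparkMemoryUtils.getBloomIndexInputStorageLevel:

import org.apache.spark.storage.StorageLevel;

// Continuing the hypothetical demo: persist before triggering any action.
derived.persist(StorageLevel.MEMORY_AND_DISK_SER());
derived.count();    // computes the RDD once and caches its partitions
derived.collect();  // served from the cache; the map lambda does not run again
System.out.println(evals.value()); // now prints 4 instead of 8
derived.unpersist(); // release the cache once the DAG no longer needs it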

@danny0405 (Contributor Author) commented Dec 11, 2020

Thanks for the explanation. The recordRDD may be persisted, but the computation in explodeRecordRDDWithFileComparisons still needs to happen twice, right? Sorry, I'm not that familiar with the RDD machinery.

Member

Correct. That's what I think will happen.

@danny0405 (Contributor Author)

> Thanks for the PR @danny0405!
>
> This makes the code easier to read, but I'm unsure of the intended effect. I have actually thought more about how to avoid this, and it seems hard: we do need some evaluation of how much pruning is going to happen in order to size the parallelism, and since that is very workload specific, it does need an evaluation.
>
> The best idea I could come up with was to see if we can do some sampling, say 10% of the records, to estimate.
>
> Something like below.
>
>     // we will just try exploding the input and then count to determine comparisons
>     // FIX(vc): Only do sampling here and extrapolate?
>     fileToComparisons = explodeRecordRDDWithFileComparisons(partitionToFileInfo, partitionRecordKeyPairRDD.sample(false, 0.1))
>         .mapToPair(t -> t).countByKey();

Thanks so much for the review @vinothchandar ~ Can we really do sampling here? Then how can we ensure the FileId -> HoodieKey candidates are complete, so that the final built index is accurate (e.g. no UPDATE records are tagged as INSERT)?

…bloom.index.prune.by.ranges' is set to true

The option 'hoodie.bloom.index.prune.by.ranges' defaults to true, so this
change is expected to be an improvement.
@codecov-io commented Dec 11, 2020

Codecov Report

Merging #2319 (622c3c7) into master (bd9ccec) will decrease coverage by 41.88%.
The diff coverage is n/a.

Impacted file tree graph

@@              Coverage Diff              @@
##             master    #2319       +/-   ##
=============================================
- Coverage     52.29%   10.40%   -41.89%     
+ Complexity     2630       48     -2582     
=============================================
  Files           329       51      -278     
  Lines         14743     1787    -12956     
  Branches       1484      213     -1271     
=============================================
- Hits           7710      186     -7524     
+ Misses         6419     1588     -4831     
+ Partials        614       13      -601     
| Flag | Coverage Δ | Complexity Δ |
| --- | --- | --- |
| hudicli | ? | ? |
| hudiclient | ? | ? |
| hudicommon | ? | ? |
| hudihadoopmr | ? | ? |
| huditimelineservice | ? | ? |
| hudiutilities | 10.40% <ø> (-59.71%) | 0.00 <ø> (ø) |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Impacted Files | Coverage Δ | Complexity Δ |
| --- | --- | --- |
| ...va/org/apache/hudi/utilities/IdentitySplitter.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-2.00%) |
| ...va/org/apache/hudi/utilities/schema/SchemaSet.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-3.00%) |
| ...a/org/apache/hudi/utilities/sources/RowSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-4.00%) |
| .../org/apache/hudi/utilities/sources/AvroSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-1.00%) |
| .../org/apache/hudi/utilities/sources/JsonSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-1.00%) |
| ...rg/apache/hudi/utilities/sources/CsvDFSSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-10.00%) |
| ...g/apache/hudi/utilities/sources/JsonDFSSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-4.00%) |
| ...apache/hudi/utilities/sources/JsonKafkaSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-6.00%) |
| ...pache/hudi/utilities/sources/ParquetDFSSource.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-5.00%) |
| ...lities/schema/SchemaProviderWithPostProcessor.java | 0.00% <0.00%> (-100.00%) | 0.00% <0.00%> (-4.00%) |
| ... and 306 more | | |

@vinothchandar (Member)

> Can we really do sampling here?

Yes, there is a trade-off here. We have to rely on the fact that the previous stage does a reduceByKey, so the data would be distributed evenly across the RDD partitions :). So extrapolation could be okay for the purposes of computeComparisonsPerFileGroup(). For the other usage, where we actually do the range pruning, we cannot sample.
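
Sketching the split between the two usages discussed here; estimateComparisonsBySampling and computeSafeParallelism are hypothetical helper names, not existing Hudi methods:

import java.util.Map;
import org.apache.spark.api.java.JavaRDD;
import org.apache.hudi.common.model.HoodieKey;
import scala.Tuple2;

// 1) Parallelism sizing: a sampled, extrapolated count would be acceptable here,
//    since it only needs to be roughly proportional to the real workload.
Map<String, Long> estimated =
    estimateComparisonsBySampling(partitionToFileInfo, partitionRecordKeyPairRDD, 0.1);
int parallelism = computeSafeParallelism(config, estimated);

// 2) Range pruning and the actual index lookup must see every record, so the
//    full, unsampled explode is still required for correctness.
JavaRDD<Tuple2<String, HoodieKey>> fileComparisonsRDD =
    explodeRecordRDDWithFileComparisons(partitionToFileInfo, partitionRecordKeyPairRDD);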

@danny0405 (Contributor Author)

> For the other usage, where we actually do the range pruning, we cannot sample.

Thanks, got your idea ~

@vinothchandar (Member)

@danny0405 I'll take another pass and merge this PR. This should not have any side effects, so it should qualify for a MINOR label.

IIRC there are some JIRAs under the index/performance component for the follow-up ideas we discussed here.

@nsivabalan (Contributor)

Yeah, the same idea was brought up before, and we didn't proceed since it needed some perf analysis.
This was the rationale: even though explodeRecordRDDWithFileComparisons(...) is called in two places, in the first place we just do a countByKey, which may not shuffle any actual data, whereas the second call could incur shuffling. Hence the usage patterns differ. More info can be found in the attached PR.

@vinothchandar vinothchandar self-assigned this Dec 15, 2020
@vinothchandar (Member)

Merging.

@vinothchandar vinothchandar changed the title [MINOR] Improve to only compute fileComparisonsRDD once when 'hoodie.… [MINOR] Improve code readability by passing in the fileComparisonsRDD in bloom index Dec 15, 2020
@vinothchandar vinothchandar merged commit 93d9c25 into apache:master Dec 15, 2020
prashantwason pushed a commit to prashantwason/incubator-hudi that referenced this pull request Feb 22, 2021
…pache#2185)

Summary:
[MINOR] Make sure factory method is used to instantiate DFSPathSelector (apache#2187)

* Move createSourceSelector into DFSPathSelector factory method
* Replace constructor call with factory method
* Added some javadoc

[HUDI-1330] handle prefix filtering at directory level (apache#2157)

The current DFSPathSelector only ignores the prefixes (_, .) at the file level, while files under subdirectories,
e.g. (.checkpoint/*), are still considered, which results in a bad-format exception during reading.

[HUDI-1200] fixed NPE in CustomKeyGenerator (apache#2093)

- config field is no longer transient in key generator
- verified that the key generator object is shipped from the driver to executors, just the one time and reused for each record

[HUDI-1209] Properties File must be optional when running deltastreamer (apache#2085)

[MINOR] Fix caller to SparkBulkInsertCommitActionExecutor (apache#2195)

Fixed calling the wrong constructor

[HUDI-1326] Added an API to force publish metrics and flush them. (apache#2152)

* [HUDI-1326] Added an API to force publish metrics and flush them.

Using the added API, publish metrics after each level of the DAG completed in hudi-test-suite.

* Code cleanups

Co-authored-by: Vinoth Chandar <[email protected]>

[HUDI-1118] Cleanup rollback files residing in .hoodie folder (apache#2205)

[MINOR] Private the NoArgsConstructor of SparkMergeHelper and code clean (apache#2194)

1. Fix merge on read DAG to make docker demo pass (apache#2092)

1. Fix merge on read DAG to make docker demo pass (apache#2092)
2. Fix repeat_count, rollback node

[HUDI-1274] Make hive synchronization supports hourly partition (apache#2122)

[HUDI-1351] Improvements to the hudi test suite for scalability and repeated testing. (apache#2197)

1. Added the --clean-input and --clean-output parameters to clean the input and output directories before starting the job
2. Added the --delete-old-input parameter to deleted older batches for data already ingested. This helps keep number of redundant files low.
3. Added the --input-parallelism parameter to restrict the parallelism when generating input data. This helps keeping the number of generated input files low.
4. Added an option start_offset to Dag Nodes. Without ability to specify start offsets, data is generated into existing partitions. With start offset, DAG can control on which partition, the data is to be written.
5. Fixed generation of records for correct number of partitions
  - In the existing implementation, the partition is chosen as a random long. This does not guarantee exact number of requested partitions to be created.
6. Changed variable blacklistedFields to be a Set as that is faster than List for membership checks.
7. Fixed integer division for Math.ceil. If two integers are divided, the result is not double unless one of the integer is casted to double.

[HUDI-1338] Adding Delete support to test suite framework (apache#2172)

- Adding Delete support to test suite.
         Added DeleteNode
         Added support to generate delete records

Use RateLimiter instead of sleep. Repartition WriteStatus to optimize Hbase index writes (apache#1484)

[HUDI-912] Refactor and relocate KeyGenerator to support more engines (apache#2200)

* [HUDI-912] Refactor and relocate KeyGenerator to support more engines

* Rename KeyGenerators

[HUDI-892] RealtimeParquetInputFormat skip adding projection columns if there are no log files (apache#2190)

* [HUDI-892] RealtimeParquetInputFormat skip adding projection columns if there are no log files
* [HUDI-892]  for test
* [HUDI-892]  fix bug generate array from split
* [HUDI-892] revert test log

[HUDI-1352] Add FileSystemView APIs to query pending clustering operations (apache#2202)

[HUDI-1375] Fix bug in HoodieAvroUtils.removeMetadataFields() method (apache#2232)

Co-authored-by: Wenning Ding <[email protected]>

[HUDI-1358] Fix Memory Leak in HoodieLogFormatWriter (apache#2217)

[HUDI-1377] remove duplicate code (apache#2235)

[HUDI-1327] Introduce base implementation of hudi-flink-client (apache#2176)

[HUDI-1400] Replace Operation enum with WriteOperationType (apache#2259)

[HUDI-1384] Decoupling hive jdbc dependency when HIVE_USE_JDBC_OPT_KEY set false (apache#2241)

[MINOR] clean up and add comments to flink client (apache#2261)

[MINOR] Add apacheflink label (apache#2268)

[HUDI-1393] Add compaction action in archive command (apache#2246)

[HUDI-1364] Add HoodieJavaEngineContext to hudi-java-client (apache#2222)

[HUDI-1396] Fix for preventing bootstrap datasource jobs from hanging via spark-submit (apache#2253)

Co-authored-by: Wenning Ding <[email protected]>

[HUDI-1358] Fix leaks in DiskBasedMap and LazyFileIterable (apache#2249)

[HUDI-1392] lose partition info when using spark parameter basePath (apache#2243)

Co-authored-by: zhang wen <[email protected]>

[MINOR] refactor code in HoodieMergeHandle (apache#2272)

[HUDI-1424]  Write Type changed to BULK_INSERT when set ENABLE_ROW_WRITER_OPT_KEY=true (apache#2289)

[HUDI-1373] Add Support for OpenJ9 JVM (apache#2231)

* add supoort for OpenJ9 VM
* add 32bit openJ9
* Pulled the memory layout specs into their own classes.

[HUDI-1357] Added a check to validate records are not lost during merges. (apache#2216)

- Turned off by default

[HUDI-1196] Update HoodieKey when deduplicating records with global index (apache#2248)

- Works only for overwrite payload (default)
- Does not alter current semantics otherwise

Co-authored-by: Ryan Pifer <[email protected]>

[HUDI-1349] spark sql support overwrite use insert_overwrite_table (apache#2196)

[HUDI-1343] Add standard schema postprocessor which would rewrite the schema using spark-avro conversion (apache#2192)

Co-authored-by: liujh <[email protected]>

[HUDI-1427] Fix FileAlreadyExistsException when set HOODIE_AUTO_COMMIT_PROP to true (apache#2295)

[HUDI-1412] Make HoodieWriteConfig support setting different default … (apache#2278)

* [HUDI-1412] Make HoodieWriteConfig support setting different default value according to engine type

fix typo (apache#2308)

Co-authored-by: Xi Chen <[email protected]>

[HUDI-1040] Make Hudi support Spark 3 (apache#2208)

* Fix flaky MOR unit test

* Update Spark APIs to make it be compatible with both spark2 & spark3

* Refactor bulk insert v2 part to make Hudi be able to compile with Spark3

* Add spark3 profile to handle fasterxml & spark version

* Create hudi-spark-common module & refactor hudi-spark related modules

Co-authored-by: Wenning Ding <[email protected]>

[MINOR] Throw an exception when keyGenerator initialization failed (apache#2307)

[HUDI-1395] Fix partition path using FSUtils (apache#2312)

Fixed the logic to get partition path in Copier and Exporter utilities.

[HUDI-1445] Refactor AbstractHoodieLogRecordScanner to use Builder (apache#2313)

[MINOR] Minor improve in IncrementalRelation (apache#2314)

[HUDI-1439] Remove scala dependency from hudi-client-common (apache#2306)

[HUDI-1428] Clean old fileslice is invalid (apache#2292)

Co-authored-by: zhang wen <[email protected]>
Co-authored-by: zhang wen <[email protected]>

[HUDI-1448]  Hudi dla sync support skip rt table syncing (apache#2324)

[HUDI-1435] Fix bug in Marker File Reconciliation for Non-Partitioned datasets (apache#2301)

[MINOR] Improve code readability by passing in the fileComparisonsRDD in bloom index (apache#2319)

[HUDI-1376] Drop Hudi metadata cols at the beginning of Spark datasource writing (apache#2233)

Co-authored-by: Wenning Ding <[email protected]>

[MINOR] Fix error information in exception (apache#2341)

[MINOR] Make QuickstartUtil generate random timestamp instead of 0 (apache#2340)

[HUDI-1406] Add date partition based source input selector for Delta streamer (apache#2264)

- Adds ability to list only recent date based partitions from source data.
- Parallelizes listing for faster tailing of DFSSources

[HUDI-1437]  support more accurate spark JobGroup for better performance tracking (apache#2322)

[HUDI-1470] Use the latest writer schema, when reading from existing parquet files in the hudi-test-suite (apache#2344)

[HUDI-115] Adding DefaultHoodieRecordPayload to honor ordering with combineAndGetUpdateValue (apache#2311)

* Added ability to pass in `properties` to payload methods, so they can perform table/record specific merges
* Added default methods so existing payload classes are backwards compatible.
* Adding DefaultHoodiePayload to honor ordering while merging two records
* Fixing default payload based on feedback

[HUDI-1419] Add base implementation for hudi java client (apache#2286)

[MINOR] Pass root exception to HoodieKeyGeneratorException for more information (apache#2354)

Co-authored-by: Xi Chen <[email protected]>

[HUDI-1075] Implement simple clustering strategies to create ClusteringPlan and to run the plan

[HUDI-1471] Make QuickStartUtils generate deletes according to specific ts (apache#2357)

[HUDI-1485] Fix Deletes issued without any prior commits exception (apache#2361)

[HUDI-1488] Fix Test Case Failure in TestHBaseIndex (apache#2365)

[HUDI-1489] Fix null pointer exception when reading updated written bootstrap table (apache#2370)

Co-authored-by: Wenning Ding <[email protected]>

[HUDI-1451] Support bulk insert v2 with Spark 3.0.0 (apache#2328)

Co-authored-by: Wenning Ding <[email protected]>

- Added support for bulk insert v2 with datasource v2 api in Spark 3.0.0.

[HUDI-1487] fix unit test testCopyOnWriteStorage random failed (apache#2364)

[HUDI-1490] Incremental Query should work even when there are  partitions that have no incremental changes (apache#2371)

* Incremental Query should work even when there are  partitions that have no incremental changes

Co-authored-by: Sivabalan Narayanan <[email protected]>

[HUDI-1331] Adding support for validating entire dataset and long running tests in test suite framework (apache#2168)

* trigger rebuild

* [HUDI-1156] Remove unused dependencies from HoodieDeltaStreamerWrapper Class (apache#1927)

* Adding support for validating records and long running tests in test sutie framework

* Adding partial validate node

* Fixing spark session initiation in Validate nodes

* Fixing validation

* Adding hive table validation to ValidateDatasetNode

* Rebasing with latest commits from master

* Addressing feedback

* Addressing comments

Co-authored-by: lamber-ken <[email protected]>
Co-authored-by: linshan-ma <[email protected]>

[HUDI-1481]  add  structured streaming and delta streamer clustering unit test (apache#2360)

[HUDI-1354] Block updates and replace on file groups in clustering (apache#2275)

* [HUDI-1354] Block updates and replace on file groups in clustering

* [HUDI-1354]  Block updates and replace on file groups in clustering

[HUDI-1350] Support Partition level delete API in HUDI (apache#2254)

* [HUDI-1350] Support Partition level delete API in HUDI

* [HUDI-1350] Support Partition level delete API in HUDI base InsertOverwriteCommitAction

* [HUDI-1350] Support Partition level delete API in HUDI base InsertOverwriteCommitAction

[HUDI-1495] Upgrade Flink version to 1.12.0 (apache#2384)

[MINOR] Remove the duplicate code in AbstractHoodieWriteClient.startCommit (apache#2385)

[HUDI-1398] Align insert file size for reducing IO (apache#2256)

* [HUDI-1398] Align insert file size for reducing IO

Co-authored-by: zhang wen <[email protected]>

[HUDI-1484] Escape the partition value in HiveSyncTool (apache#2363)

[HUDI-1474] Add additional unit tests to TestHBaseIndex (apache#2349)

[HUDI-1147] Modify GenericRecordFullPayloadGenerator to generate vali… (apache#2045)

* [HUDI-1147] Modify GenericRecordFullPayloadGenerator to generate valid timestamps

Co-authored-by: Sivabalan Narayanan <[email protected]>

[HUDI-1493] Fixed schema compatibility check for fields. (apache#2350)

Some field types changes are allowed (e.g. int -> long) while maintaining schema backward compatibility within HUDI. The check was reversed with the reader schema being passed for the write schema.

[MINOR] Update report_coverage.sh (apache#2396)

[HUDI-1434] fix incorrect log file path in HoodieWriteStat (apache#2300)

* [HUDI-1434] fix incorrect log file path in HoodieWriteStat

* HoodieWriteHandle#close() returns a list of WriteStatus objs

* Handle rolled-over log files and return a WriteStatus per log file written

 - Combined data and delete block logging into a single call
 - Lazily initialize and manage write status based on returned AppendResult
 - Use FSUtils.getFileSize() to set final file size, consistent with other handles
 - Added tests around returned values in AppendResult
 - Added validation of the file sizes returned in write stat

Co-authored-by: Vinoth Chandar <[email protected]>

[HUDI-1418] Set up flink client unit test infra (apache#2281)

[MINOR] Sync UpsertPartitioner modify of HUDI-1398 to flink/java (apache#2390)

Co-authored-by: zhang wen <[email protected]>

[HUDI-1423] Support delete in hudi-java-client (apache#2353)

[MINOR] Add maven profile to support skipping shade sources jars (apache#2358)

Co-authored-by: Xi Chen <[email protected]>

[HUDI-842] Implementation of HUDI RFC-15.

 - Introduced an internal metadata table that stores file listings.
 - metadata table is kept up to date with
 - Fixed handling of CleanerPlan.
 - [HUDI-842] Reduce parallelism to speed up the test.
 - [HUDI-842] Implementation of CLI commands for metadata operations and lookups.
 - [HUDI-842] Extend rollback metadata to include the files which have been appended to.
 - [HUDI-842] Support for rollbacks in MOR Table.
 - MarkerBasedRollbackStrategy needs to correctly provide the list of files for which rollback blocks were appended.
 - [HUDI-842] Added unit test for rollback of partial commits (inflight but not completed yet).
 - [HUDI-842] Handled the error case where metadata update succeeds but dataset commit fails.
 - [HUDI-842] Schema evolution strategy for Metadata Table. Each type of metadata saved (FilesystemMetadata, ColumnIndexMetadata, etc.) will be a separate field with default null. The type of the record will identify the valid field. This way, we can grow the schema when a new type of information is saved, while still keeping it backward compatible.
 - [HUDI-842] Fix non-partitioned case and speed up initial creation of metadata table. Choose only 1 partition for jsc, as the number of records is low (hundreds to thousands). There is more overhead in creating a large number of partitions for a JavaRDD, and it slows down operations like WorkloadProfile.
For the non-partitioned case, use "." as the name of the partition to prevent empty keys in HFile.
 - [HUDI-842] Reworked metrics publishing.
 - Code has been split into reader and writer sides. HoodieMetadata code is to be accessed by using HoodieTable.metadata() to get an instance of metadata for the table.
Code is serializable to allow executors to use the functionality.
 - [RFC-15] Add metrics to track the time for each file system call.
 - [RFC-15] Added a distributed metrics registry for spark which can be used to collect metrics from executors. This helps create a stats dashboard which shows the metadata table improvements in real-time for production tables.
 - [HUDI-1321] Created HoodieMetadataConfig to specify configuration for the metadata table. This is safer than full-fledged properties for the metadata table (like HoodieWriteConfig), as it makes it burdensome to tune the metadata. With limited configuration, we can control the performance of the metadata table closely.

[HUDI-1319][RFC-15] Adding interfaces for HoodieMetadata, HoodieMetadataWriter (apache#2266)
 - moved MetadataReader to HoodieBackedTableMetadata, under the HoodieTableMetadata interface
 - moved MetadataWriter to HoodieBackedTableMetadataWriter, under the HoodieTableMetadataWriter
 - Pulled all the metrics into HoodieMetadataMetrics
 - Writer now wraps the metadata, instead of extending it
 - New enum for MetadataPartitionType
 - Streamlined code flow inside HoodieBackedTableMetadataWriter w.r.t initializing metadata state
 - [HUDI-1319] Make async operations work with metadata table (apache#2332)
 - Changes the syncing model to only move over completed instants on data timeline
 - Syncing happens postCommit and on writeClient initialization
 - Latest delta commit on the metadata table is sufficient as the watermark for data timeline archival
 - Cleaning/Compaction use a suffix to the last instant written to metadata table, such that we keep the 1-1
 - .. mapping between data and metadata timelines.
 - Got rid of a lot of the complexity around checking for valid commits during open of base/log files
 - Tests now use local FS, to simulate more failure scenarios
 - Some failure scenarios exposed HUDI-1434, which is needed for MOR to work correctly

co-authored by: Vinoth Chandar <[email protected]>

[HUDI-1450] Use metadata table for listing in HoodieROTablePathFilter (apache#2326)

[HUDI-1394] [RFC-15] Use metadata table (if present) to get all partition paths (apache#2351)

[HUDI-1469] Faster initialization of metadata table using parallelized listing. (apache#2343)

 * [HUDI-1469] Faster initialization of metadata table using parallelized listing which finds partitions and files in a single scan.
 * MINOR fixes

Co-authored-by: Vinoth Chandar <[email protected]>

[HUDI-1325] [RFC-15] Merge updates of unsynced instants to metadata table (apache#2342)

[RFC-15] Fix partition key in metadata table when bootstrapping from file system (apache#2387)

Co-authored-by: Ryan Pifer <[email protected]>

[HUDI-1312] [RFC-15] Support for metadata listing for snapshot queries through Hive/SparkSQL (apache#2366)

Co-authored-by: Ryan Pifer <[email protected]>

[HUDI-1504] Allow log files generated during restore/rollback to be synced as well

 - TestHoodieBackedMetadata#testSync etc now run for MOR tables
 - HUDI-1502 is still pending and has issues for MOR/rollbacks
 - Also addressed bunch of code review comments.

[HUDI-1498] Read clustering plan from requested file for inflight instant (apache#2389)

[HUDI-1506] Fix wrong exception thrown in HoodieAvroUtils (apache#2405)

[HUDI-1383] Fixing sorting of partition vals for hive sync computation (apache#2402)

[HUDI-1507] Change timeline utils to support reading replacecommit metadata (apache#2407)

[MINOR] Rename unit test package of hudi-spark3 from scala to java (apache#2411)

[HUDI-1513] Introduce WriteClient#preWrite() and relocate metadata table syncing (apache#2413)

- Syncing to metadata table, setting operation type, starting async cleaner done in preWrite()
 - Fixes an issues where delete() was not starting async cleaner correctly
 - Fixed tests and enabled metadata table for TestAsyncCompaction

[HUDI-1510] Move HoodieEngineContext and its dependencies to hudi-common (apache#2410)

[MINOR] Sync HUDI-1196 to  FlinkWriteHelper (apache#2415)

[HUDI-1514] Avoid raw type use for parameter of Transformer interface (apache#2420)

[HUDI-920] Support Incremental query for MOR table (apache#1938)

[HUDI-1276] [HUDI-1459] Make Clustering/ReplaceCommit and Metadata table be compatible (apache#2422)

* [HUDI-1276] [HUDI-1459] Make Clustering/ReplaceCommit and Metadata table be compatible

* Use filesystemview and json format from metadata. Add tests

Co-authored-by: Satish Kotha <[email protected]>

[HUDI-1399] Support an independent clustering spark job to asynchronously cluster (apache#2379)

* [HUDI-1481] add structured streaming and delta streamer clustering unit test

* [HUDI-1399] support an independent clustering spark job to asynchronously cluster

* [HUDI-1399] support an independent clustering spark job to asynchronously cluster

* [HUDI-1498] Read clustering plan from requested file for inflight instant (apache#2389)

* [HUDI-1399] support an independent clustering spark job with schedule-generated instant time

Co-authored-by: satishkotha <[email protected]>

[MINOR] fix spark 3 build for incremental query on MOR (apache#2425)

[HUDI-1479] Use HoodieEngineContext to parallelize fetching of partiton paths (apache#2417)

* [HUDI-1479] Use HoodieEngineContext to parallelize fetching of partition paths

* Adding testClass for FileSystemBackedTableMetadata

Co-authored-by: Nishith Agarwal <[email protected]>

[HUDI-1520] add configure for spark sql overwrite use INSERT_OVERWRITE_TABLE (apache#2428)

[HUDI-1502] MOR rollback and restore support for metadata sync (apache#2421)

- Adds field to RollbackMetadata that capture the logs written for rollback blocks
- Adds field to RollbackMetadata that capture new logs files written by unsynced deltacommits

Co-authored-by: Vinoth Chandar <[email protected]>

Reviewers: O955 Project Hoodie Project Reviewer: Add blocking reviewers!, PHID-PROJ-pxfpotkfgkanblb3detq!, #ldap_hudi, modi

Reviewed By: #ldap_hudi, modi

Differential Revision: https://code.uberinternal.com/D5347141