
Conversation

@sbernauer (Contributor)

What is the purpose of the pull request

We monitor our running DeltaStreamer instances. Currently, we cannot tell from our monitoring whether a DeltaStreamer is not making any commits because it has a problem or simply because there are no events to consume.

Brief change log

This PR adds a metric, lastSync, that is similar to commitTime but is updated after every successful sync, not only after a successful commit.
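
For illustration, a minimal, self-contained sketch of the intended difference between the two values (hypothetical class and member names, not the actual Hudi metrics API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified sketch (not the actual Hudi metrics API): lastSync is
// refreshed on every successful sync round, while commitTime only moves on a commit.
public class DeltaStreamerSyncMetricsSketch {

    private final AtomicLong commitTime = new AtomicLong(0L);   // last successful commit, epoch millis
    private final AtomicLong lastSyncTime = new AtomicLong(0L); // last successful sync, epoch millis

    public void onSuccessfulCommit(long epochMillis) {
        commitTime.set(epochMillis);
        lastSyncTime.set(epochMillis); // a commit implies a successful sync as well
    }

    public void onSuccessfulSyncWithoutNewData(long epochMillis) {
        // No commit happened because there was nothing to ingest,
        // but the streamer is alive and syncing.
        lastSyncTime.set(epochMillis);
    }

    public long lastSyncTime() {
        return lastSyncTime.get();
    }

    public long commitTime() {
        return commitTime.get();
    }
}
```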

Verify this pull request

Look for the metric <topic>_deltastreamer_lastSync.

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

@hudi-bot (Collaborator) commented Jun 22, 2021

CI report:


@codecov-commenter commented Jun 22, 2021

Codecov Report

Merging #3129 (48404f0) into master (cdb9b48) will decrease coverage by 1.71%.
The diff coverage is 50.00%.


@@             Coverage Diff              @@
##             master    #3129      +/-   ##
============================================
- Coverage     45.85%   44.14%   -1.72%     
+ Complexity     5269     4535     -734     
============================================
  Files           908      819      -89     
  Lines         39332    36173    -3159     
  Branches       4239     3920     -319     
============================================
- Hits          18036    15967    -2069     
+ Misses        19451    18472     -979     
+ Partials       1845     1734     -111     
Flag Coverage Δ
hudicli 39.95% <ø> (ø)
hudiclient 16.44% <ø> (-14.00%) ⬇️
hudicommon 47.58% <ø> (+<0.01%) ⬆️
hudiflink 61.08% <ø> (-0.26%) ⬇️
hudihadoopmr 51.29% <ø> (ø)
hudisparkdatasource 67.05% <ø> (+0.52%) ⬆️
hudisync 51.73% <ø> (ø)
huditimelineservice 64.36% <ø> (ø)
hudiutilities 58.36% <50.00%> (+1.72%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...ties/deltastreamer/HoodieDeltaStreamerMetrics.java 35.89% <33.33%> (-0.22%) ⬇️
...apache/hudi/utilities/deltastreamer/DeltaSync.java 70.94% <100.00%> (+0.09%) ⬆️
...e/hudi/sink/partitioner/profile/WriteProfiles.java 50.00% <0.00%> (-2.50%) ⬇️
...i/common/table/timeline/HoodieDefaultTimeline.java 79.22% <0.00%> (-1.30%) ⬇️
...java/org/apache/hudi/sink/StreamWriteFunction.java 84.34% <0.00%> (-0.51%) ⬇️
...c/main/java/org/apache/hudi/util/StreamerUtil.java 55.00% <0.00%> (-0.47%) ⬇️
...va/org/apache/hudi/metadata/BaseTableMetadata.java 0.00% <0.00%> (ø)
...apache/hudi/sink/compact/HoodieFlinkCompactor.java 0.00% <0.00%> (ø)
...pache/hudi/common/config/HoodieMetadataConfig.java 0.00% <0.00%> (ø)
...pache/hudi/metadata/HoodieBackedTableMetadata.java 0.00% <0.00%> (ø)
... and 111 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cdb9b48...48404f0.

@yanghua (Contributor) commented Jun 22, 2021

@sbernauer Thanks for your contribution. Please follow the contribution guide and file a JIRA ticket.

@sbernauer sbernauer changed the title Added metric for time of lastSync [HUDI-2055] Added metric for time of lastSync Jun 22, 2021
@sbernauer (Contributor, Author)

Thanks @yanghua for the feedback! I've created https://issues.apache.org/jira/browse/HUDI-2055

@vinothchandar (Member)

@sbernauer this PR is good. But I'm curious: are you unable to use the basic commit metrics that are emitted at the write-client level?

cc @prashantwason @n3nash @satishkotha It seems like upstreaming the Grafana dashboard would help a lot of people.

@sbernauer (Contributor, Author)

Hi @vinothchandar,
we have around 200 DeltaStreamer instances running. They range from several thousand events per second to one event every few months. Some DeltaStreamers also consume from Kafka topics that currently receive no events (e.g. they exist only for high availability in case a data center goes down). From a monitoring perspective, it is important to know whether the application is running and syncing, regardless of whether there are any events to commit.

@sbernauer (Contributor, Author)

Hi all, it would be great if we could get this merged!
Thanks in advance,
Sebastian

@vinothchandar (Member) left a comment

Your reasoning makes sense.

FYI: we have landed a Grafana dashboard that Uber uses to monitor other metrics. It uses M3 etc., so it may need some work. Once it is cleaned up, we should hopefully have a more straightforward monitoring story.

srcRecordsWithCkpt.getRight().getLeft(), metrics, overallTimerContext);
}

metrics.updateDeltaStreamerSyncMetrics(System.currentTimeMillis());
Member

Just confirming that you don't want to emit the metric if an exception was thrown and the sync errored out.

@sbernauer (Contributor, Author)

I would suggest not updating the metric if an error occurred. Personally, I just need a metric that is also updated when a sync happens but no commit is made because there is no new data.
As I understand it, this patch would be wrong if it also updated the metric when something fails; am I right?

Member

If something fails, an exception should be raised from this function, and metrics.updateDeltaStreamerSyncMetrics(...) won't be called.

  1. If the sync runs but fails due to an exception, no metric is published.
  2. If the sync runs but there is no data to sync, the metric is published.

So if lastSync stops updating, the process failed with an exception (a sketch of this control flow is shown below).
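
For illustration, a minimal sketch of this control flow, with placeholder method names rather than the real DeltaSync API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of the control flow described above; method names are placeholders,
// not the exact DeltaSync API.
public class SyncMetricFlowSketch {

    private final AtomicLong lastSyncTime = new AtomicLong(0L);

    public void syncOnce() throws Exception {
        // Case 1: if this throws, the update below is never reached,
        // so no lastSync value is published for this round.
        readSourceAndWriteIfNeeded();

        // Case 2: the sync finished (possibly with zero new records),
        // so the lastSync timestamp is published.
        lastSyncTime.set(System.currentTimeMillis());
    }

    private void readSourceAndWriteIfNeeded() throws Exception {
        // Placeholder for the actual ingest work; it may legitimately do nothing
        // when the source topic has no new events.
    }
}
```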

Member

Yes. This patch is okay as long as you can set your alerts to catch when no sync metrics have been emitted for a period of time.

Another way to monitor would be to emit a 1 for success and a 0 for failure, which lets you set alerts on a rolling-window aggregate (see the sketch below).

I have done both personally; I can't say one is better than the other by a lot.
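
For illustration, a hedged sketch of that 1/0 approach; the names are illustrative and not part of the Hudi API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of the alternative described above: publish 1 on success and 0 on
// failure, so alerts can be set on a rolling-window aggregate of this gauge.
public class SyncSuccessGaugeSketch {

    private final AtomicLong lastSyncSucceeded = new AtomicLong(1L);

    public void runSync(Runnable sync) {
        try {
            sync.run();
            lastSyncSucceeded.set(1L); // success, even if there was nothing to ingest
        } catch (RuntimeException e) {
            lastSyncSucceeded.set(0L); // failure; a rolling average below 1 can trigger an alert
            throw e;
        }
    }

    public long lastSyncSucceeded() {
        return lastSyncSucceeded.get();
    }
}
```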

@sbernauer (Contributor, Author)

Thanks for your feedback! If you can confirm that an error throws an exception, then it works as I intended.
Currently we monitor the commitTime metric. The problem is that we cannot distinguish between ingestion having stopped working and there simply being no records to ingest.

@vinothchandar vinothchandar merged commit 05d6e18 into apache:master Jul 6, 2021
Samrat002 pushed a commit to Samrat002/hudi that referenced this pull request Jul 15, 2021
ghost pushed a commit to shivagowda/hudi that referenced this pull request Jul 15, 2021
ghost pushed a commit to shivagowda/hudi that referenced this pull request Jul 31, 2021

6 participants