HDDS-12518. Compact RocksDB after consecutive deletions #8029
Conversation
Commits in this branch:

- WIP (Change-Id: I5f14a8fafa6a66c2f8530ce1592bd8c9daf10f73)
- WIP (Change-Id: Icffb8f02d38836d45a6af4a620754c0619333b9e)
- WIP (Change-Id: I68774824a062ced73257d4769cf9f53fe6f1cfa8)
- Checkstyle (Change-Id: If90067ce9bc046a2a87308a0ebaae1590ebaee9d)
- Fix counter bug (Change-Id: I20a70336a8b4063e2fb6c727e0d624419536841b)
- Compact DeletedTable as well. (Change-Id: I1ca99a7f687e137581f7a161bf14ef2e84670acb)
- Fix tests (Change-Id: Ia03024fc364be05c031e4d3dc3fb7eb230a53b08)
- Added test in TestOMKeyPurgeRequestAndResponse. (Change-Id: I2cfe621d938e9ff74622c3952d9dd939b2c0763a)
- Fix test (Change-Id: I714cc6f3270776eb13f3ce2b6cba0aba012a47a8)
- Checkstyle (Change-Id: I497f44abc2ef39547d5601f52303f8b47ef3863c)
- Fix test (Change-Id: I65a48d777b2ea0fe590aa2934a99351a6bd920cb)
- Close ManagedCompactRangeOptions properly (Change-Id: I14e7ce2eb4765cc2cfbf2e1caeb8910ce601b590)
- Shorten compaction threshold and interval (Change-Id: I7be18c12a23469da6362be16c1938489fe9f597c)
- Refactor, move counters to CompactionService. (Change-Id: I80fba6fe68b5d9f918e9da05eb2d10e7990d5741)
  (cherry picked from commit 823109590b4c6120b71e4dae93b5243eed559b2a)
  Conflicts:
    hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMDirectoriesPurgeRequestWithFSO.java
    hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyPurgeRequest.java
    hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMDirectoriesPurgeResponseWithFSO.java
    hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyPurgeResponse.java
    hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/DirectoryDeletingService.java
    hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMDirectoriesPurgeRequestAndResponse.java
    hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyPurgeRequestAndResponse.java
    hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRequest.java
  (Change-Id: I0c7f07bf9b893b9bac2bf6b918d32214f7dab1a2)
- Update (Change-Id: Ie58d0b44f0dfef8a6555d3c286104239d938d24a)
- …onService check uncompacted deletes periodically. (Change-Id: I606abc7240cb1ff81b041b37eeff9800c6ea96a9)
- (Change-Id: I6bded13066ae8e503c273cecb7f821be5d1d80a5)
- (Change-Id: Ie67717d22bddc4f8012ad700c9c7b311cfe8445e)
errose28 left a comment:
Thanks for adding this with benchmarks @jojochuang. Should we add some metrics for this? Even just the number of compactions that were initiated per run and how many uncompacted keys there were total would help. I'm not sure if RocksDB metrics will differentiate between compactions Ozone initiates vs. those that happen automatically.
Tracking things like the runtime of the service and the number of timeouts would be good too, but arguably that belongs in the BackgroundService parent class, with child classes just providing a name for their metrics. In that case we would not add it here.
Having per-column family metrics on num uncompacted deletions and compactions done would be nice too, but splitting them out might be too bulky.
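To make the suggestion concrete, here is a minimal sketch of the kind of service-level metrics being proposed: compactions initiated per run plus the total uncompacted deletes observed. The class name and method names are hypothetical, not part of the PR; a real implementation in Ozone would likely register these with the metrics system rather than hold bare counters.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of per-run metrics for a compaction service:
 * how many compactions each run initiated, and how many uncompacted
 * deletes were seen in total across runs.
 */
public class CompactionServiceMetrics {
  private final AtomicLong compactionsInitiated = new AtomicLong();
  private final AtomicLong uncompactedDeletesSeen = new AtomicLong();

  /** Record the outcome of one background-service run. */
  public void recordRun(int compactionsThisRun, long uncompactedDeletesThisRun) {
    compactionsInitiated.addAndGet(compactionsThisRun);
    uncompactedDeletesSeen.addAndGet(uncompactedDeletesThisRun);
  }

  public long getCompactionsInitiated() {
    return compactionsInitiated.get();
  }

  public long getUncompactedDeletesSeen() {
    return uncompactedDeletesSeen.get();
  }
}
```

Keeping these counters separate from RocksDB's own statistics also answers the concern above: RocksDB metrics may not distinguish Ozone-initiated compactions from automatic ones, but counters incremented only on the service's own `compactRange` calls do.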
```java
private final AtomicLong numCompactions;
private final AtomicBoolean suspended;
private final long compactionThreshold;
// list of tables that can be compacted
```
Why not compact all tables that exceed the threshold? The counter is already being tracked for every column family.
Good question. I was under the impression that snapshot-related tables shouldn't be compacted, but I could be wrong. @swamirishi, thoughts on this?
Also, it's based on my observation of the most active tables. I can imagine multipartInfoTable would benefit from this too.
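For comparison, the "compact everything over the threshold" alternative being discussed could look roughly like this sketch. The helper and its inputs are hypothetical (the PR tracks counters inside TypedTable instead); the point is only that a per-column-family counter map makes the candidate selection a simple threshold scan, with any snapshot-related tables excluded up front if needed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch: instead of a fixed allow-list of tables,
 * select every column family whose uncompacted-delete counter
 * meets the threshold, skipping an explicit exclusion set.
 */
public class CompactionCandidates {
  public static List<String> candidates(
      Map<String, AtomicLong> uncompactedDeletes,
      Set<String> excluded,
      long threshold) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, AtomicLong> e : uncompactedDeletes.entrySet()) {
      // Skip tables (e.g. snapshot-related ones) that must not be compacted.
      if (excluded.contains(e.getKey())) {
        continue;
      }
      if (e.getValue().get() >= threshold) {
        result.add(e.getKey());
      }
    }
    return result;
  }
}
```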
Tejaskriya left a comment:
Thanks for working on this @jojochuang. Just a small question: would a 5-minute interval be the best default for the compaction service? Since compaction is a heavy operation, would a less frequent interval be better?
peterxcli left a comment:
Thanks for this improvement; I left some comments. Please take them with a grain of salt.
Off-topic suggestion: maybe this approach could also be proposed to the RocksDB community for native support of something like a "Max Time Compaction Boundary" or "Max Deleted Key Compaction Boundary".
```java
// Find CF Handler
RocksDatabase rocksDatabase = ((RDBStore) omMetadataManager.getStore()).getDb();
RocksDatabase.ColumnFamily columnFamily = rocksDatabase.getColumnFamily(tableName);
rocksDatabase.compactRange(columnFamily, null, null, options);
```
Move this into RDBStore#compactRange(String tableName, Option opt)?
```java
    omResponse, deletedKeyNames, snapInfo, null);
omKeyPurgeResponse.addToDBBatch(omMetadataManager, batchOperation);
assertEquals(deletedKeyNames.size(),
    ((TypedTable)this.omMetadataManager.getTable(DELETED_TABLE)).getUncompactedDeletes().get());
```
nit:

```diff
-    ((TypedTable)this.omMetadataManager.getTable(DELETED_TABLE)).getUncompactedDeletes().get());
+    ((TypedTable)this.omMetadataManager.getDeletedTable()).getUncompactedDeletes().get());
```
```java
    ((TypedTable)this.omMetadataManager.getTable(FILE_TABLE)).getUncompactedDeletes().get());
assertEquals(0,
    ((TypedTable)this.omMetadataManager.getTable(DELETED_DIR_TABLE)).getUncompactedDeletes().get());
assertEquals(0,
    ((TypedTable)this.omMetadataManager.getTable(DIRECTORY_TABLE)).getUncompactedDeletes().get());
```
ditto
```java
LOG.debug("Running CompactTask");
...
compactFully(tableName);
numCompactions.incrementAndGet();
```
This seems to be used only for tests; how about moving this logic into the test with an overridden CompactionService instance?
Then replace this with some metrics.
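The override pattern being suggested here can be sketched as follows. The class and method names are hypothetical stand-ins, not the PR's actual code: the production service exposes a no-op hook, and only the test subclass counts invocations, keeping test-only state out of the service itself.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch: keep test-only counters out of production code
 * by having the test subclass a compaction service and override a hook.
 */
public class CompactionHook {

  /** Production-style service: the hook is a no-op. */
  public static class Service {
    protected void onCompaction(String tableName) {
      // no-op in production; tests may override to observe compactions
    }

    public void compact(String tableName) {
      // ... the real service would call compactFully(tableName) here ...
      onCompaction(tableName);
    }
  }

  /** Test subclass: counts how many compactions were triggered. */
  public static class CountingService extends Service {
    public final AtomicLong count = new AtomicLong();

    @Override
    protected void onCompaction(String tableName) {
      count.incrementAndGet();
    }
  }
}
```

With this shape, `numCompactions` never needs to live in the service; the test asserts on its own subclass's counter instead.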
Just found that TiKV has a similar configuration for this. They have three configs to control auto compaction:

cc @jojochuang
Closing this since #8260 / HDDS-12819 partially overlaps with it in scope.
What changes were proposed in this pull request?
HDDS-12518. Compact RocksDB after consecutive deletions
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-12518
How was this patch tested?