@jojochuang (Contributor)
What changes were proposed in this pull request?

HDDS-12518. Compact RocksDB after consecutive deletions

  • Keep a counter for each OM RocksDB column family; every delete increments the counter.
  • Add a background service, CompactionService, that periodically (default: every 5 minutes) checks whether a column family has accumulated 10,000 deletes, and schedules a full compaction if it has.
  • Using this heuristic, deletion of a large directory improved from 20k directories per hour to 400k per hour on my machine.
  • The average RocksDB seek time stayed below 2 ms. (Without compactions, the average seek time would grow toward 20 ms after several hours.)
  • I also did a proof of concept that simply compacted over and over again, in which case throughput could increase to 1 million deletions per hour.
  • But that would be too intensive if there is foreground traffic at the same time, so the less frequent approach was chosen.
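
The counter-plus-periodic-check mechanism above can be sketched with plain JDK types. This is an illustrative stand-in, not the actual PR code: the class name DeleteCompactionTracker, its methods, and the callback shape are all hypothetical; the real change wires counters into TypedTable and runs the check from a CompactionService background service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical sketch of the heuristic described above: every delete bumps a
// per-column-family counter; a periodic background check schedules a full
// compaction once a table accumulates `threshold` uncompacted deletes
// (10,000 by default in the PR, checked every 5 minutes).
class DeleteCompactionTracker {
  private final Map<String, AtomicLong> uncompactedDeletes = new ConcurrentHashMap<>();
  private final long threshold;
  private final Consumer<String> compactor; // callback that runs the actual compactRange

  DeleteCompactionTracker(long threshold, Consumer<String> compactor) {
    this.threshold = threshold;
    this.compactor = compactor;
  }

  // Called on every delete against a column family.
  void recordDelete(String table) {
    uncompactedDeletes.computeIfAbsent(table, t -> new AtomicLong()).incrementAndGet();
  }

  // Called periodically by the background service.
  void checkAndCompact() {
    uncompactedDeletes.forEach((table, counter) -> {
      if (counter.get() >= threshold) {
        compactor.accept(table); // schedule full compaction for this table
        counter.set(0);          // reset so the next window starts fresh
      }
    });
  }
}
```

Resetting the counter after scheduling (rather than after the compaction finishes) keeps the sketch simple; the real service also has to handle suspension and in-flight compactions.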

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-12518

How was this patch tested?

  • New unit tests.
  • Deployed to a dev cluster to evaluate the performance.

jojochuang and others added 8 commits March 6, 2025 15:47
WIP

Change-Id: I5f14a8fafa6a66c2f8530ce1592bd8c9daf10f73

WIP

Change-Id: Icffb8f02d38836d45a6af4a620754c0619333b9e

WIP

Change-Id: I68774824a062ced73257d4769cf9f53fe6f1cfa8

Checkstyle

Change-Id: If90067ce9bc046a2a87308a0ebaae1590ebaee9d

Fix counter bug

Change-Id: I20a70336a8b4063e2fb6c727e0d624419536841b

Compact DeletedTable as well.

Change-Id: I1ca99a7f687e137581f7a161bf14ef2e84670acb

Fix tests

Change-Id: Ia03024fc364be05c031e4d3dc3fb7eb230a53b08

Added test in TestOMKeyPurgeRequestAndResponse.

Change-Id: I2cfe621d938e9ff74622c3952d9dd939b2c0763a

Fix test

Change-Id: I714cc6f3270776eb13f3ce2b6cba0aba012a47a8

Checkstyle

Change-Id: I497f44abc2ef39547d5601f52303f8b47ef3863c

Fix test

Change-Id: I65a48d777b2ea0fe590aa2934a99351a6bd920cb

Close ManagedCompactRangeOptions properly

Change-Id: I14e7ce2eb4765cc2cfbf2e1caeb8910ce601b590

Shorten compaction threshold and interval

Change-Id: I7be18c12a23469da6362be16c1938489fe9f597c

Refactor, move counters to CompactionService.

Change-Id: I80fba6fe68b5d9f918e9da05eb2d10e7990d5741
(cherry picked from commit 823109590b4c6120b71e4dae93b5243eed559b2a)

 Conflicts:
	hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMDirectoriesPurgeRequestWithFSO.java
	hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyPurgeRequest.java
	hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMDirectoriesPurgeResponseWithFSO.java
	hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyPurgeResponse.java
	hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/service/DirectoryDeletingService.java
	hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMDirectoriesPurgeRequestAndResponse.java
	hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyPurgeRequestAndResponse.java
	hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyRequest.java

Change-Id: I0c7f07bf9b893b9bac2bf6b918d32214f7dab1a2

Update

Change-Id: Ie58d0b44f0dfef8a6555d3c286104239d938d24a
…onService check uncompacted deletes periodically.

Change-Id: I606abc7240cb1ff81b041b37eeff9800c6ea96a9
Change-Id: Ieb39c536c06bb90dbea0f67d3a7a69b8d38bde68
Change-Id: I6bded13066ae8e503c273cecb7f821be5d1d80a5
Change-Id: Ie67717d22bddc4f8012ad700c9c7b311cfe8445e
@errose28 (Contributor) left a comment:

Thanks for adding this with benchmarks @jojochuang. Should we add some metrics for this? Even just the number of compactions that were initiated per run and how many uncompacted keys there were total would help. I'm not sure if RocksDB metrics will differentiate between compactions Ozone initiates vs. those that happen automatically.

Tracking things like the runtime of the service and the number of timeouts would be good too, but an alternative proposal is to do that in the BackgroundService parent class, with child classes just providing a name for their metrics. In that case we would not add it here.

Having per-column family metrics on num uncompacted deletions and compactions done would be nice too, but splitting them out might be too bulky.
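
A minimal plain-Java sketch of the metrics suggested here (compactions initiated by Ozone, plus per-column-family uncompacted deletes). All names are hypothetical; a real Ozone implementation would more likely expose these through Hadoop Metrics2 rather than bare AtomicLongs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical metrics holder for the CompactionService (names are made up).
class CompactionServiceMetrics {
  // Compactions this service initiated, as opposed to RocksDB auto-compactions.
  private final AtomicLong compactionsInitiated = new AtomicLong();
  // Uncompacted deletes tracked per column family, for per-CF visibility.
  private final Map<String, AtomicLong> uncompactedDeletes = new ConcurrentHashMap<>();

  void incrCompactionsInitiated() {
    compactionsInitiated.incrementAndGet();
  }

  void addUncompactedDeletes(String columnFamily, long delta) {
    uncompactedDeletes.computeIfAbsent(columnFamily, cf -> new AtomicLong()).addAndGet(delta);
  }

  long getCompactionsInitiated() {
    return compactionsInitiated.get();
  }

  // Total across all column families, as suggested in the review.
  long getTotalUncompactedDeletes() {
    return uncompactedDeletes.values().stream().mapToLong(AtomicLong::get).sum();
  }
}
```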

private final AtomicLong numCompactions;
private final AtomicBoolean suspended;
private final long compactionThreshold;
// list of tables that can be compacted
A contributor commented:

Why not compact all tables that exceed the threshold? The counter is already being tracked for every column family.

@jojochuang (Contributor Author):

Good question. I was under the impression that snapshot-related tables shouldn't be compacted, but I could be wrong. @swamirishi, thoughts on this?

Also, it's based on my observation of the most active tables. I can imagine multipartInfoTable would benefit from this too.

@Tejaskriya (Contributor) left a comment:

Thanks for working on this @jojochuang. Just a small question: would a 5-minute interval be the best default for the compaction service? Since compaction is a heavy operation, would a less frequent interval be better?

@peterxcli (Member) left a comment:

Thanks for this improvement; I left some comments, please take them with a grain of salt.

Off-topic suggestion: maybe this approach could also be proposed to the RocksDB community for native support of something like a 'Max Time Compaction Boundary' or 'Max Deleted Key Compaction Boundary'.

Comment on lines +129 to +132
// Find CF Handler
RocksDatabase rocksDatabase = ((RDBStore) omMetadataManager.getStore()).getDb();
RocksDatabase.ColumnFamily columnFamily = rocksDatabase.getColumnFamily(tableName);
rocksDatabase.compactRange(columnFamily, null, null, options);
A member commented:

Move this into RDBStore#compactRange(String tableName, Option opt)?

omResponse, deletedKeyNames, snapInfo, null);
omKeyPurgeResponse.addToDBBatch(omMetadataManager, batchOperation);
assertEquals(deletedKeyNames.size(),
((TypedTable)this.omMetadataManager.getTable(DELETED_TABLE)).getUncompactedDeletes().get());
A member commented:

nit:

Suggested change
((TypedTable)this.omMetadataManager.getTable(DELETED_TABLE)).getUncompactedDeletes().get());
((TypedTable)this.omMetadataManager.getDeletedTable()).getUncompactedDeletes().get());

Comment on lines +364 to +368
((TypedTable)this.omMetadataManager.getTable(FILE_TABLE)).getUncompactedDeletes().get());
assertEquals(0,
((TypedTable)this.omMetadataManager.getTable(DELETED_DIR_TABLE)).getUncompactedDeletes().get());
assertEquals(0,
((TypedTable)this.omMetadataManager.getTable(DIRECTORY_TABLE)).getUncompactedDeletes().get());
A member commented:

ditto

LOG.debug("Running CompactTask");

compactFully(tableName);
numCompactions.incrementAndGet();
A member commented:

This seems to be used only for tests; how about moving this logic into the test with an overridden CompactionService instance?

A member commented:

Then replace this with some metrics.

@peterxcli (Member):

Just found that TiKV has a similar configuration for this; see tikv/tikv#2631.

They have three configs to control auto-compaction:

  • region_compact_check_interval: interval (ms) between checks of whether to start compaction for a region.
  • region_compact_check_step: number of regions checked each time. This does not seem related to the Ozone use case.
  • region_compact_min_tombstones: minimum number of tombstones to trigger manual compaction.

cc @jojochuang

@jojochuang (Contributor Author):

Closing this because #8260 / HDDS-12819 partially overlaps in scope. Peter's other proposal, to do partial compaction, would be a far better solution too.

@jojochuang closed this on Apr 17, 2025