[server] Create separate RocksDBMemoryStats for RMD CF #1485

Open · wants to merge 5 commits into base: main
Conversation

majisourav99 (Contributor)
Create separate RocksDBMemoryStats for RMD CF

Today, block-cache usage is emitted as a single combined value for the value and RMD column families. This PR isolates the RMD block-cache portion of the metric under a separate metric name.
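For illustration (a sketch only; combinedGauge and rmdGauge are placeholder names, and "rocksdb.block-cache-usage" is one of the affected RocksDB properties): before this change one gauge sums both CFs, and after it the RMD CF reports under an "rmd-" prefixed name.

  // Before: a single gauge mixes the default and RMD CFs together.
  registerSensor(combinedGauge, "rocksdb.block-cache-usage");
  // After: the RMD CF is emitted under its own, prefixed metric name.
  registerSensor(rmdGauge, "rmd-rocksdb.block-cache-usage");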

How was this PR tested?

CI

Does this PR introduce any user-facing changes?

  • Yes. This PR separates the RocksDB block-cache metric for the RMD CF into its own, separately named metric.

registerSensor(new AsyncGauge((ignored, ignored2) -> {
  Long total = 0L;
  // Lock down the list of RocksDB interfaces while the collection is ongoing
  synchronized (hostedRocksDBPartitions) {
@ZacAttack (Contributor) · Jan 30, 2025

Would this block rebalance-triggered state transitions? Could we just use a thread-safe iterator?
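A minimal sketch of that alternative, assuming hostedRocksDBPartitions could be backed by a ConcurrentHashMap (a hypothetical change, with an assumed String key type):

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  Map<String, RocksDBStoragePartition> hostedRocksDBPartitions = new ConcurrentHashMap<>();

  // ConcurrentHashMap's values() iterator is weakly consistent: it never throws
  // ConcurrentModificationException and never blocks writers, so a rebalance
  // could add or remove a partition while the gauge is being computed.
  long total = 0L;
  for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
    total += dbPartition.getRocksDBStatValue(metric);
  }

The trade-off is that a partition added or removed mid-iteration may or may not be counted, which seems acceptable for a best-effort memory gauge.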

Comment on lines 114 to 126
for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
  if (!(dbPartition instanceof ReplicationMetadataRocksDBStoragePartition)) {
    continue;
  }
  try {
    total += dbPartition.getRocksDBStatValue(metric);
  } catch (VeniceException e) {
    LOGGER.warn("Could not get rocksDB metric {} with error:", metric, e);
    continue;
  }
  if (INSTANCE_METRIC_DOMAINS.contains(metric)) {
    // Collect this metric once from any available partition and move on
    break;
  }
@gaojieliu (Contributor) · Jan 31, 2025

These changes don't look correct.

There are two sets of metrics: block cache metrics and others. Block cache metrics should only be collected once, while the other metrics are aggregated across all storage partition instances.

RMD is a column family, and ReplicationMetadataRocksDBStoragePartition contains two CFs: default and timestamp_metadata. I think the correct logic should be as follows:

for (String metric: PARTITION_METRIC_DOMAINS) {
  // Skip the block cache related metrics when the PlainTable format is enabled.
  if (plainTableEnabled && BLOCK_CACHE_METRICS.contains(metric)) {
    continue;
  }
  registerSensor(new AsyncGauge((ignored, ignored2) -> {
    Long total = 0L;
    // Lock down the list of RocksDB interfaces while the collection is ongoing
    synchronized (hostedRocksDBPartitions) {
      for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
        try {
          // Existing behavior: measure the default CF.
          total += dbPartition.getRocksDBStatValue(metric);
        } catch (VeniceException e) {
          LOGGER.warn("Could not get rocksDB metric {} with error:", metric, e);
          continue;
        }
        if (INSTANCE_METRIC_DOMAINS.contains(metric)) {
          // Collect this metric once from any available partition and move on
          break;
        }
      }
    }
    return total;
  }, metric));

  // ============== New code ===================
  registerSensor(new AsyncGauge((ignored, ignored2) -> {
    Long total = 0L;
    // Lock down the list of RocksDB interfaces while the collection is ongoing
    synchronized (hostedRocksDBPartitions) {
      for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
        try {
          if (dbPartition instanceof ReplicationMetadataRocksDBStoragePartition) {
            // New overload to extract the stat value for the non-default (RMD) CF.
            total += dbPartition.getRocksDBStatValue(metric, REPLICATION_METADATA_COLUMN_FAMILY);
          }
        } catch (VeniceException e) {
          LOGGER.warn("Could not get rocksDB metric {} with error:", metric, e);
          continue;
        }
        if (INSTANCE_METRIC_DOMAINS.contains(metric)) {
          // Collect this metric once from any available partition and move on
          break;
        }
      }
    }
    return total;
  }, "rmd-" + metric));
}

majisourav99 (Contributor, Author)

Thanks, I did not know there is a separate per-CF stat lookup:
dbPartition.getRocksDBStatValue(metric, REPLICATION_METADATA_COLUMN_FAMILY);
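For reference, a sketch of what that per-CF overload could look like inside the partition class; columnFamilyHandleMap is a hypothetical field mapping CF names to handles, while getLongProperty(ColumnFamilyHandle, String) is the stock org.rocksdb API for per-CF properties:

  public Long getRocksDBStatValue(String statName, String columnFamilyName) {
    // Hypothetical lookup from CF name to org.rocksdb.ColumnFamilyHandle.
    ColumnFamilyHandle handle = columnFamilyHandleMap.get(columnFamilyName);
    if (handle == null) {
      throw new VeniceException("Unknown column family: " + columnFamilyName);
    }
    try {
      // Per-CF variant of the RocksDB property lookup, e.g. "rocksdb.block-cache-usage".
      return rocksDB.getLongProperty(handle, statName);
    } catch (RocksDBException e) {
      throw new VeniceException("Failed to get stat " + statName, e);
    }
  }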

@@ -89,6 +90,9 @@ public RocksDBMemoryStats(MetricsRepository metricsRepository, String name, bool
// Lock down the list of RocksDB interfaces while the collection is ongoing
synchronized (hostedRocksDBPartitions) {
  for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
    if (dbPartition instanceof ReplicationMetadataRocksDBStoragePartition) {
Contributor

Why skip it? IIUC, an instance of ReplicationMetadataRocksDBStoragePartition contains two CFs, default and RMD, and here we should still call dbPartition.getRocksDBStatValue(metric); to measure the default CF usage, right?

That means we should remove this if statement, as sketched below.
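A sketch of the loop with that fix applied (the existing code from above, minus the instanceof guard):

  for (RocksDBStoragePartition dbPartition: hostedRocksDBPartitions.values()) {
    try {
      // Measure the default CF for every partition, including RMD-enabled ones.
      total += dbPartition.getRocksDBStatValue(metric);
    } catch (VeniceException e) {
      LOGGER.warn("Could not get rocksDB metric {} with error:", metric, e);
      continue;
    }
    if (INSTANCE_METRIC_DOMAINS.contains(metric)) {
      // Collect this metric once from any available partition and move on
      break;
    }
  }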

      continue;
    }
  }
  if (INSTANCE_METRIC_DOMAINS.contains(metric)) {
Contributor

Shall we check whether a separate block cache is enabled before emitting another set of block cache metrics for the RMD CF?
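A sketch of such a guard at the top of the rmd- registration loop; isBlockCacheSeparatedForRMD() is a hypothetical config accessor:

  // Hypothetical flag: when the RMD CF shares the default block cache, the
  // rmd- block cache gauges would just duplicate the default CF numbers.
  if (BLOCK_CACHE_METRICS.contains(metric) && !rocksDBServerConfig.isBlockCacheSeparatedForRMD()) {
    continue;
  }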
