diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index f467c80a7060..af9958941fac 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -20,7 +20,8 @@ We welcome contributions of:
  * Unit Tests (JUnit / Java)
  * Acceptance Tests (docker + robot framework)
  * Blockade tests (python + blockade)
- * Performance: We have multiple type of load generator / benchmark tools (`ozone freon`, `ozone genesis`), which can be used to test cluster and report problems.
+ * Performance: We have multiple types of load generator / benchmark tools (`ozone freon`),
+   which can be used to test the cluster and report problems.
  * **Bug reports** pointing out broken functionality, docs, or suggestions for improvements are always welcome!
 
 ## Who To Contact
diff --git a/hadoop-hdds/docs/content/tools/TestTools.md b/hadoop-hdds/docs/content/tools/TestTools.md
index ac025f0a3217..83b40cb5f3d1 100644
--- a/hadoop-hdds/docs/content/tools/TestTools.md
+++ b/hadoop-hdds/docs/content/tools/TestTools.md
@@ -106,131 +106,4 @@
 Average Time spent in key write: 00:00:10,894
 Total bytes written: 10240000
 Total Execution time: 00:00:16,898
 ***********************
-```
-
-## Genesis
-
-Genesis is a microbenchmarking tool. It's also included in the distribution (`ozone genesis`) but it doesn't require real cluster. It measures different part of the code in an isolated way (eg. the code which saves the data to the local RocksDB based key value stores)
-
-Example run:
-
-```
- ozone genesis -benchmark=BenchMarkRocksDbStore
-# JMH version: 1.19
-# VM version: JDK 11.0.1, VM 11.0.1+13-LTS
-# VM invoker: /usr/lib/jvm/java-11-openjdk-11.0.1.13-3.el7_6.x86_64/bin/java
-# VM options: -Dproc_genesis -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender
-# Warmup: 2 iterations, 1 s each
-# Measurement: 20 iterations, 1 s each
-# Timeout: 10 min per iteration
-# Threads: 4 threads, will synchronize iterations
-# Benchmark mode: Throughput, ops/time
-# Benchmark: org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test
-# Parameters: (backgroundThreads = 4, blockSize = 8, maxBackgroundFlushes = 4, maxBytesForLevelBase = 512, maxOpenFiles = 5000, maxWriteBufferNumber = 16, writeBufferSize = 64)
-
-# Run progress: 0.00% complete, ETA 00:00:22
-# Fork: 1 of 1
-# Warmup Iteration 1: 213775.360 ops/s
-# Warmup Iteration 2: 32041.633 ops/s
-Iteration 1: 196342.348 ops/s
- ·stack:
-
-Iteration 2: 41926.816 ops/s
- ·stack:
-
-Iteration 3: 210433.231 ops/s
- ·stack:
-
-Iteration 4: 46941.951 ops/s
- ·stack:
-
-Iteration 5: 212825.884 ops/s
- ·stack:
-
-Iteration 6: 145914.351 ops/s
- ·stack:
-
-Iteration 7: 141838.469 ops/s
- ·stack:
-
-Iteration 8: 205334.438 ops/s
- ·stack:
-
-Iteration 9: 163709.519 ops/s
- ·stack:
-
-Iteration 10: 162494.608 ops/s
- ·stack:
-
-Iteration 11: 199155.793 ops/s
- ·stack:
-
-Iteration 12: 209679.298 ops/s
- ·stack:
-
-Iteration 13: 193787.574 ops/s
- ·stack:
-
-Iteration 14: 127004.147 ops/s
- ·stack:
-
-Iteration 15: 145511.080 ops/s
- ·stack:
-
-Iteration 16: 223433.864 ops/s
- ·stack:
-
-Iteration 17: 169752.665 ops/s
- ·stack:
-
-Iteration 18: 165217.191 ops/s
- ·stack:
-
-Iteration 19: 191038.476 ops/s
- ·stack:
-
-Iteration 20: 196335.579 ops/s
- ·stack:
-
-
-
-Result "org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test":
-  167433.864 ±(99.9%) 43530.883 ops/s [Average]
-  (min, avg, max) = (41926.816, 167433.864, 223433.864), stdev = 50130.230
-  CI (99.9%): [123902.981, 210964.748] (assumes normal distribution)
-
-Secondary result "org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test:·stack":
-Stack profiler:
-
-....[Thread state distributions]....................................................................
- 78.9% RUNNABLE
- 20.0% TIMED_WAITING
-  1.1% WAITING
-
-....[Thread state: RUNNABLE]........................................................................
- 59.8%  75.8% org.rocksdb.RocksDB.put
- 16.5%  20.9% org.rocksdb.RocksDB.get
-  0.7%   0.9% java.io.UnixFileSystem.delete0
-  0.7%   0.9% org.rocksdb.RocksDB.disposeInternal
-  0.3%   0.4% java.lang.Long.formatUnsignedLong0
-  0.1%   0.2% org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test
-  0.1%   0.1% java.lang.Long.toUnsignedString0
-  0.1%   0.1% org.apache.hadoop.ozone.genesis.generated.BenchMarkRocksDbStore_test_jmhTest.test_thrpt_jmhStub
-  0.0%   0.1% java.lang.Object.clone
-  0.0%   0.0% java.lang.Thread.currentThread
-  0.4%   0.5% <other>
-
-....[Thread state: TIMED_WAITING]...................................................................
- 20.0% 100.0% java.lang.Object.wait
-
-....[Thread state: WAITING].........................................................................
-  1.1% 100.0% jdk.internal.misc.Unsafe.park
-
-
-
-# Run complete. Total time: 00:00:38
-
-Benchmark (backgroundThreads) (blockSize) (maxBackgroundFlushes) (maxBytesForLevelBase) (maxOpenFiles) (maxWriteBufferNumber) (writeBufferSize) Mode Cnt Score Error Units
-BenchMarkRocksDbStore.test 4 8 4 512 5000 16 64 thrpt 20 167433.864 ± 43530.883 ops/s
-BenchMarkRocksDbStore.test:·stack 4 8 4 512 5000 16 64 thrpt NaN ---
-```
+```
\ No newline at end of file
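The section deleted above documented how `ozone genesis` drove JMH against isolated code paths such as the RocksDB-backed key-value store. For readers who relied on it, the same kind of measurement can still be reproduced with a standalone JMH benchmark outside the Ozone build. The sketch below is illustrative only: it assumes `jmh-core`, `jmh-generator-annprocess`, and `rocksdbjni` are on the classpath, and its class name, parameter names, and sizes are not part of Ozone.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.concurrent.ThreadLocalRandom;

import org.openjdk.jmh.annotations.*;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

/** Illustrative stand-in for the removed BenchMarkRocksDbStore (hypothetical names). */
@State(Scope.Thread)
public class RocksDbPutBenchmark {

  private RocksDB db;
  private byte[] value;

  @Param({"8", "64", "256"}) // value size in KB; illustrative values
  private int valueSizeKb;

  @Setup(Level.Trial)
  public void open() throws Exception {
    RocksDB.loadLibrary();
    Options options = new Options().setCreateIfMissing(true);
    // Each trial writes into a fresh temporary database directory.
    db = RocksDB.open(options, Files.createTempDirectory("jmh-rocksdb").toString());
    value = new byte[valueSizeKb * 1024];
    ThreadLocalRandom.current().nextBytes(value);
  }

  @Benchmark
  public void put() throws Exception {
    // Random keys so each invocation measures an insert, not an overwrite.
    byte[] key = Long.toString(ThreadLocalRandom.current().nextLong())
        .getBytes(StandardCharsets.UTF_8);
    db.put(key, value);
  }

  @TearDown(Level.Trial)
  public void close() {
    db.close();
  }
}
```

As in the deleted Genesis classes, all mutable state lives in a JMH `@State` object so the generated harness, not the benchmark author, controls thread scoping.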
diff --git a/hadoop-hdds/docs/content/tools/TestTools.zh.md b/hadoop-hdds/docs/content/tools/TestTools.zh.md
index c6dfd2cf6160..df02389c8a08 100644
--- a/hadoop-hdds/docs/content/tools/TestTools.zh.md
+++ b/hadoop-hdds/docs/content/tools/TestTools.zh.md
@@ -107,131 +107,4 @@
 Average Time spent in key write: 00:00:10,894
 Total bytes written: 10240000
 Total Execution time: 00:00:16,898
 ***********************
-```
-
-## Genesis
-
-Genesis 是一个微型的基准测试工具，它也包含在发行包中（`ozone genesis`），但是它不需要一个真实的集群，而是采用一种隔离的方法测试不同部分的代码（比如，将数据存储到本地基于 RocksDB 的键值存储中）。
-
-运行示例:
-
-```
- ozone genesis -benchmark=BenchMarkRocksDbStore
-# JMH version: 1.19
-# VM version: JDK 11.0.1, VM 11.0.1+13-LTS
-# VM invoker: /usr/lib/jvm/java-11-openjdk-11.0.1.13-3.el7_6.x86_64/bin/java
-# VM options: -Dproc_genesis -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender
-# Warmup: 2 iterations, 1 s each
-# Measurement: 20 iterations, 1 s each
-# Timeout: 10 min per iteration
-# Threads: 4 threads, will synchronize iterations
-# Benchmark mode: Throughput, ops/time
-# Benchmark: org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test
-# Parameters: (backgroundThreads = 4, blockSize = 8, maxBackgroundFlushes = 4, maxBytesForLevelBase = 512, maxOpenFiles = 5000, maxWriteBufferNumber = 16, writeBufferSize = 64)
-
-# Run progress: 0.00% complete, ETA 00:00:22
-# Fork: 1 of 1
-# Warmup Iteration 1: 213775.360 ops/s
-# Warmup Iteration 2: 32041.633 ops/s
-Iteration 1: 196342.348 ops/s
- ·stack:
-
-Iteration 2: 41926.816 ops/s
- ·stack:
-
-Iteration 3: 210433.231 ops/s
- ·stack:
-
-Iteration 4: 46941.951 ops/s
- ·stack:
-
-Iteration 5: 212825.884 ops/s
- ·stack:
-
-Iteration 6: 145914.351 ops/s
- ·stack:
-
-Iteration 7: 141838.469 ops/s
- ·stack:
-
-Iteration 8: 205334.438 ops/s
- ·stack:
-
-Iteration 9: 163709.519 ops/s
- ·stack:
-
-Iteration 10: 162494.608 ops/s
- ·stack:
-
-Iteration 11: 199155.793 ops/s
- ·stack:
-
-Iteration 12: 209679.298 ops/s
- ·stack:
-
-Iteration 13: 193787.574 ops/s
- ·stack:
-
-Iteration 14: 127004.147 ops/s
- ·stack:
-
-Iteration 15: 145511.080 ops/s
- ·stack:
-
-Iteration 16: 223433.864 ops/s
- ·stack:
-
-Iteration 17: 169752.665 ops/s
- ·stack:
-
-Iteration 18: 165217.191 ops/s
- ·stack:
-
-Iteration 19: 191038.476 ops/s
- ·stack:
-
-Iteration 20: 196335.579 ops/s
- ·stack:
-
-
-
-Result "org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test":
-  167433.864 ±(99.9%) 43530.883 ops/s [Average]
-  (min, avg, max) = (41926.816, 167433.864, 223433.864), stdev = 50130.230
-  CI (99.9%): [123902.981, 210964.748] (assumes normal distribution)
-
-Secondary result "org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test:·stack":
-Stack profiler:
-
-....[Thread state distributions]....................................................................
- 78.9% RUNNABLE
- 20.0% TIMED_WAITING
-  1.1% WAITING
-
-....[Thread state: RUNNABLE]........................................................................
- 59.8%  75.8% org.rocksdb.RocksDB.put
- 16.5%  20.9% org.rocksdb.RocksDB.get
-  0.7%   0.9% java.io.UnixFileSystem.delete0
-  0.7%   0.9% org.rocksdb.RocksDB.disposeInternal
-  0.3%   0.4% java.lang.Long.formatUnsignedLong0
-  0.1%   0.2% org.apache.hadoop.ozone.genesis.BenchMarkRocksDbStore.test
-  0.1%   0.1% java.lang.Long.toUnsignedString0
-  0.1%   0.1% org.apache.hadoop.ozone.genesis.generated.BenchMarkRocksDbStore_test_jmhTest.test_thrpt_jmhStub
-  0.0%   0.1% java.lang.Object.clone
-  0.0%   0.0% java.lang.Thread.currentThread
-  0.4%   0.5% <other>
-
-....[Thread state: TIMED_WAITING]...................................................................
- 20.0% 100.0% java.lang.Object.wait
-
-....[Thread state: WAITING].........................................................................
-  1.1% 100.0% jdk.internal.misc.Unsafe.park
-
-
-
-# Run complete. Total time: 00:00:38
-
-Benchmark (backgroundThreads) (blockSize) (maxBackgroundFlushes) (maxBytesForLevelBase) (maxOpenFiles) (maxWriteBufferNumber) (writeBufferSize) Mode Cnt Score Error Units
-BenchMarkRocksDbStore.test 4 8 4 512 5000 16 64 thrpt 20 167433.864 ± 43530.883 ops/s
-BenchMarkRocksDbStore.test:·stack 4 8 4 512 5000 16 64 thrpt NaN ---
-```
+```
\ No newline at end of file
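The run settings visible in the deleted sample output above (2 warmup and 20 measurement iterations of 1 s each, 4 threads, 1 fork, the `·stack` profiler) map directly onto JMH's programmatic API, which the removed Genesis wrapper drove internally. Below is a minimal hedged launcher; it assumes the `RocksDbPutBenchmark` class from the earlier sketch, and the option values mirror the sample output rather than any Ozone default.

```java
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.runner.options.TimeValue;

/** Hypothetical launcher reproducing the run configuration shown in the deleted docs. */
public class BenchmarkLauncher {
  public static void main(String[] args) throws RunnerException {
    Options options = new OptionsBuilder()
        .include("RocksDbPutBenchmark")    // regex matched against benchmark names
        .warmupIterations(2)
        .warmupTime(TimeValue.seconds(1))
        .measurementIterations(20)
        .measurementTime(TimeValue.seconds(1))
        .threads(4)
        .forks(1)
        .addProfiler(StackProfiler.class)  // source of the ·stack lines in the output
        .build();
    new Runner(options).run();
  }
}
```

Alternatively, `org.openjdk.jmh.Main.main(args)` — the delegating `main` kept by each of the benchmark classes deleted below — remains a usable entry point when the annotation-processed jar is run directly.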
diff --git a/hadoop-hdds/docs/content/tools/_index.md b/hadoop-hdds/docs/content/tools/_index.md
index 090ba357b4b3..12dd7f4faa13 100644
--- a/hadoop-hdds/docs/content/tools/_index.md
+++ b/hadoop-hdds/docs/content/tools/_index.md
@@ -62,6 +62,5 @@ Admin commands:
 
 Test tools:
 
   * **freon** - Runs the ozone load generator.
-  * **genesis** - Developer Only, Ozone micro-benchmark application.
 
For more information see the following subpages: \ No newline at end of file diff --git a/hadoop-hdds/docs/content/tools/_index.zh.md b/hadoop-hdds/docs/content/tools/_index.zh.md index 43f4587e4749..a8e91427193e 100644 --- a/hadoop-hdds/docs/content/tools/_index.zh.md +++ b/hadoop-hdds/docs/content/tools/_index.zh.md @@ -57,6 +57,5 @@ Ozone 有一系列管理 Ozone 的命令行工具。 测试工具: * **freon** - 运行 Ozone 负载生成器。 - * **genesis** - Ozone 的 benchmark 应用,仅供开发者使用。 更多信息请参见下面的子页面: \ No newline at end of file diff --git a/hadoop-hdds/server-scm/pom.xml b/hadoop-hdds/server-scm/pom.xml index fbbb9cd46af8..0b5071a85851 100644 --- a/hadoop-hdds/server-scm/pom.xml +++ b/hadoop-hdds/server-scm/pom.xml @@ -106,11 +106,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> assertj-core test - - org.openjdk.jmh - jmh-generator-annprocess - test - org.mockito mockito-core diff --git a/hadoop-ozone/dev-support/checks/coverage.sh b/hadoop-ozone/dev-support/checks/coverage.sh index 75d6126483e7..dee0db9e1256 100755 --- a/hadoop-ozone/dev-support/checks/coverage.sh +++ b/hadoop-ozone/dev-support/checks/coverage.sh @@ -51,7 +51,6 @@ find target/coverage-classes -name proto -type d | xargs rm -rf find target/coverage-classes -name generated -type d | xargs rm -rf find target/coverage-classes -name v1 -type d | xargs rm -rf find target/coverage-classes -name freon -type d | xargs rm -rf -find target/coverage-classes -name genesis -type d | xargs rm -rf #generate the reports jacoco report "$REPORT_DIR/jacoco-all.exec" --classfiles target/coverage-classes --html "$REPORT_DIR/all" --xml "$REPORT_DIR/all.xml" diff --git a/hadoop-ozone/dist/src/shell/ozone/ozone b/hadoop-ozone/dist/src/shell/ozone/ozone index 777de10fd9de..3b5ac09a1aa8 100755 --- a/hadoop-ozone/dist/src/shell/ozone/ozone +++ b/hadoop-ozone/dist/src/shell/ozone/ozone @@ -43,7 +43,6 @@ function ozone_usage ozone_add_subcommand "freon" client "runs an ozone data generator" ozone_add_subcommand "fs" client "run a filesystem command on Ozone file system. Equivalent to 'hadoop fs'" ozone_add_subcommand "genconf" client "generate minimally required ozone configs and output to ozone-site.xml in specified path" - ozone_add_subcommand "genesis" client "runs a collection of ozone benchmarks to help with tuning." ozone_add_subcommand "getconf" client "get ozone config values from configuration" ozone_add_subcommand "jmxget" admin "get JMX exported values from NameNode or DataNode." ozone_add_subcommand "om" daemon "Ozone Manager" @@ -133,22 +132,6 @@ function ozonecmd_case OZONE_FREON_OPTS="${OZONE_FREON_OPTS}" OZONE_RUN_ARTIFACT_NAME="ozone-tools" ;; - genesis) - ARTIFACT_LIB_DIR="${OZONE_HOME}/share/ozone/lib/ozone-tools" - mkdir -p "$ARTIFACT_LIB_DIR" - if [[ ! -f "$ARTIFACT_LIB_DIR/jmh-core-1.23.jar" ]]; then - echo "jmh-core jar is missing from $ARTIFACT_LIB_DIR, trying to download from maven central (License: GPL + classpath exception)" - curl -o "$ARTIFACT_LIB_DIR/jmh-core-1.23.jar" https://repo1.maven.org/maven2/org/openjdk/jmh/jmh-core/1.23/jmh-core-1.23.jar - fi - - if [[ ! 
-f "$ARTIFACT_LIB_DIR/jopt-simple-4.6.jar" ]]; then - echo "jopt jar is missing from $ARTIFACT_LIB_DIR, trying to download from maven central (License: MIT License)" - curl -o "$ARTIFACT_LIB_DIR/jopt-simple-4.6.jar" https://repo1.maven.org/maven2/net/sf/jopt-simple/jopt-simple/4.6/jopt-simple-4.6.jar - fi - - OZONE_CLASSNAME=org.apache.hadoop.ozone.genesis.Genesis - OZONE_RUN_ARTIFACT_NAME="ozone-tools" - ;; getconf) OZONE_CLASSNAME=org.apache.hadoop.ozone.conf.OzoneGetConf; OZONE_RUN_ARTIFACT_NAME="ozone-tools" diff --git a/hadoop-ozone/insight/pom.xml b/hadoop-ozone/insight/pom.xml index e106286610da..826b902ed03c 100644 --- a/hadoop-ozone/insight/pom.xml +++ b/hadoop-ozone/insight/pom.xml @@ -42,7 +42,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> hdds-test-utils test - org.apache.ozone hdds-server-scm diff --git a/hadoop-ozone/integration-test/pom.xml b/hadoop-ozone/integration-test/pom.xml index 366b3a38464c..57960d0c8bf0 100644 --- a/hadoop-ozone/integration-test/pom.xml +++ b/hadoop-ozone/integration-test/pom.xml @@ -123,11 +123,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> junit-platform-launcher test - - org.openjdk.jmh - jmh-generator-annprocess - test - org.mockito mockito-core diff --git a/hadoop-ozone/tools/dev-support/findbugsExcludeFile.xml b/hadoop-ozone/tools/dev-support/findbugsExcludeFile.xml index 76127b7f06f3..d263a069b6d8 100644 --- a/hadoop-ozone/tools/dev-support/findbugsExcludeFile.xml +++ b/hadoop-ozone/tools/dev-support/findbugsExcludeFile.xml @@ -13,10 +13,6 @@ limitations under the License. See accompanying LICENSE file. --> - - - - diff --git a/hadoop-ozone/tools/pom.xml b/hadoop-ozone/tools/pom.xml index f2fc0fc4d259..47501795307b 100644 --- a/hadoop-ozone/tools/pom.xml +++ b/hadoop-ozone/tools/pom.xml @@ -42,11 +42,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> hdds-test-utils test - - - org.apache.ozone - hdds-server-scm - org.apache.ozone hdds-tools @@ -93,11 +88,6 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> javax.activation activation - - org.openjdk.jmh - jmh-generator-annprocess - provided - io.dropwizard.metrics metrics-core diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCBatch.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCBatch.java deleted file mode 100644 index 69120dae7ac6..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCBatch.java +++ /dev/null @@ -1,141 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.ozone.genesis; - -import java.nio.ByteBuffer; - -import org.apache.commons.lang3.RandomUtils; -import org.apache.hadoop.util.NativeCRC32Wrapper; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Fork; -import org.openjdk.jmh.annotations.Level; -import org.openjdk.jmh.annotations.Measurement; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.Param; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Threads; -import org.openjdk.jmh.annotations.Warmup; -import org.openjdk.jmh.infra.Blackhole; - -import static java.util.concurrent.TimeUnit.MILLISECONDS; - -/** - * Class to benchmark hadoop native CRC implementations in batch node. - * - * The hadoop native libraries must be available to run this test. libhadoop.so - * is not currently bundled with Ozone, so it needs to be obtained from a Hadoop - * build and the test needs to be executed on a compatible OS (ie Linux x86): - * - * ozone --jvmargs -Djava.library.path=/home/sodonnell/native genesis -b - * BenchmarkCRCBatch - */ -public class BenchMarkCRCBatch { - - private static int dataSize = 64 * 1024 * 1024; - - /** - * Benchmark state. - */ - @State(Scope.Thread) - public static class BenchmarkState { - - private final ByteBuffer data = ByteBuffer.allocate(dataSize); - - @Param({"512", "1024", "2048", "4096", "32768", "1048576"}) - private int checksumSize; - - @Param({"nativeCRC32", "nativeCRC32C"}) - private String crcImpl; - - private byte[] checksumBuffer; - private int nativeChecksumType = 1; - - public ByteBuffer data() { - return data; - } - - public int checksumSize() { - return checksumSize; - } - - public String crcImpl() { - return crcImpl; - } - - @edu.umd.cs.findbugs.annotations.SuppressFBWarnings( - value="EI_EXPOSE_REP", - justification="The intent is to expose this variable") - public byte[] checksumBuffer() { - return checksumBuffer; - } - - public int nativeChecksumType() { - return nativeChecksumType; - } - - @Setup(Level.Trial) - public void setUp() { - switch (crcImpl) { - case "nativeCRC32": - if (NativeCRC32Wrapper.isAvailable()) { - nativeChecksumType = NativeCRC32Wrapper.CHECKSUM_CRC32; - checksumBuffer = new byte[4 * dataSize / checksumSize]; - } else { - throw new RuntimeException("Native library is not available"); - } - break; - case "nativeCRC32C": - if (NativeCRC32Wrapper.isAvailable()) { - nativeChecksumType = NativeCRC32Wrapper.CHECKSUM_CRC32C; - checksumBuffer = new byte[4 * dataSize / checksumSize]; - } else { - throw new RuntimeException("Native library is not available"); - } - break; - default: - } - data.put(RandomUtils.nextBytes(data.remaining())); - } - } - - @Benchmark - @Threads(1) - @Warmup(iterations = 3, time = 1000, timeUnit = MILLISECONDS) - @Fork(value = 1, warmups = 0) - @Measurement(iterations = 5, time = 2000, timeUnit = MILLISECONDS) - @BenchmarkMode(Mode.Throughput) - public void runCRCNativeBatch(Blackhole blackhole, BenchmarkState state) { - if (state.crcImpl.equals("nativeCRC32") - || state.crcImpl.equals("nativeCRC32C")) { - NativeCRC32Wrapper.calculateChunkedSumsByteArray( - state.checksumSize, state.nativeChecksumType, state.checksumBuffer, - 0, state.data.array(), 0, state.data.capacity()); - blackhole.consume(state.checksumBuffer); - } else { - throw new RuntimeException("Batch mode not available for " - + state.crcImpl); - } - 
} - - public static void main(String[] args) throws Exception { - org.openjdk.jmh.Main.main(args); - } -} diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCStreaming.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCStreaming.java deleted file mode 100644 index 669d858e3d48..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkCRCStreaming.java +++ /dev/null @@ -1,173 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.ozone.genesis; - -import java.nio.ByteBuffer; - -import org.apache.commons.lang3.RandomUtils; -import org.apache.hadoop.ozone.common.ChecksumByteBuffer; -import org.apache.hadoop.ozone.common.ChecksumByteBufferFactory; -import org.apache.hadoop.ozone.common.ChecksumByteBufferImpl; -import org.apache.hadoop.ozone.common.NativeCheckSumCRC32; -import org.apache.hadoop.ozone.common.PureJavaCrc32ByteBuffer; -import org.apache.hadoop.ozone.common.PureJavaCrc32CByteBuffer; -import org.apache.hadoop.util.NativeCRC32Wrapper; -import org.apache.hadoop.util.PureJavaCrc32; -import org.apache.hadoop.util.PureJavaCrc32C; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.BenchmarkMode; -import org.openjdk.jmh.annotations.Fork; -import org.openjdk.jmh.annotations.Level; -import org.openjdk.jmh.annotations.Measurement; -import org.openjdk.jmh.annotations.Mode; -import org.openjdk.jmh.annotations.Param; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.Threads; -import org.openjdk.jmh.annotations.Warmup; -import org.openjdk.jmh.infra.Blackhole; - -import java.util.zip.CRC32; - -import static java.util.concurrent.TimeUnit.MILLISECONDS; - -/** - * Class to benchmark various CRC implementations. This can be executed via - * - * ozone genesis -b BenchmarkCRC - * - * However there are some points to keep in mind. java.util.zip.CRC32C is not - * available until Java 9, therefore if the JVM has a lower version than 9, that - * implementation will not be tested. - * - * The hadoop native libraries will only be tested if libhadoop.so is found on - * the "-Djava.library.path". libhadoop.so is not currently bundled with Ozone, - * so it needs to be obtained from a Hadoop build and the test needs to be - * executed on a compatible OS (ie Linux x86): - * - * ozone --jvmargs -Djava.library.path=/home/sodonnell/native genesis -b - * BenchmarkCRC - */ -public class BenchMarkCRCStreaming { - - private static int dataSize = 64 * 1024 * 1024; - - /** - * Benchmark state. 
- */ - @State(Scope.Thread) - public static class BenchmarkState { - - private final ByteBuffer data = ByteBuffer.allocate(dataSize); - - @Param({"512", "1024", "2048", "4096", "32768", "1048576"}) - private int checksumSize; - - @Param({"pureCRC32", "pureCRC32C", "hadoopCRC32C", "hadoopCRC32", - "zipCRC32", "zipCRC32C", "nativeCRC32", "nativeCRC32C"}) - private String crcImpl; - - private ChecksumByteBuffer checksum; - - public ChecksumByteBuffer checksum() { - return checksum; - } - - public String crcImpl() { - return crcImpl; - } - - public int checksumSize() { - return checksumSize; - } - - @Setup(Level.Trial) - public void setUp() { - switch (crcImpl) { - case "pureCRC32": - checksum = new PureJavaCrc32ByteBuffer(); - break; - case "pureCRC32C": - checksum = new PureJavaCrc32CByteBuffer(); - break; - case "hadoopCRC32": - checksum = new ChecksumByteBufferImpl(new PureJavaCrc32()); - break; - case "hadoopCRC32C": - checksum = new ChecksumByteBufferImpl(new PureJavaCrc32C()); - break; - case "zipCRC32": - checksum = new ChecksumByteBufferImpl(new CRC32()); - break; - case "zipCRC32C": - try { - checksum = new ChecksumByteBufferImpl( - ChecksumByteBufferFactory.Java9Crc32CFactory.createChecksum()); - } catch (Throwable e) { - throw new RuntimeException("zipCRC32C is not available pre Java 9"); - } - break; - case "nativeCRC32": - if (NativeCRC32Wrapper.isAvailable()) { - checksum = new ChecksumByteBufferImpl(new NativeCheckSumCRC32( - NativeCRC32Wrapper.CHECKSUM_CRC32, checksumSize)); - } else { - throw new RuntimeException("Native library is not available"); - } - break; - case "nativeCRC32C": - if (NativeCRC32Wrapper.isAvailable()) { - checksum = new ChecksumByteBufferImpl(new NativeCheckSumCRC32( - NativeCRC32Wrapper.CHECKSUM_CRC32C, checksumSize)); - } else { - throw new RuntimeException("Native library is not available"); - } - break; - default: - } - data.clear(); - data.put(RandomUtils.nextBytes(data.remaining())); - } - } - - @Benchmark - @Threads(1) - @Warmup(iterations = 3, time = 1000, timeUnit = MILLISECONDS) - @Fork(value = 1, warmups = 0) - @Measurement(iterations = 5, time = 2000, timeUnit = MILLISECONDS) - @BenchmarkMode(Mode.Throughput) - public void runCRC(Blackhole blackhole, BenchmarkState state) { - ByteBuffer data = state.data; - data.clear(); - ChecksumByteBuffer csum = state.checksum; - int bytesPerCheckSum = state.checksumSize; - - for (int i=0; i ids) throws IOException { - Objects.requireNonNull(ids, "ids == null"); - Preconditions.checkArgument(ids.iterator().hasNext()); - List dns = new ArrayList<>(); - ids.forEach(dns::add); - final Pipeline pipeline = Pipeline.newBuilder() - .setState(Pipeline.PipelineState.OPEN) - .setId(PipelineID.randomId()) - .setReplicationConfig( - new StandaloneReplicationConfig(ReplicationFactor.ONE)) - .setNodes(dns) - .build(); - return pipeline; - } - - public static Pipeline createSingleNodePipeline(String containerName) - throws IOException { - return createPipeline(containerName, 1); - } - - /** - * Create a pipeline with single node replica. - * - * @return Pipeline with single node in it. 
- * @throws IOException - */ - public static Pipeline createPipeline(String containerName, int numNodes) - throws IOException { - Preconditions.checkArgument(numNodes >= 1); - final List ids = new ArrayList<>(numNodes); - for (int i = 0; i < numNodes; i++) { - ids.add(GenesisUtil.createDatanodeDetails(UUID.randomUUID())); - } - return createPipeline(containerName, ids); - } - - @Setup(Level.Trial) - public void initialize() throws IOException { - stateMap = new ContainerStateMap(); - runCount = new AtomicInteger(0); - Pipeline pipeline = createSingleNodePipeline(UUID.randomUUID().toString()); - Preconditions.checkNotNull(pipeline, "Pipeline cannot be null."); - int currentCount = 1; - for (int x = 1; x < 1000; x++) { - try { - ContainerInfo containerInfo = new ContainerInfo.Builder() - .setState(CLOSED) - .setPipelineID(pipeline.getId()) - .setReplicationConfig(pipeline.getReplicationConfig()) - .setUsedBytes(0) - .setNumberOfKeys(0) - .setStateEnterTime(Time.now()) - .setOwner(OzoneConsts.OZONE) - .setContainerID(x) - .setDeleteTransactionId(0) - .build(); - stateMap.addContainer(containerInfo); - currentCount++; - } catch (SCMException e) { - e.printStackTrace(); - } - } - for (int y = currentCount; y < 50000; y++) { - try { - ContainerInfo containerInfo = new ContainerInfo.Builder() - .setState(OPEN) - .setPipelineID(pipeline.getId()) - .setReplicationConfig(pipeline.getReplicationConfig()) - .setUsedBytes(0) - .setNumberOfKeys(0) - .setStateEnterTime(Time.now()) - .setOwner(OzoneConsts.OZONE) - .setContainerID(y) - .setDeleteTransactionId(0) - .build(); - stateMap.addContainer(containerInfo); - currentCount++; - } catch (SCMException e) { - e.printStackTrace(); - } - } - try { - ContainerInfo containerInfo = new ContainerInfo.Builder() - .setState(OPEN) - .setPipelineID(pipeline.getId()) - .setReplicationConfig(pipeline.getReplicationConfig()) - .setUsedBytes(0) - .setNumberOfKeys(0) - .setStateEnterTime(Time.now()) - .setOwner(OzoneConsts.OZONE) - .setContainerID(currentCount++) - .setDeleteTransactionId(0) - .build(); - stateMap.addContainer(containerInfo); - } catch (SCMException e) { - e.printStackTrace(); - } - - containerID = new AtomicInteger(currentCount++); - - } - - @Benchmark - public void createContainerBenchMark(BenchMarkContainerStateMap state, - Blackhole bh) throws IOException { - ContainerInfo containerInfo = getContainerInfo(state); - state.stateMap.addContainer(containerInfo); - } - - private ContainerInfo getContainerInfo(BenchMarkContainerStateMap state) - throws IOException { - Pipeline pipeline = createSingleNodePipeline(UUID.randomUUID().toString()); - int cid = state.containerID.incrementAndGet(); - return new ContainerInfo.Builder() - .setState(CLOSED) - .setPipelineID(pipeline.getId()) - .setReplicationConfig(pipeline.getReplicationConfig()) - .setUsedBytes(0) - .setNumberOfKeys(0) - .setStateEnterTime(Time.now()) - .setOwner(OzoneConsts.OZONE) - .setContainerID(cid) - .setDeleteTransactionId(0) - .build(); - } - - @Benchmark - public void getMatchingContainerBenchMark(BenchMarkContainerStateMap state, - Blackhole bh) throws IOException { - if(runCount.incrementAndGet() % errorFrequency == 0) { - state.stateMap.addContainer(getContainerInfo(state)); - } - bh.consume(state.stateMap - .getMatchingContainerIDs(OPEN, OzoneConsts.OZONE, - ReplicationConfig.fromProtoTypeAndFactor( - ReplicationType.STAND_ALONE, ReplicationFactor.ONE))); - } -} diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java 
b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java deleted file mode 100644 index c00e27effc44..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java +++ /dev/null @@ -1,339 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - *

- * http://www.apache.org/licenses/LICENSE-2.0 - *

- * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.ozone.genesis; - -import java.io.File; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.Random; -import java.util.UUID; -import java.util.concurrent.atomic.AtomicInteger; - -import org.apache.hadoop.hdds.HddsUtils; -import org.apache.hadoop.hdds.client.BlockID; -import org.apache.hadoop.hdds.conf.OzoneConfiguration; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.GetBlockRequestProto; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.PutBlockRequestProto; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkRequestProto; -import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.WriteChunkRequestProto; -import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics; -import org.apache.hadoop.ozone.container.common.impl.ContainerSet; -import org.apache.hadoop.ozone.container.common.impl.HddsDispatcher; -import org.apache.hadoop.ozone.container.common.interfaces.Handler; -import org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.DatanodeStates; -import org.apache.hadoop.ozone.container.common.statemachine.StateContext; -import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet; - -import com.google.common.collect.Maps; -import org.apache.commons.codec.digest.DigestUtils; -import org.apache.commons.io.FileUtils; -import org.apache.commons.lang3.RandomStringUtils; -import org.apache.commons.lang3.RandomUtils; -import org.apache.hadoop.ozone.container.common.volume.StorageVolume; -import org.apache.ratis.thirdparty.com.google.protobuf.ByteString; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Level; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.TearDown; - -/** - * Benchmarks DatanodeDispatcher class. 
- */ -@State(Scope.Benchmark) -public class BenchMarkDatanodeDispatcher { - - private String baseDir; - private String datanodeUuid; - private HddsDispatcher dispatcher; - private ByteString data; - private Random random; - private AtomicInteger containerCount; - private AtomicInteger keyCount; - private AtomicInteger chunkCount; - - private static final int INIT_CONTAINERS = 100; - private static final int INIT_KEYS = 50; - private static final int INIT_CHUNKS = 100; - public static final int CHUNK_SIZE = 1048576; - - private List containers; - private List keys; - private List chunks; - private MutableVolumeSet volumeSet; - - @Setup(Level.Trial) - public void initialize() throws IOException { - datanodeUuid = UUID.randomUUID().toString(); - - // 1 MB of data - data = ByteString.copyFromUtf8(RandomStringUtils.randomAscii(CHUNK_SIZE)); - random = new Random(); - OzoneConfiguration conf = new OzoneConfiguration(); - baseDir = System.getProperty("java.io.tmpdir") + File.separator + - datanodeUuid; - - // data directory - conf.set("dfs.datanode.data.dir", baseDir + File.separator + "data"); - - //We need 100 * container size minimum space - conf.set("ozone.scm.container.size", "10MB"); - - ContainerSet containerSet = new ContainerSet(); - volumeSet = new MutableVolumeSet(datanodeUuid, conf, null, - StorageVolume.VolumeType.DATA_VOLUME, null); - StateContext context = new StateContext( - conf, DatanodeStates.RUNNING, null); - ContainerMetrics metrics = ContainerMetrics.create(conf); - Map handlers = Maps.newHashMap(); - for (ContainerProtos.ContainerType containerType : - ContainerProtos.ContainerType.values()) { - Handler handler = Handler.getHandlerForContainerType( - containerType, conf, "datanodeid", - containerSet, volumeSet, metrics, - c -> {}); - handler.setClusterID("scm"); - handlers.put(containerType, handler); - } - dispatcher = new HddsDispatcher(conf, containerSet, volumeSet, handlers, - context, metrics, null); - dispatcher.init(); - - containerCount = new AtomicInteger(); - keyCount = new AtomicInteger(); - chunkCount = new AtomicInteger(); - - containers = new ArrayList<>(); - keys = new ArrayList<>(); - chunks = new ArrayList<>(); - - // Create containers - for (int x = 0; x < INIT_CONTAINERS; x++) { - long containerID = HddsUtils.getTime() + x; - ContainerCommandRequestProto req = getCreateContainerCommand(containerID); - dispatcher.dispatch(req, null); - containers.add(containerID); - containerCount.getAndIncrement(); - } - - for (int x = 0; x < INIT_KEYS; x++) { - keys.add(HddsUtils.getTime()+x); - } - - for (int x = 0; x < INIT_CHUNKS; x++) { - chunks.add("chunk-" + x); - } - - // Add chunk and keys to the containers - for (int x = 0; x < INIT_KEYS; x++) { - String chunkName = chunks.get(x); - chunkCount.getAndIncrement(); - long key = keys.get(x); - keyCount.getAndIncrement(); - for (int y = 0; y < INIT_CONTAINERS; y++) { - long containerID = containers.get(y); - BlockID blockID = new BlockID(containerID, key); - dispatcher - .dispatch(getPutBlockCommand(blockID, chunkName), null); - dispatcher.dispatch(getWriteChunkCommand(blockID, chunkName), null); - } - } - } - - @TearDown(Level.Trial) - public void cleanup() throws IOException { - volumeSet.shutdown(); - FileUtils.deleteDirectory(new File(baseDir)); - } - - private ContainerCommandRequestProto getCreateContainerCommand( - long containerID) { - ContainerCommandRequestProto.Builder request = - ContainerCommandRequestProto.newBuilder(); - request.setCmdType(ContainerProtos.Type.CreateContainer); - 
request.setContainerID(containerID); - request.setCreateContainer( - ContainerProtos.CreateContainerRequestProto.getDefaultInstance()); - request.setDatanodeUuid(datanodeUuid); - request.setTraceID(containerID + "-trace"); - return request.build(); - } - - private ContainerCommandRequestProto getWriteChunkCommand( - BlockID blockID, String chunkName) { - WriteChunkRequestProto.Builder writeChunkRequest = WriteChunkRequestProto - .newBuilder() - .setBlockID(blockID.getDatanodeBlockIDProtobuf()) - .setChunkData(getChunkInfo(blockID, chunkName)) - .setData(data); - - ContainerCommandRequestProto.Builder request = ContainerCommandRequestProto - .newBuilder(); - request.setCmdType(ContainerProtos.Type.WriteChunk) - .setContainerID(blockID.getContainerID()) - .setTraceID(getBlockTraceID(blockID)) - .setDatanodeUuid(datanodeUuid) - .setWriteChunk(writeChunkRequest); - return request.build(); - } - - private ContainerCommandRequestProto getReadChunkCommand( - BlockID blockID, String chunkName) { - ReadChunkRequestProto.Builder readChunkRequest = ReadChunkRequestProto - .newBuilder() - .setBlockID(blockID.getDatanodeBlockIDProtobuf()) - .setChunkData(getChunkInfo(blockID, chunkName)) - .setReadChunkVersion(ContainerProtos.ReadChunkVersion.V1); - - ContainerCommandRequestProto.Builder request = ContainerCommandRequestProto - .newBuilder(); - request.setCmdType(ContainerProtos.Type.ReadChunk) - .setContainerID(blockID.getContainerID()) - .setTraceID(getBlockTraceID(blockID)) - .setDatanodeUuid(datanodeUuid) - .setReadChunk(readChunkRequest); - return request.build(); - } - - private ContainerProtos.ChunkInfo getChunkInfo( - BlockID blockID, String chunkName) { - ContainerProtos.ChunkInfo.Builder builder = - ChunkInfo.newBuilder() - .setChunkName( - DigestUtils.md5Hex(chunkName) - + "_stream_" + blockID.getContainerID() + "_block_" - + blockID.getLocalID()) - .setChecksumData( - ChecksumData.newBuilder() - .setBytesPerChecksum(4) - .setType(ChecksumType.CRC32) - .build()) - .setOffset(0).setLen(data.size()); - return builder.build(); - } - - private ContainerCommandRequestProto getPutBlockCommand( - BlockID blockID, String chunkKey) { - PutBlockRequestProto.Builder putBlockRequest = PutBlockRequestProto - .newBuilder() - .setBlockData(getBlockData(blockID, chunkKey)); - - ContainerCommandRequestProto.Builder request = ContainerCommandRequestProto - .newBuilder(); - request.setCmdType(ContainerProtos.Type.PutBlock) - .setContainerID(blockID.getContainerID()) - .setTraceID(getBlockTraceID(blockID)) - .setDatanodeUuid(datanodeUuid) - .setPutBlock(putBlockRequest); - return request.build(); - } - - private ContainerCommandRequestProto getGetBlockCommand(BlockID blockID) { - GetBlockRequestProto.Builder readBlockRequest = - GetBlockRequestProto.newBuilder() - .setBlockID(blockID.getDatanodeBlockIDProtobuf()); - ContainerCommandRequestProto.Builder request = ContainerCommandRequestProto - .newBuilder() - .setCmdType(ContainerProtos.Type.GetBlock) - .setContainerID(blockID.getContainerID()) - .setTraceID(getBlockTraceID(blockID)) - .setDatanodeUuid(datanodeUuid) - .setGetBlock(readBlockRequest); - return request.build(); - } - - private ContainerProtos.BlockData getBlockData( - BlockID blockID, String chunkKey) { - ContainerProtos.BlockData.Builder builder = ContainerProtos.BlockData - .newBuilder() - .setBlockID(blockID.getDatanodeBlockIDProtobuf()) - .addChunks(getChunkInfo(blockID, chunkKey)); - return builder.build(); - } - - @Benchmark - public void createContainer(BenchMarkDatanodeDispatcher bmdd) { - 
long containerID = RandomUtils.nextLong(); - ContainerCommandRequestProto req = getCreateContainerCommand(containerID); - bmdd.dispatcher.dispatch(req, null); - bmdd.containers.add(containerID); - bmdd.containerCount.getAndIncrement(); - } - - @Benchmark - public void writeChunk(BenchMarkDatanodeDispatcher bmdd) { - bmdd.dispatcher.dispatch(getWriteChunkCommand( - getRandomBlockID(), getNewChunkToWrite()), null); - } - - @Benchmark - public void readChunk(BenchMarkDatanodeDispatcher bmdd) { - BlockID blockID = getRandomBlockID(); - String chunkKey = getRandomChunkToRead(); - bmdd.dispatcher.dispatch(getReadChunkCommand(blockID, chunkKey), null); - } - - @Benchmark - public void putBlock(BenchMarkDatanodeDispatcher bmdd) { - BlockID blockID = getRandomBlockID(); - String chunkKey = getNewChunkToWrite(); - bmdd.dispatcher.dispatch(getPutBlockCommand(blockID, chunkKey), null); - } - - @Benchmark - public void getBlock(BenchMarkDatanodeDispatcher bmdd) { - BlockID blockID = getRandomBlockID(); - bmdd.dispatcher.dispatch(getGetBlockCommand(blockID), null); - } - - // Chunks writes from benchmark only reaches certain containers - // Use INIT_CHUNKS instead of updated counters to guarantee - // key/chunks are readable. - - private BlockID getRandomBlockID() { - return new BlockID(getRandomContainerID(), getRandomKeyID()); - } - - private long getRandomContainerID() { - return containers.get(random.nextInt(INIT_CONTAINERS)); - } - - private long getRandomKeyID() { - return keys.get(random.nextInt(INIT_KEYS)); - } - - private String getRandomChunkToRead() { - return chunks.get(random.nextInt(INIT_CHUNKS)); - } - - private String getNewChunkToWrite() { - return "chunk-" + chunkCount.getAndIncrement(); - } - - private String getBlockTraceID(BlockID blockID) { - return blockID.getContainerID() + "-" + blockID.getLocalID() +"-trace"; - } -} diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java deleted file mode 100644 index a7e8f82f92c4..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java +++ /dev/null @@ -1,126 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with this - * work for additional information regarding copyright ownership. The ASF - * licenses this file to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - * License for the specific language governing permissions and limitations under - * the License. 
- * - */ - -package org.apache.hadoop.ozone.genesis; - -import java.io.File; -import java.io.IOException; -import java.util.concurrent.locks.ReentrantLock; - -import org.apache.hadoop.fs.FileUtil; -import org.apache.hadoop.hdds.HddsConfigKeys; -import org.apache.hadoop.hdds.client.RatisReplicationConfig; -import org.apache.hadoop.hdds.conf.OzoneConfiguration; -import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor; -import org.apache.hadoop.hdds.scm.block.BlockManager; -import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList; -import org.apache.hadoop.hdds.scm.events.SCMEvents; -import org.apache.hadoop.hdds.scm.pipeline.Pipeline; -import org.apache.hadoop.hdds.scm.pipeline.PipelineManager; -import org.apache.hadoop.hdds.scm.safemode.SCMSafeModeManager; -import org.apache.hadoop.hdds.scm.server.SCMConfigurator; -import org.apache.hadoop.hdds.scm.server.StorageContainerManager; - -import org.apache.commons.lang3.RandomStringUtils; -import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Level; -import org.openjdk.jmh.annotations.Param; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.TearDown; -import org.openjdk.jmh.annotations.Threads; -import org.openjdk.jmh.infra.Blackhole; - -/** - * Benchmarks BlockManager class. - */ -@State(Scope.Thread) -public class BenchMarkSCM { - - private static String testDir; - private static StorageContainerManager scm; - private static BlockManager blockManager; - private static ReentrantLock lock = new ReentrantLock(); - - @Param({ "1", "10", "100", "1000", "10000", "100000" }) - private static int numPipelines; - @Param({ "3", "10", "100" }) - private static int numContainersPerPipeline; - - @Setup(Level.Trial) - public static void initialize() - throws Exception { - try { - lock.lock(); - if (scm == null) { - OzoneConfiguration conf = new OzoneConfiguration(); - testDir = GenesisUtil.getTempPath() - .resolve(RandomStringUtils.randomNumeric(7)).toString(); - conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, testDir); - - GenesisUtil.configureSCM(conf, 10); - conf.setInt(OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT, - numContainersPerPipeline); - GenesisUtil.addPipelines(ReplicationFactor.THREE, numPipelines, conf); - - scm = GenesisUtil.getScm(conf, new SCMConfigurator()); - scm.start(); - blockManager = scm.getScmBlockManager(); - - // prepare SCM - PipelineManager pipelineManager = scm.getPipelineManager(); - for (Pipeline pipeline : pipelineManager - .getPipelines( - new RatisReplicationConfig(ReplicationFactor.THREE))) { - pipelineManager.openPipeline(pipeline.getId()); - } - scm.getEventQueue().fireEvent(SCMEvents.SAFE_MODE_STATUS, - new SCMSafeModeManager.SafeModeStatus(false, false)); - Thread.sleep(1000); - } - } finally { - lock.unlock(); - } - } - - @TearDown(Level.Trial) - public static void tearDown() { - try { - lock.lock(); - if (scm != null) { - scm.stop(); - scm.join(); - scm = null; - FileUtil.fullyDelete(new File(testDir)); - } - } finally { - lock.unlock(); - } - } - - @Threads(4) - @Benchmark - public void allocateBlockBenchMark(BenchMarkSCM state, - Blackhole bh) throws IOException { - BenchMarkSCM.blockManager - .allocateBlock(50, new RatisReplicationConfig(ReplicationFactor.THREE), - "Genesis", new ExcludeList()); - } -} diff --git 
a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkBlockDataToString.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkBlockDataToString.java deleted file mode 100644 index ecb10dbd0229..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkBlockDataToString.java +++ /dev/null @@ -1,166 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - *

- * http://www.apache.org/licenses/LICENSE-2.0 - *

- * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.ozone.genesis; - -import com.google.common.base.Preconditions; -import org.apache.commons.lang3.builder.ToStringBuilder; -import org.apache.commons.lang3.builder.ToStringStyle; -import org.apache.hadoop.hdds.client.BlockID; -import org.apache.hadoop.hdds.client.ContainerBlockID; -import org.apache.hadoop.ozone.container.common.helpers.BlockData; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Param; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.infra.Blackhole; - -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.ThreadLocalRandom; - -/** - * Benchmarks various implementations of {@link BlockData#toString}. - */ -@State(Scope.Benchmark) -public class BenchmarkBlockDataToString { - - @Param("1000") - private int count; - - @Param({"112"}) - private int capacity; - - private List data; - private List values; - - @Setup - public void createData() { - ThreadLocalRandom rnd = ThreadLocalRandom.current(); - data = new ArrayList<>(count); - values = new ArrayList<>(count); - for (int i = 0; i < count; i++) { - BlockID blockID = new BlockID(rnd.nextLong(), rnd.nextLong()); - BlockData item = new BlockData(blockID); - item.setBlockCommitSequenceId(rnd.nextLong()); - data.add(item); - values.add(item.toString()); - } - } - - @Benchmark - public void usingToStringBuilderDefaultCapacity( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - String str = new ToStringBuilder(item, ToStringStyle.NO_CLASS_NAME_STYLE) - .append("blockId", item.getBlockID().toString()) - .append("size", item.getSize()) - .toString(); - sink.consume(str); - Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - - @Benchmark - public void usingToStringBuilder( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - String str = new ToStringBuilder(item, ToStringStyle.NO_CLASS_NAME_STYLE, - new StringBuffer(capacity)) - .append("blockId", item.getBlockID().toString()) - .append("size", item.getSize()) - .toString(); - sink.consume(str); - Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - - @Benchmark - public void usingSimpleStringBuilder( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - String str = new StringBuilder(capacity) - .append("[") - .append("blockId=") - .append(item.getBlockID()) - .append(",size=") - .append(item.getSize()) - .append("]") - .toString(); - sink.consume(str); - Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - - @Benchmark - public void usingPushDownStringBuilder( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - StringBuilder sb = new StringBuilder(capacity); - item.appendTo(sb); - String str = sb.toString(); - sink.consume(str); - 
Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - - @Benchmark - public void usingConcatenation( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - String str = "[blockId=" + - item.getBlockID() + - ",size=" + - item.getSize() + - "]"; - sink.consume(str); - Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - - @Benchmark - public void usingInlineStringBuilder( - BenchmarkBlockDataToString state, Blackhole sink) { - for (int i = 0; i < state.count; i++) { - BlockData item = state.data.get(i); - BlockID blockID = item.getBlockID(); - ContainerBlockID containerBlockID = blockID.getContainerBlockID(); - String str = new StringBuilder(capacity) - .append("[") - .append("blockId=") - .append("conID: ") - .append(containerBlockID.getContainerID()) - .append(" locID: ") - .append(containerBlockID.getLocalID()) - .append(" bcsId: ") - .append(blockID.getBlockCommitSequenceId()) - .append(",size=") - .append(item.getSize()) - .append("]") - .toString(); - sink.consume(str); - Preconditions.checkArgument(str.equals(state.values.get(i))); - } - } - -} diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkChunkManager.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkChunkManager.java deleted file mode 100644 index c3299e395f8d..000000000000 --- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchmarkChunkManager.java +++ /dev/null @@ -1,180 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - *

- * http://www.apache.org/licenses/LICENSE-2.0 - *

- * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.ozone.genesis; - -import java.io.File; -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.file.Files; -import java.util.UUID; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicLong; - -import org.apache.hadoop.hdds.client.BlockID; -import org.apache.hadoop.hdds.conf.OzoneConfiguration; -import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException; -import org.apache.hadoop.ozone.OzoneConsts; -import org.apache.hadoop.ozone.common.ChunkBuffer; -import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo; -import org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion; -import org.apache.hadoop.ozone.container.common.transport.server.ratis.DispatcherContext; -import org.apache.hadoop.ozone.container.common.volume.HddsVolume; -import org.apache.hadoop.ozone.container.common.volume.ImmutableVolumeSet; -import org.apache.hadoop.ozone.container.common.volume.VolumeSet; -import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer; -import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData; -import org.apache.hadoop.ozone.container.keyvalue.impl.FilePerBlockStrategy; -import org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy; -import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager; - -import static java.nio.charset.StandardCharsets.UTF_8; -import org.apache.commons.io.FileUtils; -import static org.apache.commons.lang3.RandomStringUtils.randomAlphanumeric; -import static org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion.FILE_PER_BLOCK; -import static org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion.FILE_PER_CHUNK; -import org.openjdk.jmh.annotations.Benchmark; -import org.openjdk.jmh.annotations.Level; -import org.openjdk.jmh.annotations.Measurement; -import org.openjdk.jmh.annotations.Param; -import org.openjdk.jmh.annotations.Scope; -import org.openjdk.jmh.annotations.Setup; -import org.openjdk.jmh.annotations.State; -import org.openjdk.jmh.annotations.TearDown; -import org.openjdk.jmh.annotations.Warmup; -import org.openjdk.jmh.infra.Blackhole; - -/** - * Benchmark for ChunkManager implementations. - */ -@Warmup(time = 1, timeUnit = TimeUnit.SECONDS) -@Measurement(time = 1, timeUnit = TimeUnit.SECONDS) -public class BenchmarkChunkManager { - - private static final String DEFAULT_TEST_DATA_DIR = - "target" + File.separator + "test" + File.separator + "data"; - - private static final AtomicLong CONTAINER_COUNTER = new AtomicLong(); - - private static final DispatcherContext WRITE_STAGE = - new DispatcherContext.Builder() - .setStage(DispatcherContext.WriteChunkStage.WRITE_DATA) - .build(); - - private static final DispatcherContext COMMIT_STAGE = - new DispatcherContext.Builder() - .setStage(DispatcherContext.WriteChunkStage.COMMIT_DATA) - .build(); - - private static final long CONTAINER_SIZE = OzoneConsts.GB; - private static final long BLOCK_SIZE = 256 * OzoneConsts.MB; - - private static final String SCM_ID = UUID.randomUUID().toString(); - private static final String DATANODE_ID = UUID.randomUUID().toString(); - - /** - * State for the benchmark. 
-   */
-  @State(Scope.Benchmark)
-  public static class BenchmarkState {
-
-    @Param({"1048576", "4194304", "16777216", "67108864"})
-    private int chunkSize;
-
-    private File dir;
-    private ChunkBuffer buffer;
-    private VolumeSet volumeSet;
-    private OzoneConfiguration config;
-
-    private static File getTestDir() throws IOException {
-      File dir = new File(DEFAULT_TEST_DATA_DIR).getAbsoluteFile();
-      Files.createDirectories(dir.toPath());
-      return dir;
-    }
-
-    @Setup(Level.Iteration)
-    public void setup() throws IOException {
-      dir = getTestDir();
-      config = new OzoneConfiguration();
-      HddsVolume volume = new HddsVolume.Builder(dir.getAbsolutePath())
-          .conf(config)
-          .datanodeUuid(DATANODE_ID)
-          .build();
-
-      volumeSet = new ImmutableVolumeSet(volume);
-
-      byte[] arr = randomAlphanumeric(chunkSize).getBytes(UTF_8);
-      buffer = ChunkBuffer.wrap(ByteBuffer.wrap(arr));
-    }
-
-    @TearDown(Level.Iteration)
-    public void cleanup() {
-      FileUtils.deleteQuietly(dir);
-    }
-  }
-
-  @Benchmark
-  public void writeMultipleFiles(BenchmarkState state, Blackhole sink)
-      throws StorageContainerException {
-
-    ChunkManager chunkManager = new FilePerChunkStrategy(true, null, null);
-    benchmark(chunkManager, FILE_PER_CHUNK, state, sink);
-  }
-
-  @Benchmark
-  public void writeSingleFile(BenchmarkState state, Blackhole sink)
-      throws StorageContainerException {
-
-    ChunkManager chunkManager = new FilePerBlockStrategy(true, null, null);
-    benchmark(chunkManager, FILE_PER_BLOCK, state, sink);
-  }
-
-  private void benchmark(ChunkManager subject, ChunkLayOutVersion layout,
-      BenchmarkState state, Blackhole sink)
-      throws StorageContainerException {
-
-    final long containerID = CONTAINER_COUNTER.getAndIncrement();
-
-    KeyValueContainerData containerData =
-        new KeyValueContainerData(containerID, layout,
-            CONTAINER_SIZE, UUID.randomUUID().toString(),
-            DATANODE_ID);
-    KeyValueContainer container =
-        new KeyValueContainer(containerData, state.config);
-    container.create(state.volumeSet, (volumes, any) -> volumes.get(0), SCM_ID);
-
-    final long blockCount = CONTAINER_SIZE / BLOCK_SIZE;
-    final long chunkCount = BLOCK_SIZE / state.chunkSize;
-
-    for (long b = 0; b < blockCount; b++) {
-      final BlockID blockID = new BlockID(containerID, b);
-
-      for (long c = 0; c < chunkCount; c++) {
-        final String chunkName = String.format("block.%d.chunk.%d", b, c);
-        final long offset = c * state.chunkSize;
-        ChunkInfo chunkInfo = new ChunkInfo(chunkName, offset, state.chunkSize);
-        ChunkBuffer data = state.buffer.duplicate(0, state.chunkSize);
-
-        subject.writeChunk(container, blockID, chunkInfo, data, WRITE_STAGE);
-        subject.writeChunk(container, blockID, chunkInfo, data, COMMIT_STAGE);
-
-        sink.consume(chunkInfo);
-      }
-    }
-  }
-
-}
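Each invocation of the deleted benchmark wrote one full container: with `CONTAINER_SIZE` of 1 GB and `BLOCK_SIZE` of 256 MB it iterated 4 blocks, writing `BLOCK_SIZE / chunkSize` chunks per block, and every chunk twice (once per `WRITE_DATA` and `COMMIT_DATA` stage). A minimal standalone sketch of that arithmetic; the class and method names here are illustrative, not part of the removed code:

```
public final class ChunkMath {
  // Constants mirrored from the deleted BenchmarkChunkManager.
  private static final long MB = 1024L * 1024;
  private static final long GB = 1024L * MB;
  private static final long CONTAINER_SIZE = GB;     // OzoneConsts.GB
  private static final long BLOCK_SIZE = 256 * MB;   // 256 * OzoneConsts.MB

  public static void main(String[] args) {
    // Same @Param values as the deleted benchmark: 1 MiB .. 64 MiB chunks.
    for (int chunkSize : new int[] {1048576, 4194304, 16777216, 67108864}) {
      long blockCount = CONTAINER_SIZE / BLOCK_SIZE;  // 4 blocks
      long chunkCount = BLOCK_SIZE / chunkSize;       // 256 .. 4 chunks per block
      // Every chunk was written twice: WRITE_DATA, then COMMIT_DATA.
      long writeCalls = blockCount * chunkCount * 2;
      System.out.printf("chunkSize=%d -> %d blocks x %d chunks = %d writeChunk calls%n",
          chunkSize, blockCount, chunkCount, writeCalls);
    }
  }
}
```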
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java
deleted file mode 100644
index 77da882a27a1..000000000000
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/Genesis.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- *
- */
-
-package org.apache.hadoop.ozone.genesis;
-
-import org.openjdk.jmh.profile.GCProfiler;
-import org.openjdk.jmh.profile.StackProfiler;
-import org.openjdk.jmh.runner.Runner;
-import org.openjdk.jmh.runner.RunnerException;
-import org.openjdk.jmh.runner.options.OptionsBuilder;
-import picocli.CommandLine;
-import picocli.CommandLine.Option;
-import picocli.CommandLine.Command;
-
-import static org.openjdk.jmh.runner.options.TimeValue.seconds;
-
-/**
- * Main class that executes a set of HDDS/Ozone benchmarks.
- * We purposefully don't use the runner and tools classes from Hadoop.
- * There are some name collisions with OpenJDK JMH package.
- *
- * Hence, these classes do not use the Tool/Runner pattern of standard Hadoop
- * CLI.
- */
-@Command(name = "ozone genesis",
-    description = "Tool for running ozone benchmarks",
-    mixinStandardHelpOptions = true)
-public final class Genesis {
-
-  // After adding benchmark in genesis package add the benchmark name in the
-  // description for this option.
-  @Option(names = {"-b", "-benchmark", "--benchmark"},
-      split = ",", description =
-      "Option used for specifying benchmarks to run.\n"
-          + "Ex. ozone genesis -benchmark BenchMarkContainerStateMap,"
-          + "Possible benchmarks which can be used are "
-          + "{BenchMarkContainerStateMap, "
-          + "BenchMarkOMClient, "
-          + "BenchMarkSCM, BenchMarkMetadataStoreReads, "
-          + "BenchMarkMetadataStoreWrites, BenchMarkDatanodeDispatcher, "
-          + "BenchMarkRocksDbStore, BenchMarkCRCStreaming, BenchMarkCRCBatch}")
-  private static String[] benchmarks;
-
-  @Option(names = "-t", defaultValue = "4",
-      description = "Number of threads to use for the benchmark.\n"
-          + "This option can be overridden by threads mentioned in benchmark.")
-  private static int numThreads;
-
-  @Option(names = "--seconds",
-      description = "Number of seconds to run each benchmark method.\n"
-          + "By default no limit is set.")
-  private static int seconds = -1;
-
-  private Genesis() {
-  }
-
-  public static void main(String[] args) throws RunnerException {
-    CommandLine commandLine = new CommandLine(new Genesis());
-    commandLine.parse(args);
-    if (commandLine.isUsageHelpRequested()) {
-      commandLine.usage(System.out);
-      return;
-    }
-
-    OptionsBuilder optionsBuilder = new OptionsBuilder();
-    if (benchmarks != null) {
-      // The OptionsBuilder#include takes a regular expression as argument.
-      // Therefore it is important to keep the benchmark names unique for
-      // running a benchmark. For example if there are two benchmarks -
-      // BenchMarkOM and BenchMarkOMClient and we include BenchMarkOM then
-      // both the benchmarks will be run.
-      for (String benchmark : benchmarks) {
-        optionsBuilder.include(benchmark);
-      }
-    }
-    optionsBuilder.warmupIterations(2)
-        .measurementIterations(20)
-        .addProfiler(StackProfiler.class)
-        .addProfiler(GCProfiler.class)
-        .shouldDoGC(true)
-        .forks(1)
-        .threads(numThreads);
-
-    if (seconds > 0) {
-      optionsBuilder.measurementTime(seconds(seconds));
-    }
-
-    new Runner(optionsBuilder.build()).run();
-  }
-}
-
-
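Anyone who relied on `ozone genesis` can still drive JMH benchmarks directly: the deleted `main` reduces to a plain `OptionsBuilder` run. A minimal sketch using only the builder calls visible in the removed code; the included benchmark name is illustrative (this change removes `BenchMarkRocksDbStore` as well):

```
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public final class RunBenchmark {
  public static void main(String[] args) throws RunnerException {
    Options options = new OptionsBuilder()
        // include() takes a regex, so keep benchmark names unique,
        // exactly as the comment in the deleted main() warned.
        .include("BenchMarkRocksDbStore")
        .warmupIterations(2)        // same defaults as the removed tool
        .measurementIterations(20)
        .addProfiler(StackProfiler.class)
        .addProfiler(GCProfiler.class)
        .shouldDoGC(true)
        .forks(1)
        .threads(4)
        .build();
    new Runner(options).run();
  }
}
```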
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisMemoryProfiler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisMemoryProfiler.java
deleted file mode 100644
index 8ba19fc1747a..000000000000
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisMemoryProfiler.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- *
- */
-
-package org.apache.hadoop.ozone.genesis;
-
-import org.apache.hadoop.conf.StorageUnit;
-import org.openjdk.jmh.infra.BenchmarkParams;
-import org.openjdk.jmh.infra.IterationParams;
-import org.openjdk.jmh.profile.InternalProfiler;
-import org.openjdk.jmh.results.AggregationPolicy;
-import org.openjdk.jmh.results.IterationResult;
-import org.openjdk.jmh.results.Result;
-import org.openjdk.jmh.results.ScalarResult;
-
-import java.util.ArrayList;
-import java.util.Collection;
-
-/**
- * Max memory profiler.
- */
-public class GenesisMemoryProfiler implements InternalProfiler {
-  @Override
-  public void beforeIteration(BenchmarkParams benchmarkParams,
-      IterationParams iterationParams) {
-
-  }
-
-  @Override
-  public Collection<? extends Result> afterIteration(BenchmarkParams
-      benchmarkParams, IterationParams iterationParams, IterationResult
-      result) {
-    long totalHeap = Runtime.getRuntime().totalMemory();
-
-    Collection<ScalarResult> samples = new ArrayList<>();
-    samples.add(new ScalarResult("Max heap",
-        StorageUnit.BYTES.toGBs(totalHeap), "GBs",
-        AggregationPolicy.MAX));
-    return samples;
-  }
-
-  @Override
-  public String getDescription() {
-    return "Genesis Memory Profiler. Computes Max Memory used by a test.";
-  }
-}
-
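The profiler above sampled `Runtime.getRuntime().totalMemory()` — the committed heap, not a true peak — once per iteration, and relied on `AggregationPolicy.MAX` to report the maximum across iterations. A standalone sketch of that single measurement, with a class name of our own choosing:

```
public final class HeapSample {
  public static void main(String[] args) {
    // Committed heap in bytes; MAX aggregation across iterations is what
    // turned this instantaneous sample into a "max heap" figure.
    long totalHeap = Runtime.getRuntime().totalMemory();
    // Same bytes -> GB conversion StorageUnit.BYTES.toGBs() performed.
    double gbs = totalHeap / (1024.0 * 1024.0 * 1024.0);
    System.out.printf("Max heap sample: %.3f GBs%n", gbs);
  }
}
```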

diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java
deleted file mode 100644
index cffb4c4daee5..000000000000
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/GenesisUtil.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.ozone.genesis;
-
-import java.io.IOException;
-import java.nio.file.Path;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Random;
-import java.util.UUID;
-
-import org.apache.hadoop.hdds.client.RatisReplicationConfig;
-import org.apache.hadoop.hdds.conf.ConfigurationSource;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.scm.ScmConfigKeys;
-import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStore;
-import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStoreImpl;
-import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
-import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
-import org.apache.hadoop.hdds.scm.server.SCMConfigurator;
-import org.apache.hadoop.hdds.scm.server.SCMStorageConfig;
-import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
-import org.apache.hadoop.hdds.utils.db.Table;
-import org.apache.hadoop.ozone.common.Storage;
-import org.apache.hadoop.ozone.om.OMConfigKeys;
-import org.apache.hadoop.ozone.om.OMStorage;
-import org.apache.hadoop.ozone.om.OzoneManager;
-import org.apache.hadoop.security.authentication.client.AuthenticationException;
-
-/**
- * Utility class for benchmark test cases.
- */
-public final class GenesisUtil {
-
-  private GenesisUtil() {
-    // private constructor.
-  }
-
-  public static final String DEFAULT_TYPE = "default";
-  public static final String CACHE_10MB_TYPE = "Cache10MB";
-  public static final String CACHE_1GB_TYPE = "Cache1GB";
-  public static final String CLOSED_TYPE = "ClosedContainer";
-
-  private static final int DB_FILE_LEN = 7;
-  private static final String TMP_DIR = "java.io.tmpdir";
-  private static final Random RANDOM = new Random();
-  private static final String RANDOM_LOCAL_ADDRESS = "127.0.0.1:0";
-
-  public static Path getTempPath() {
-    return Paths.get(System.getProperty(TMP_DIR));
-  }
-
-  public static DatanodeDetails createDatanodeDetails(UUID uuid) {
-    String ipAddress =
-        RANDOM.nextInt(256) + "." + RANDOM.nextInt(256) + "." + RANDOM
-            .nextInt(256) + "."
-            + RANDOM.nextInt(256);
-
-    DatanodeDetails.Port containerPort = DatanodeDetails.newPort(
-        DatanodeDetails.Port.Name.STANDALONE, 0);
-    DatanodeDetails.Port ratisPort = DatanodeDetails.newPort(
-        DatanodeDetails.Port.Name.RATIS, 0);
-    DatanodeDetails.Port restPort = DatanodeDetails.newPort(
-        DatanodeDetails.Port.Name.REST, 0);
-    DatanodeDetails.Builder builder = DatanodeDetails.newBuilder();
-    builder.setUuid(uuid)
-        .setHostName("localhost")
-        .setIpAddress(ipAddress)
-        .addPort(containerPort)
-        .addPort(ratisPort)
-        .addPort(restPort);
-    return builder.build();
-  }
-
-  static StorageContainerManager getScm(OzoneConfiguration conf,
-      SCMConfigurator configurator) throws IOException,
-      AuthenticationException {
-    SCMStorageConfig scmStore = new SCMStorageConfig(conf);
-    if(scmStore.getState() != Storage.StorageState.INITIALIZED) {
-      String clusterId = UUID.randomUUID().toString();
-      String scmId = UUID.randomUUID().toString();
-      scmStore.setClusterId(clusterId);
-      scmStore.setScmId(scmId);
-      // writes the version file properties
-      scmStore.initialize();
-    }
-    return StorageContainerManager.createSCM(conf, configurator);
-  }
-
-  static void configureSCM(OzoneConfiguration conf, int numHandlers) {
-    conf.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY,
-        RANDOM_LOCAL_ADDRESS);
-    conf.set(ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_ADDRESS_KEY,
-        RANDOM_LOCAL_ADDRESS);
-    conf.set(ScmConfigKeys.OZONE_SCM_DATANODE_ADDRESS_KEY,
-        RANDOM_LOCAL_ADDRESS);
-    conf.set(ScmConfigKeys.OZONE_SCM_HTTP_ADDRESS_KEY,
-        RANDOM_LOCAL_ADDRESS);
-    conf.setInt(ScmConfigKeys.OZONE_SCM_HANDLER_COUNT_KEY, numHandlers);
-  }
-
-  static void addPipelines(HddsProtos.ReplicationFactor factor,
-      int numPipelines, ConfigurationSource conf) throws Exception {
-    SCMMetadataStore scmMetadataStore =
-        new SCMMetadataStoreImpl((OzoneConfiguration)conf);
-
-    Table<PipelineID, Pipeline> pipelineTable =
-        scmMetadataStore.getPipelineTable();
-    List<DatanodeDetails> nodes = new ArrayList<>();
-    for (int i = 0; i < factor.getNumber(); i++) {
-      nodes
-          .add(GenesisUtil.createDatanodeDetails(UUID.randomUUID()));
-    }
-    for (int i = 0; i < numPipelines; i++) {
-      Pipeline pipeline =
-          Pipeline.newBuilder()
-              .setState(Pipeline.PipelineState.OPEN)
-              .setId(PipelineID.randomId())
-              .setReplicationConfig(new RatisReplicationConfig(factor))
-              .setNodes(nodes)
-              .build();
-      pipelineTable.put(pipeline.getId(),
-          pipeline);
-    }
-    scmMetadataStore.getStore().close();
-  }
-
-  static OzoneManager getOm(OzoneConfiguration conf)
-      throws IOException, AuthenticationException {
-    OMStorage omStorage = new OMStorage(conf);
-    SCMStorageConfig scmStore = new SCMStorageConfig(conf);
-    if (omStorage.getState() != Storage.StorageState.INITIALIZED) {
-      omStorage.setClusterId(scmStore.getClusterID());
-      omStorage.setOmId(UUID.randomUUID().toString());
-      omStorage.initialize();
-    }
-    return OzoneManager.createOm(conf);
-  }
-
-  static void configureOM(OzoneConfiguration conf, int numHandlers) {
-    conf.set(OMConfigKeys.OZONE_OM_HTTP_ADDRESS_KEY,
-        RANDOM_LOCAL_ADDRESS);
-    conf.setInt(OMConfigKeys.OZONE_OM_HANDLER_COUNT_KEY, numHandlers);
-  }
-}
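Together these helpers let a benchmark bootstrap SCM state without a real cluster: bind every endpoint to a loopback port, pre-populate the pipeline table directly in the metadata store, then start an in-process SCM. A hedged sketch of the call sequence, inferred only from the signatures above (handler and pipeline counts are arbitrary, and `ozone.metadata.dirs` is assumed to point at a scratch directory):

```
// Same package as GenesisUtil, since several helpers are package-private.
package org.apache.hadoop.ozone.genesis;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.server.SCMConfigurator;
import org.apache.hadoop.hdds.scm.server.StorageContainerManager;

public final class BenchmarkBootstrap {
  public static void main(String[] args) throws Exception {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Bind client/block/datanode/HTTP endpoints to 127.0.0.1:0.
    GenesisUtil.configureSCM(conf, 10);
    // Pre-populate the pipeline table: 16 OPEN Ratis pipelines, factor THREE.
    GenesisUtil.addPipelines(HddsProtos.ReplicationFactor.THREE, 16, conf);
    // First call writes the SCM version file, then boots an in-process SCM.
    StorageContainerManager scm = GenesisUtil.getScm(conf, new SCMConfigurator());
    scm.start();
  }
}
```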
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/package-info.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/package-info.java
deleted file mode 100644
index a7c8ee26486a..000000000000
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/package-info.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- *
- */
-
-/**
- * Zephyr contains a set of benchmarks for Ozone. This is a command line tool
- * that can be run by end users to get a sense of what kind of performance
- * the system is capable of; Since Ozone is a new system, these benchmarks
- * will allow us to correlate a base line to real world performance.
- */
-package org.apache.hadoop.ozone.genesis;
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
index 0759469340b8..cf94d9de117d 100644
--- a/pom.xml
+++ b/pom.xml
@@ -139,7 +139,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs
     1.6.0
     0.33.0
-    <jmh.version>1.19</jmh.version>
     2.5.0
@@ -670,13 +669,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs
         <version>${hadoop.version}</version>
       </dependency>
-
-      <dependency>
-        <groupId>org.openjdk.jmh</groupId>
-        <artifactId>jmh-generator-annprocess</artifactId>
-        <version>${jmh.version}</version>
-      </dependency>
-
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-kms</artifactId>