Merged

53 commits
994182b
HDDS-8882. Manage status of DeleteBlocksCommand in SCM to avoid sendi…
xichen01 Dec 18, 2023
bdc7924
HDDS-6152. Migrate TestOzoneFileSystem to JUnit5 (#5795)
adoroszlai Dec 19, 2023
7a72e57
HDDS-9966. Bump maven-shade-plugin to 3.5.1 (#5823)
dependabot[bot] Dec 19, 2023
1999ba7
HDDS-9807. Consider volume committed space when checking if datanode …
vtutrinov Dec 20, 2023
d301b97
HDDS-9922. Migrate TestOzoneFileInterfaces to JUnit5 (#5838)
adoroszlai Dec 22, 2023
a78f1e6
HDDS-10027. NPE in VolumeInfoMetrics.getCommitted() (#5885)
adoroszlai Dec 29, 2023
3563c82
HDDS-10007. Rename ManagedSstFileReader in rocksdb-checkpoint-differ …
adoroszlai Dec 29, 2023
18b9ccf
HDDS-9959. Propagate group remove to other datanodes during pipeline …
ivandika3 Jan 4, 2024
82c6118
HDDS-9883. Recon - Improve the performance of processing IncrementalC…
devmadhuu Jan 4, 2024
dc3473e
HDDS-10046. Replace PrecomputedVolumeSpace with SpaceUsageSource.Fixe…
adoroszlai Jan 5, 2024
b7659bd
HDDS-8982. Log flooded by WritableRatisContainerProvider if pipeline'…
adoroszlai Jan 5, 2024
08bf39f
HDDS-10070. Intermittent failure in TestWritableRatisContainerProvide…
adoroszlai Jan 5, 2024
96d7864
HDDS-8888. Consider Datanode queue capacity when sending DeleteBlocks…
xichen01 Jan 6, 2024
ef9d276
HDDS-10178. Shaded Jar build failure in case-insensitive filesystem (…
adoroszlai Jan 22, 2024
9f7c812
HDDS-10219. Bump frontend-maven-plugin to 1.15.0 (#6104)
dependabot[bot] Jan 26, 2024
2dbe114
HDDS-10225. Speed up TestSCMHAManagerImpl. (#6109)
adoroszlai Jan 30, 2024
fd207b9
HDDS-10029. Improved logs for SCMDeletedBlockTransactionStatusManager…
xichen01 Jan 30, 2024
8ba17b0
HDDS-10246. Remove KeyValueHandler.checkContainerIsHealthy to improve…
whbing Feb 1, 2024
01754f7
HDDS-10262. Encapsulate SnapshotCache inside OmSnapshotManager (#6135)
Cyrill Feb 9, 2024
b0dcb7a
HDDS-10250. Use SnapshotId as key in SnapshotCache (#6139)
hemantk-12 Feb 10, 2024
ae38ec3
HDDS-7810. Support namespace summaries (du, dist & counts) for OBJECT…
ArafatKhan2198 Mar 1, 2024
03c658d
HDDS-10504. Remove unused VolumeInfo#configuredCapacity (#6363)
adoroszlai Mar 11, 2024
5c9e097
HDDS-10505. Move space reservation logic to VolumeUsage (#6370)
adoroszlai Mar 18, 2024
fee68d7
HDDS-5865. Make read retry interval and attempts in BlockInputStream …
SaketaChalamchala Mar 21, 2024
6eb81f3
HDDS-9534. Support namespace summaries (du, dist & counts) for LEGACY…
ArafatKhan2198 Mar 29, 2024
c01dd83
HDDS-10206. Expose jmx metrics for snapshot cache size on the ozone m…
ceekay47 Apr 5, 2024
bb4f066
HDDS-10452. Improve Recon Disk Usage to fetch and display Top N recor…
ArafatKhan2198 Apr 16, 2024
9fc80a3
HDDS-10156. Optimize Snapshot Cache get and eviction (#6024)
swamirishi Apr 17, 2024
84e768e
HDDS-10652. EC Reconstruction fails with "IOException: None of the bl…
siddhantsangwan Apr 18, 2024
392dc66
HDDS-10614. Avoid decreasing cached space usage below zero (#6508)
ArafatKhan2198 Apr 18, 2024
427f901
HDDS-10783. Close SstFileReaderIterator in RocksDBCheckpointDiffer (#…
hemantk-12 May 1, 2024
e40d628
HDDS-10784. Multipart upload to encrypted bucket fails with ClassCast…
adoroszlai May 2, 2024
42faa4f
HDDS-10792. Bump Netty to 4.1.109.Final (#6622)
rohit-kb May 3, 2024
95e55a8
HDDS-10720. Datanode volume DU reserved percent should have a non-zer…
errose28 May 3, 2024
e1e13e9
HDDS-10787. Updated rocksdb-checkpoint-differ to use managed RocksDB …
hemantk-12 May 3, 2024
eec54ed
HDDS-10806. Bump Bouncy Castle to 1.78.1 (#6632)
dependabot[bot] May 4, 2024
0690cff
HDDS-10803. HttpServer fails to start with wildcard principal (#6631)
adoroszlai May 4, 2024
e57c906
HDDS-10815. Bump Spring Framework to 5.3.34 (#6643)
rohit-kb May 6, 2024
08e470e
HDDS-10834. Revert snapshot diff output change added in HDDS-9360 (#6…
hemantk-12 May 9, 2024
8792c9a
HDDS-10608. Recon can't get full key when using Recon API. (#6492)
ArafatKhan2198 May 9, 2024
a747f10
HDDS-10696. Fix test failure caused by empty snapshot installation (#…
hemantk-12 May 9, 2024
1233977
HDDS-10781. Do not use OFSPath in O3FS BasicOzoneClientAdapterImpl (#…
chungen0126 May 9, 2024
10cdd6e
HDDS-10371. NPE in OzoneAclUtils.isOwner (#6676)
adoroszlai May 14, 2024
3df726c
HDDS-10875. XceiverRatisServer#getRaftPeersInPipeline should be calle…
ivandika3 May 21, 2024
daf4548
HDDS-10832. Client should switch to streaming based on OpenKeySession…
adoroszlai May 22, 2024
cff3941
HDDS-10924. TestSCMHAManagerImpl#testAddSCM fails on ratis master (#6…
adoroszlai May 31, 2024
4bf580a
HDDS-10999. Remove dependency on ratis-server from Ozone Client (#6800)
adoroszlai Jun 10, 2024
5fde0fb
HDDS-11013. Ensure version is always set in ContainerCommandRequestPr…
swamirishi Jun 17, 2024
91567d9
HDDS-10983. EC Key read corruption when the replica index of containe…
swamirishi Jun 20, 2024
e2a8671
HDDS-10910. Bump Ratis to 3.1.0 (#6872)
adoroszlai Jun 27, 2024
ec7ceea
HDDS-11104. Bump maven-dependency-plugin to 3.7.1 (#6903)
dependabot[bot] Jul 6, 2024
955619b
HDDS-11172. Bump vite to 4.5.3 (#6918)
dependabot[bot] Jul 13, 2024
2b67ac9
HDDS-11186. First container log missing from bundle (#6952)
adoroszlai Jul 17, 2024
6 changes: 5 additions & 1 deletion .github/workflows/ci.yml
@@ -135,13 +135,17 @@ jobs:
- build-info
- build
- basic
runs-on: ubuntu-20.04
timeout-minutes: 30
if: needs.build-info.outputs.needs-compile == 'true'
strategy:
matrix:
java: [ 11, 17 ]
include:
- os: ubuntu-20.04
- java: 8
os: macos-12
fail-fast: false
runs-on: ${{ matrix.os }}
steps:
- name: Download Ozone source tarball
uses: actions/download-artifact@v4
25 changes: 14 additions & 11 deletions dev-support/ci/selective_ci_checks.bats
@@ -177,17 +177,20 @@ load bats-assert/load.bash
assert_output -p needs-kubernetes-tests=false
}

@test "native test in other module" {
run dev-support/ci/selective_ci_checks.sh 7d01cc14a6

assert_output -p 'basic-checks=["rat","author","checkstyle","findbugs","native","unit"]'
assert_output -p needs-build=true
assert_output -p needs-compile=true
assert_output -p needs-compose-tests=false
assert_output -p needs-dependency-check=false
assert_output -p needs-integration-tests=false
assert_output -p needs-kubernetes-tests=false
}
# disabled, because this test fails if
# hadoop-hdds/rocksdb-checkpoint-differ/src/test/java/org/apache/ozone/rocksdb/util/TestManagedSstFileReader.java
# is not present in the current tree (i.e. if file is renamed, moved or deleted)
#@test "native test in other module" {
# run dev-support/ci/selective_ci_checks.sh 7d01cc14a6
#
# assert_output -p 'basic-checks=["rat","author","checkstyle","findbugs","native","unit"]'
# assert_output -p needs-build=true
# assert_output -p needs-compile=true
# assert_output -p needs-compose-tests=false
# assert_output -p needs-dependency-check=false
# assert_output -p needs-integration-tests=false
# assert_output -p needs-kubernetes-tests=false
#}

@test "kubernetes only" {
run dev-support/ci/selective_ci_checks.sh 5336bb9bd
2 changes: 1 addition & 1 deletion hadoop-hdds/client/pom.xml
@@ -55,7 +55,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<artifactId>mockito-inline</artifactId>
<scope>test</scope>
</dependency>
<dependency>
OzoneClientConfig.java
@@ -144,6 +144,23 @@ public enum ChecksumCombineMode {
tags = ConfigTag.CLIENT)
private int retryInterval = 0;

@Config(key = "read.max.retries",
defaultValue = "3",
description = "Maximum number of retries by Ozone Client on "
+ "encountering connectivity exception when reading a key.",
tags = ConfigTag.CLIENT)
private int maxReadRetryCount = 3;

@Config(key = "read.retry.interval",
defaultValue = "1",
description =
"Indicates the time duration in seconds a client will wait "
+ "before retrying a read key request on encountering "
+ "a connectivity excepetion from Datanodes . "
+ "By default the interval is 1 second",
tags = ConfigTag.CLIENT)
private int readRetryInterval = 1;

@Config(key = "checksum.type",
defaultValue = "CRC32",
description = "The checksum type [NONE/ CRC32/ CRC32C/ SHA256/ MD5] "
@@ -326,6 +343,22 @@ public void setRetryInterval(int retryInterval) {
this.retryInterval = retryInterval;
}

public int getMaxReadRetryCount() {
return maxReadRetryCount;
}

public void setMaxReadRetryCount(int maxReadRetryCount) {
this.maxReadRetryCount = maxReadRetryCount;
}

public int getReadRetryInterval() {
return readRetryInterval;
}

public void setReadRetryInterval(int readRetryInterval) {
this.readRetryInterval = readRetryInterval;
}

public ChecksumType getChecksumType() {
return ChecksumType.valueOf(checksumType);
}
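The two read retry settings above are ordinary OzoneClientConfig entries, so they can be tuned like any other client option. A minimal sketch, assuming OzoneClientConfig keeps its usual ozone.client key prefix (the full key names and the values below are illustrative, not part of this change):

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.OzoneClientConfig;

public class ReadRetryConfigSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumed full key names: "ozone.client." + the @Config keys added above.
    conf.setInt("ozone.client.read.max.retries", 5);     // default 3
    conf.setInt("ozone.client.read.retry.interval", 2);  // seconds, default 1

    OzoneClientConfig clientConfig = conf.getObject(OzoneClientConfig.class);
    System.out.println("max read retries = " + clientConfig.getMaxReadRetryCount());
    System.out.println("read retry interval = " + clientConfig.getReadRetryInterval());
  }
}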
XceiverClientGrpc.java
@@ -49,6 +49,7 @@
import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
import org.apache.hadoop.hdds.tracing.GrpcClientInterceptor;
import org.apache.hadoop.hdds.tracing.TracingUtil;
import org.apache.hadoop.ozone.ClientVersion;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.OzoneConsts;
import java.util.concurrent.TimeoutException;
@@ -274,6 +275,11 @@ public ContainerCommandResponseProto sendCommand(
List<DatanodeDetails> datanodeList = pipeline.getNodes();
HashMap<DatanodeDetails, CompletableFuture<ContainerCommandResponseProto>>
futureHashMap = new HashMap<>();
if (!request.hasVersion()) {
ContainerCommandRequestProto.Builder builder = ContainerCommandRequestProto.newBuilder(request);
builder.setVersion(ClientVersion.CURRENT.toProtoValue());
request = builder.build();
}
for (DatanodeDetails dn : datanodeList) {
try {
futureHashMap.put(dn, sendCommandAsync(request, dn).getResponse());
@@ -334,10 +340,13 @@ private XceiverClientReply sendCommandWithTraceIDAndRetry(

return TracingUtil.executeInNewSpan(spanName,
() -> {
ContainerCommandRequestProto finalPayload =
ContainerCommandRequestProto.Builder builder =
ContainerCommandRequestProto.newBuilder(request)
.setTraceID(TracingUtil.exportCurrentSpan()).build();
return sendCommandWithRetry(finalPayload, validators);
.setTraceID(TracingUtil.exportCurrentSpan());
if (!request.hasVersion()) {
builder.setVersion(ClientVersion.CURRENT.toProtoValue());
}
return sendCommandWithRetry(builder.build(), validators);
});
}

@@ -457,12 +466,14 @@ public XceiverClientReply sendCommandAsync(

try (Scope ignored = GlobalTracer.get().activateSpan(span)) {

ContainerCommandRequestProto finalPayload =
ContainerCommandRequestProto.Builder builder =
ContainerCommandRequestProto.newBuilder(request)
.setTraceID(TracingUtil.exportCurrentSpan())
.build();
.setTraceID(TracingUtil.exportCurrentSpan());
if (!request.hasVersion()) {
builder.setVersion(ClientVersion.CURRENT.toProtoValue());
}
XceiverClientReply asyncReply =
sendCommandAsync(finalPayload, pipeline.getFirstNode());
sendCommandAsync(builder.build(), pipeline.getFirstNode());
if (shouldBlockAndWaitAsyncReply(request)) {
asyncReply.getResponse().get();
}
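All three send paths above apply the same rule: if the caller has not set a version on the ContainerCommandRequestProto, the client stamps ClientVersion.CURRENT before dispatching the request. A minimal sketch of that rule as a standalone helper (the helper class is hypothetical; the patch inlines the check instead):

import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
import org.apache.hadoop.ozone.ClientVersion;

final class RequestVersionDefaulting {
  private RequestVersionDefaulting() {
  }

  /** Returns the request unchanged if it already carries a version,
   * otherwise a copy stamped with the current client version. */
  static ContainerCommandRequestProto withDefaultVersion(ContainerCommandRequestProto request) {
    if (request.hasVersion()) {
      return request;
    }
    return ContainerCommandRequestProto.newBuilder(request)
        .setVersion(ClientVersion.CURRENT.toProtoValue())
        .build();
  }
}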
BlockInputStream.java
@@ -36,6 +36,7 @@
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.DatanodeBlockID;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.GetBlockResponseProto;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.OzoneClientConfig;
import org.apache.hadoop.hdds.scm.XceiverClientFactory;
import org.apache.hadoop.hdds.scm.XceiverClientSpi;
import org.apache.hadoop.hdds.scm.XceiverClientSpi.Validator;
@@ -76,8 +77,8 @@ public class BlockInputStream extends BlockExtendedInputStream {
private XceiverClientSpi xceiverClient;
private boolean initialized = false;
// TODO: do we need to change retrypolicy based on exception.
private final RetryPolicy retryPolicy =
HddsClientUtils.createRetryPolicy(3, TimeUnit.SECONDS.toMillis(1));
private final RetryPolicy retryPolicy;

private int retries;

// List of ChunkInputStreams, one for each chunk in the block
@@ -112,25 +113,29 @@ public class BlockInputStream extends BlockExtendedInputStream {
private final Function<BlockID, BlockLocationInfo> refreshFunction;

public BlockInputStream(BlockID blockId, long blockLen, Pipeline pipeline,
Token<OzoneBlockTokenIdentifier> token, boolean verifyChecksum,
Token<OzoneBlockTokenIdentifier> token,
XceiverClientFactory xceiverClientFactory,
Function<BlockID, BlockLocationInfo> refreshFunction) {
Function<BlockID, BlockLocationInfo> refreshFunction,
OzoneClientConfig config) throws IOException {
this.blockID = blockId;
this.length = blockLen;
setPipeline(pipeline);
tokenRef.set(token);
this.verifyChecksum = verifyChecksum;
this.verifyChecksum = config.isChecksumVerify();
this.xceiverClientFactory = xceiverClientFactory;
this.refreshFunction = refreshFunction;
this.retryPolicy =
HddsClientUtils.createRetryPolicy(config.getMaxReadRetryCount(),
TimeUnit.SECONDS.toMillis(config.getReadRetryInterval()));
}

public BlockInputStream(BlockID blockId, long blockLen, Pipeline pipeline,
Token<OzoneBlockTokenIdentifier> token,
boolean verifyChecksum,
XceiverClientFactory xceiverClientFactory
) {
this(blockId, blockLen, pipeline, token, verifyChecksum,
xceiverClientFactory, null);
XceiverClientFactory xceiverClientFactory,
OzoneClientConfig config
) throws IOException {
this(blockId, blockLen, pipeline, token,
xceiverClientFactory, null, config);
}
/**
* Initialize the BlockInputStream. Get the BlockData (list of chunks) from
@@ -239,33 +244,28 @@ protected List<ChunkInfo> getChunkInfoList() throws IOException {

@VisibleForTesting
protected List<ChunkInfo> getChunkInfoListUsingClient() throws IOException {
final Pipeline pipeline = xceiverClient.getPipeline();

Pipeline pipeline = pipelineRef.get();
if (LOG.isDebugEnabled()) {
LOG.debug("Initializing BlockInputStream for get key to access {}",
blockID.getContainerID());
}

DatanodeBlockID.Builder blkIDBuilder =
DatanodeBlockID.newBuilder().setContainerID(blockID.getContainerID())
.setLocalID(blockID.getLocalID())
.setBlockCommitSequenceId(blockID.getBlockCommitSequenceId());

int replicaIndex = pipeline.getReplicaIndex(pipeline.getClosestNode());
if (replicaIndex > 0) {
blkIDBuilder.setReplicaIndex(replicaIndex);
LOG.debug("Initializing BlockInputStream for get key to access {} with pipeline {}.",
blockID.getContainerID(), pipeline);
}

GetBlockResponseProto response = ContainerProtocolCalls.getBlock(
xceiverClient, VALIDATORS, blkIDBuilder.build(), tokenRef.get());
xceiverClient, VALIDATORS, blockID, tokenRef.get(), pipeline.getReplicaIndexes());

return response.getBlockData().getChunksList();
}

private void setPipeline(Pipeline pipeline) {
private void setPipeline(Pipeline pipeline) throws IOException {
if (pipeline == null) {
return;
}
long replicaIndexes = pipeline.getNodes().stream().mapToInt(pipeline::getReplicaIndex).distinct().count();

if (replicaIndexes > 1) {
throw new IOException(String.format("Pipeline: %s has nodes containing different replica indexes.",
pipeline));
}

// irrespective of the container state, we will always read via Standalone
// protocol.
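With this change a BlockInputStream no longer receives verifyChecksum directly; checksum verification and the read retry policy are both derived from the OzoneClientConfig passed to the constructor, and the constructor can now throw IOException when the pipeline mixes replica indexes. A hedged sketch of a caller wiring the new signature (the wrapper class and method are illustrative, not from the patch):

import java.io.IOException;
import java.util.function.Function;

import org.apache.hadoop.hdds.client.BlockID;
import org.apache.hadoop.hdds.scm.OzoneClientConfig;
import org.apache.hadoop.hdds.scm.XceiverClientFactory;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
import org.apache.hadoop.hdds.scm.storage.BlockInputStream;
import org.apache.hadoop.hdds.scm.storage.BlockLocationInfo;
import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
import org.apache.hadoop.security.token.Token;

final class BlockStreamWiringSketch {
  private BlockStreamWiringSketch() {
  }

  static BlockInputStream openBlock(BlockID blockId, long blockLen,
      Pipeline pipeline, Token<OzoneBlockTokenIdentifier> token,
      XceiverClientFactory clientFactory,
      Function<BlockID, BlockLocationInfo> refreshFunction,
      OzoneClientConfig clientConfig) throws IOException {
    // verifyChecksum now comes from clientConfig.isChecksumVerify(), and the
    // retry policy from clientConfig.getMaxReadRetryCount() /
    // clientConfig.getReadRetryInterval(); the constructor may also throw
    // IOException if the pipeline's nodes report different replica indexes.
    return new BlockInputStream(blockId, blockLen, pipeline, token,
        clientFactory, refreshFunction, clientConfig);
  }
}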
ChunkInputStream.java
@@ -26,6 +26,8 @@
import org.apache.hadoop.fs.CanUnbuffer;
import org.apache.hadoop.fs.Seekable;
import org.apache.hadoop.hdds.client.BlockID;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
@@ -60,6 +62,7 @@ public class ChunkInputStream extends InputStream
private final ChunkInfo chunkInfo;
private final long length;
private final BlockID blockID;
private ContainerProtos.DatanodeBlockID datanodeBlockID;
private final XceiverClientFactory xceiverClientFactory;
private XceiverClientSpi xceiverClient;
private final Supplier<Pipeline> pipelineSupplier;
@@ -290,13 +293,27 @@ protected synchronized void releaseClient() {
}
}

/**
* Updates datanodeBlockID based on the blockID and the replica index of the pipeline's closest node.
*/
private void updateDatanodeBlockId(Pipeline pipeline) throws IOException {
DatanodeDetails closestNode = pipeline.getClosestNode();
int replicaIdx = pipeline.getReplicaIndex(closestNode);
ContainerProtos.DatanodeBlockID.Builder builder = blockID.getDatanodeBlockIDProtobufBuilder();
if (replicaIdx > 0) {
builder.setReplicaIndex(replicaIdx);
}
datanodeBlockID = builder.build();
}

/**
* Acquire new client if previous one was released.
*/
protected synchronized void acquireClient() throws IOException {
if (xceiverClientFactory != null && xceiverClient == null) {
xceiverClient = xceiverClientFactory.acquireClientForReadData(
pipelineSupplier.get());
Pipeline pipeline = pipelineSupplier.get();
xceiverClient = xceiverClientFactory.acquireClientForReadData(pipeline);
updateDatanodeBlockId(pipeline);
}
}

@@ -422,8 +439,8 @@ protected ByteBuffer[] readChunk(ChunkInfo readChunkInfo)
throws IOException {

ReadChunkResponseProto readChunkResponse =
ContainerProtocolCalls.readChunk(xceiverClient,
readChunkInfo, blockID, validators, tokenSupplier.get());
ContainerProtocolCalls.readChunk(xceiverClient, readChunkInfo, datanodeBlockID, validators,
tokenSupplier.get());

if (readChunkResponse.hasData()) {
return readChunkResponse.getData().asReadOnlyByteBufferList()
ECBlockOutputStream.java
@@ -115,17 +115,28 @@ ContainerCommandResponseProto> executePutBlock(boolean close,
}

BlockData checksumBlockData = null;
BlockID blockID = null;
//Reverse Traversal as all parity will have checksumBytes
for (int i = blockData.length - 1; i >= 0; i--) {
BlockData bd = blockData[i];
if (bd == null) {
continue;
}
if (blockID == null) {
// store the BlockID for logging
blockID = bd.getBlockID();
}
List<ChunkInfo> chunks = bd.getChunks();
if (chunks != null && chunks.size() > 0 && chunks.get(0)
.hasStripeChecksum()) {
checksumBlockData = bd;
break;
if (chunks != null && chunks.size() > 0) {
if (chunks.get(0).hasStripeChecksum()) {
checksumBlockData = bd;
break;
} else {
ChunkInfo chunk = chunks.get(0);
LOG.debug("The first chunk in block with index {} does not have stripeChecksum. BlockID: {}, Block " +
"size: {}. Chunk length: {}, Chunk offset: {}, hasChecksumData: {}, chunks size: {}.", i,
bd.getBlockID(), bd.getSize(), chunk.getLen(), chunk.getOffset(), chunk.hasChecksumData(), chunks.size());
}
}
}

@@ -158,9 +169,8 @@ ContainerCommandResponseProto> executePutBlock(boolean close,
getContainerBlockData().clearChunks();
getContainerBlockData().addAllChunks(newChunkList);
} else {
throw new IOException("None of the block data have checksum " +
"which means " + parity + "(parity)+1 blocks are " +
"not present");
LOG.warn("Could not find checksum data in any index for blockData with BlockID {}, length {} and " +
"blockGroupLength {}.", blockID, blockData.length, blockGroupLength);
}

return executePutBlock(close, force, blockGroupLength);
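The search above walks the block group from the highest index down because the parity replicas are the ones expected to carry a stripe checksum, and with this change a miss is logged and tolerated instead of aborting the putBlock. The same search, condensed into a standalone sketch that uses only the accessors shown above (the helper class and the BlockData import location are assumptions):

import java.util.List;

import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
// Assumed location of the BlockData helper used in the hunk above.
import org.apache.hadoop.ozone.container.common.helpers.BlockData;

final class StripeChecksumSearchSketch {
  private StripeChecksumSearchSketch() {
  }

  /** Scans from the highest index (parity replicas) toward index 0 and returns
   * the first BlockData whose first chunk carries a stripe checksum, or null. */
  static BlockData findStripeChecksumSource(BlockData[] blockData) {
    for (int i = blockData.length - 1; i >= 0; i--) {
      BlockData bd = blockData[i];
      if (bd == null) {
        continue;
      }
      List<ChunkInfo> chunks = bd.getChunks();
      if (chunks != null && !chunks.isEmpty() && chunks.get(0).hasStripeChecksum()) {
        return bd;
      }
    }
    return null; // with this change the caller logs a warning instead of failing
  }
}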
BlockInputStreamFactory.java
@@ -19,13 +19,15 @@

import org.apache.hadoop.hdds.client.BlockID;
import org.apache.hadoop.hdds.client.ReplicationConfig;
import org.apache.hadoop.hdds.scm.OzoneClientConfig;
import org.apache.hadoop.hdds.scm.XceiverClientFactory;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
import org.apache.hadoop.hdds.scm.storage.BlockExtendedInputStream;
import org.apache.hadoop.hdds.scm.storage.BlockLocationInfo;
import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
import org.apache.hadoop.security.token.Token;

import java.io.IOException;
import java.util.function.Function;

/**
@@ -48,8 +50,9 @@ public interface BlockInputStreamFactory {
*/
BlockExtendedInputStream create(ReplicationConfig repConfig,
BlockLocationInfo blockInfo, Pipeline pipeline,
Token<OzoneBlockTokenIdentifier> token, boolean verifyChecksum,
Token<OzoneBlockTokenIdentifier> token,
XceiverClientFactory xceiverFactory,
Function<BlockID, BlockLocationInfo> refreshFunction);
Function<BlockID, BlockLocationInfo> refreshFunction,
OzoneClientConfig config) throws IOException;

}