Closed
Changes from all commits
41 commits
c33f8e0
HDDS-13460. [Docs] S3 secret storage. (#8824)
jojochuang Oct 23, 2025
cc73087
HDDS-13801. [Docs] ozone s3 getsecret command and REST API supports u…
jojochuang Oct 23, 2025
1b595f3
HDDS-13770. SstBackup Directory can have orphan files after bootstrap…
jojochuang Oct 23, 2025
65fb295
HDDS-13639. Optimize container iterator for frequent operation (#9147)
sarvekshayr Oct 24, 2025
b067b70
HDDS-13834. [Ozone 2.1] Update master branch version number (#9196)
chungen0126 Oct 25, 2025
ddef96e
HDDS-13837. Bump jnr-posix to 3.1.21 (#9200)
dependabot[bot] Oct 25, 2025
87dbdcc
HDDS-13838. Bump zstd-jni to 1.5.7-6 (#9197)
dependabot[bot] Oct 25, 2025
7e8e0ad
HDDS-13836. Bump exec-maven-plugin to 3.6.2 (#9199)
dependabot[bot] Oct 25, 2025
1390527
HDDS-13717. Bump Bouncy Castle to 1.82 (#9054)
dependabot[bot] Oct 25, 2025
94ea432
HDDS-13839. Bump awssdk to 2.36.2 (#9198)
dependabot[bot] Oct 27, 2025
798c4aa
HDDS-13843. Add Open Containers card in new UI (#9204).
spacemonkd Oct 28, 2025
d5be986
HDDS-13840. Reset Namespace metadata pagination when path changes (#9…
spacemonkd Oct 28, 2025
b39bac0
HDDS-13783. Implement locks for OmSnapshotLocalDataManager (#9140)
swamirishi Oct 29, 2025
388f3d2
HDDS-13400. S3g has accumulated memory pressure due to unlimited Elas…
Gargi-jais11 Oct 29, 2025
2806bae
HDDS-13004. Snapshot Cache lock on a specific snapshotId (#9210)
swamirishi Oct 29, 2025
d4e7d94
HDDS-13167. Add example for uploading file via HttpFS (#9175)
unknowntpo Oct 29, 2025
8a5c4e8
HDDS-12749. Use EnumCounters instead Map<Type, Integer> for command c…
sarvekshayr Oct 29, 2025
f30870f
HDDS-13841. Namespace summary API gives wrong count of directories an…
ArafatKhan2198 Oct 30, 2025
fb706e7
HDDS-13831. Refine set role logic in getServicelist (#9191)
symious Oct 31, 2025
8bd70b7
HDDS-13833. Add transactionInfo field in SnapshotLocalData and update…
swamirishi Oct 31, 2025
a8b8607
HDDS-13856. Change SstFileInfo to track fileName as the name of the f…
swamirishi Oct 31, 2025
e2e862e
HDDS-13859. OmSnapshotLocalDataManager should handle needsDefrag flag…
swamirishi Oct 31, 2025
72167cf
HDDS-13860. RocksDatabase#open leaks column family handles when faili…
smengcl Oct 31, 2025
c21ec5d
HDDS-13847. Introduce Snapshot Content Lock to lock table contents (#…
swamirishi Nov 1, 2025
833e955
HDDS-13822. Add regression testing for OM epoch and txId calculation …
rich7420 Nov 1, 2025
4d6f3a5
HDDS-13772. Snapshot Paths to be re read from om checkpoint db inside…
sadanand48 Nov 2, 2025
1de5c2f
HDDS-13755. Add doc for ozone sh snapshot listDiff command (#9238)
rich7420 Nov 2, 2025
29a9d0f
HDDS-13871. Bump awssdk to 2.37.3 (#9233)
dependabot[bot] Nov 3, 2025
55bd1f1
HDDS-13851. Remove extra OzoneConfiguration#of from OzoneFileSystem#i…
ivandika3 Nov 3, 2025
25cceef
HDDS-13872. Bump junit to 5.14.1 (#9232)
dependabot[bot] Nov 3, 2025
991a291
HDDS-13485. Reduce duplication between ContainerSafeModeRule tests (#…
kousei47747 Nov 3, 2025
5c35ebb
HDDS-13830. Snapshot Rocks DB directory path computation based on loc…
swamirishi Nov 3, 2025
be8567e
HDDS-13858. Add permission check and test in getFileStatus (#9237)
rich7420 Nov 4, 2025
af123a5
HDDS-13640. Add CLI that allows manually triggering snapshot defrag (…
smengcl Nov 5, 2025
be3b828
HDDS-13823. Initial s3v volume cache entry will not be evicted until …
0lai0 Nov 5, 2025
bc577ae
HDDS-13868. Add unit test coverage for OMNodeDetails (#9245)
0lai0 Nov 5, 2025
51deb3c
HDDS-13826. Move ACL check in OMKeySetTimesRequest (#9192)
ss77892 Nov 5, 2025
5ab59c9
HDDS-13737. S3 ETag JSON should be quoted (#9248)
echonesis Nov 6, 2025
61cf1f7
HDDS-13178. Include block size in delete request and pass it to SCM. …
priyeshkaratha Nov 6, 2025
63cd56c
HDDS-13184. Persist Block Size in Delete Transaction for SCM (#8845)
ChenSammi Aug 6, 2025
5cbed04
fixing errors and conflicts
priyeshkaratha Nov 6, 2025
2 changes: 1 addition & 1 deletion dev-support/pom.xml
@@ -17,7 +17,7 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>ozone-main</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
</parent>
<artifactId>ozone-dev-support</artifactId>
<name>Apache Ozone Dev Support</name>
4 changes: 2 additions & 2 deletions hadoop-hdds/annotations/pom.xml
@@ -17,11 +17,11 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>hdds</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
</parent>

<artifactId>hdds-annotation-processing</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>Apache Ozone Annotation Processing</name>
<description>Apache Ozone annotation processing tools for validating custom
4 changes: 2 additions & 2 deletions hadoop-hdds/client/pom.xml
@@ -17,12 +17,12 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>hdds-hadoop-dependency-client</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<relativePath>../hadoop-dependency-client</relativePath>
</parent>

<artifactId>hdds-client</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>Apache Ozone HDDS Client</name>
<description>Apache Ozone Distributed Data Store Client Library</description>
New file (148 additions): BoundedElasticByteBufferPool.java
@@ -0,0 +1,148 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.hadoop.ozone.client.io;

import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ComparisonChain;
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.hadoop.io.ByteBufferPool;

/**
* A bounded version of ElasticByteBufferPool that limits the total size
* of buffers that can be cached in the pool. This prevents unbounded memory
 * growth in long-lived RPC clients such as the S3 Gateway.
*
* When the pool reaches its maximum size, newly returned buffers are not
* added back to the pool and will be garbage collected instead.
*/
public class BoundedElasticByteBufferPool implements ByteBufferPool {
private final TreeMap<Key, ByteBuffer> buffers = new TreeMap<>();
private final TreeMap<Key, ByteBuffer> directBuffers = new TreeMap<>();
private final long maxPoolSize;
private final AtomicLong currentPoolSize = new AtomicLong(0);

/**
* A logical timestamp counter used for creating unique Keys in the TreeMap.
* This is used as the insertionTime for the Key instead of System.nanoTime()
* to guarantee uniqueness and avoid a potential spin-wait in putBuffer
* if two buffers of the same capacity are added at the same nanosecond.
*/
private long logicalTimestamp = 0;

public BoundedElasticByteBufferPool(long maxPoolSize) {
super();
this.maxPoolSize = maxPoolSize;
}

private TreeMap<Key, ByteBuffer> getBufferTree(boolean direct) {
return direct ? this.directBuffers : this.buffers;
}

@Override
public synchronized ByteBuffer getBuffer(boolean direct, int length) {
TreeMap<Key, ByteBuffer> tree = this.getBufferTree(direct);
Map.Entry<Key, ByteBuffer> entry = tree.ceilingEntry(new Key(length, 0L));
if (entry == null) {
// Pool is empty or has no suitable buffer. Allocate a new one.
return direct ? ByteBuffer.allocateDirect(length) : ByteBuffer.allocate(length);
}
tree.remove(entry.getKey());
ByteBuffer buffer = entry.getValue();

// Decrement the size because we are taking a buffer OUT of the pool.
currentPoolSize.addAndGet(-buffer.capacity());
buffer.clear();
return buffer;
}

@Override
public synchronized void putBuffer(ByteBuffer buffer) {
if (buffer == null) {
return;
}

if (currentPoolSize.get() + buffer.capacity() > maxPoolSize) {
// Pool is full, do not add the buffer back.
      // It will be garbage collected by the JVM.
return;
}

buffer.clear();
TreeMap<Key, ByteBuffer> tree = getBufferTree(buffer.isDirect());
Key key = new Key(buffer.capacity(), logicalTimestamp++);

tree.put(key, buffer);
// Increment the size because we have successfully added buffer back to the pool.
currentPoolSize.addAndGet(buffer.capacity());
}

/**
* Get the current size of buffers in the pool.
*
* @return Current pool size in bytes
*/
@VisibleForTesting
public synchronized long getCurrentPoolSize() {
return currentPoolSize.get();
}

/**
* The Key for the buffer TreeMaps.
* This is copied directly from the original ElasticByteBufferPool.
*/
protected static final class Key implements Comparable<Key> {
private final int capacity;
private final long insertionTime;

Key(int capacity, long insertionTime) {
this.capacity = capacity;
this.insertionTime = insertionTime;
}

@Override
public int compareTo(Key other) {
return ComparisonChain.start()
.compare(this.capacity, other.capacity)
.compare(this.insertionTime, other.insertionTime)
.result();
}

@Override
public boolean equals(Object rhs) {
if (rhs == null) {
return false;
}
try {
Key o = (Key) rhs;
return compareTo(o) == 0;
} catch (ClassCastException e) {
return false;
}
}

@Override
public int hashCode() {
return new HashCodeBuilder().append(capacity).append(insertionTime)
.toHashCode();
}
}
}
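The bounding semantics described in the class comment can be illustrated with a small self-contained sketch. This is a simplified standalone illustration, not the PR's class: `BoundedPoolSketch` and its long-key packing are hypothetical stand-ins for the `TreeMap<Key, ByteBuffer>` and `Key` used above, but the accept/reject rule in `put` mirrors the `currentPoolSize + capacity > maxPoolSize` check.

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical simplified sketch of the bounding logic.
// A long key packs (capacity, insertion order) so that ceilingEntry
// finds the smallest cached buffer with capacity >= the request,
// FIFO among buffers of equal capacity.
public class BoundedPoolSketch {
  private final TreeMap<Long, ByteBuffer> pool = new TreeMap<>();
  private final long maxPoolSize;
  private long currentSize = 0;
  private long tick = 0; // logical timestamp, as in the real pool

  public BoundedPoolSketch(long maxPoolSize) {
    this.maxPoolSize = maxPoolSize;
  }

  private static long key(int capacity, long tick) {
    return ((long) capacity << 24) | (tick & 0xFFFFFF);
  }

  public synchronized ByteBuffer get(int length) {
    Map.Entry<Long, ByteBuffer> e = pool.ceilingEntry(key(length, 0));
    if (e == null) {
      return ByteBuffer.allocate(length); // miss: allocate a fresh buffer
    }
    pool.remove(e.getKey());
    ByteBuffer b = e.getValue();
    currentSize -= b.capacity();
    b.clear();
    return b;
  }

  public synchronized void put(ByteBuffer b) {
    if (b == null || currentSize + b.capacity() > maxPoolSize) {
      return; // over budget: drop the buffer, let GC reclaim it
    }
    b.clear();
    pool.put(key(b.capacity(), tick++), b);
    currentSize += b.capacity();
  }

  public synchronized long size() {
    return currentSize;
  }

  public static void main(String[] args) {
    BoundedPoolSketch p = new BoundedPoolSketch(3L << 20); // 3 MB cap
    p.put(ByteBuffer.allocate(2 << 20)); // cached, total 2 MB
    p.put(ByteBuffer.allocate(1 << 20)); // cached, total 3 MB (at the cap)
    p.put(ByteBuffer.allocate(3 << 20)); // rejected: would exceed the cap
    System.out.println(p.size()); // total cached bytes after the puts
  }
}
```

As in the real pool, a rejected `put` is simply a no-op, so over-budget buffers are left to the garbage collector rather than evicting cached ones.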
New file (121 additions): TestBoundedElasticByteBufferPool.java
@@ -0,0 +1,121 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.hadoop.ozone.client.io;

import java.nio.ByteBuffer;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

/**
* Unit tests for BoundedElasticByteBufferPool.
*/
public class TestBoundedElasticByteBufferPool {

private static final int MB = 1024 * 1024;
private static final long MAX_POOL_SIZE = 3L * MB; // 3MB

@Test
public void testLogicalTimestampOrdering() {
// Pool with plenty of capacity
BoundedElasticByteBufferPool pool = new BoundedElasticByteBufferPool(MAX_POOL_SIZE);
int bufferSize = 5 * 1024; // 5KB

// Create and add three distinct buffers of the same size
ByteBuffer buffer1 = ByteBuffer.allocate(bufferSize);
ByteBuffer buffer2 = ByteBuffer.allocate(bufferSize);
ByteBuffer buffer3 = ByteBuffer.allocate(bufferSize);

// Store their unique identity hash codes
int hash1 = System.identityHashCode(buffer1);
int hash2 = System.identityHashCode(buffer2);
int hash3 = System.identityHashCode(buffer3);

pool.putBuffer(buffer1);
pool.putBuffer(buffer2);
pool.putBuffer(buffer3);

// The pool should now contain 15KB data
Assertions.assertEquals(bufferSize * 3L, pool.getCurrentPoolSize());

// Get the buffers back. They should come back in the same
// order they were put in (FIFO).
ByteBuffer retrieved1 = pool.getBuffer(false, bufferSize);
ByteBuffer retrieved2 = pool.getBuffer(false, bufferSize);
ByteBuffer retrieved3 = pool.getBuffer(false, bufferSize);

// Verify we got the exact same buffer instances back in FIFO order
Assertions.assertEquals(hash1, System.identityHashCode(retrieved1));
Assertions.assertEquals(hash2, System.identityHashCode(retrieved2));
Assertions.assertEquals(hash3, System.identityHashCode(retrieved3));

// The pool should now be empty
Assertions.assertEquals(0, pool.getCurrentPoolSize());
}

/**
* Verifies the core feature: the pool stops caching buffers
* once its maximum size is reached.
*/
@Test
public void testPoolBoundingLogic() {
BoundedElasticByteBufferPool pool = new BoundedElasticByteBufferPool(MAX_POOL_SIZE);

ByteBuffer buffer1 = ByteBuffer.allocate(2 * MB);
ByteBuffer buffer2 = ByteBuffer.allocate(1 * MB);
ByteBuffer buffer3 = ByteBuffer.allocate(3 * MB);

int hash1 = System.identityHashCode(buffer1);
int hash2 = System.identityHashCode(buffer2);
int hash3 = System.identityHashCode(buffer3);

// 1. Put buffer 1 (Pool size: 2MB, remaining: 1MB)
pool.putBuffer(buffer1);
Assertions.assertEquals(2 * MB, pool.getCurrentPoolSize());

// 2. Put buffer 2 (Pool size: 2MB + 1MB = 3MB, remaining: 0)
// The check is (current(2MB) + new(1MB)) > max(3MB), which is false.
// So, the buffer IS added.
pool.putBuffer(buffer2);
Assertions.assertEquals(3 * MB, pool.getCurrentPoolSize());

// 3. Put buffer 3 (Capacity 3MB)
// The check is (current(3MB) + new(3MB)) > max(3MB), which is true.
// This buffer should be REJECTED.
pool.putBuffer(buffer3);
// The pool size should NOT change.
Assertions.assertEquals(3 * MB, pool.getCurrentPoolSize());

// 4. Get buffers back
ByteBuffer retrieved1 = pool.getBuffer(false, 2 * MB);
ByteBuffer retrieved2 = pool.getBuffer(false, 1 * MB);

// The pool should now be empty
Assertions.assertEquals(0, pool.getCurrentPoolSize());

// 5. Ask for a third buffer.
// Since buffer3 was rejected, this should be a NEWLY allocated buffer.
ByteBuffer retrieved3 = pool.getBuffer(false, 3 * MB);

// Verify that we got the first two buffers from the pool
Assertions.assertEquals(hash1, System.identityHashCode(retrieved1));
Assertions.assertEquals(hash2, System.identityHashCode(retrieved2));

// Verify that the third buffer is a NEW instance, not buffer3
Assertions.assertNotEquals(hash3, System.identityHashCode(retrieved3));
}
}
4 changes: 2 additions & 2 deletions hadoop-hdds/common/pom.xml
@@ -17,11 +17,11 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>hdds-hadoop-dependency-client</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<relativePath>../hadoop-dependency-client</relativePath>
</parent>
<artifactId>hdds-common</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>Apache Ozone HDDS Common</name>
<description>Apache Ozone Distributed Data Store Common</description>
@@ -42,7 +42,8 @@ public enum HDDSLayoutFeature implements LayoutFeature {
"to DatanodeDetails."),
HBASE_SUPPORT(8, "Datanode RocksDB Schema Version 3 has an extra table " +
"for the last chunk of blocks to support HBase.)"),
WITNESSED_CONTAINER_DB_PROTO_VALUE(9, "ContainerID table schema to use value type as proto");
WITNESSED_CONTAINER_DB_PROTO_VALUE(9, "ContainerID table schema to use value type as proto"),
STORAGE_DATA_DISTRIBUTION(10, "ContainerID table schema to use value type as proto");
Review comment (medium): The description for STORAGE_DATA_DISTRIBUTION appears to be a copy-paste from the WITNESSED_CONTAINER_DB_PROTO_VALUE enum constant. Please update it to accurately describe the new layout feature, which seems to be related to persisting block sizes in deleted block transactions.

Suggested change:
- STORAGE_DATA_DISTRIBUTION(10, "ContainerID table schema to use value type as proto");
+ STORAGE_DATA_DISTRIBUTION(10, "Persist block size in deleted block transactions.");

////////////////////////////// //////////////////////////////

@@ -690,6 +690,10 @@ public final class OzoneConfigKeys {
"ozone.security.crypto.compliance.mode";
public static final String OZONE_SECURITY_CRYPTO_COMPLIANCE_MODE_UNRESTRICTED = "unrestricted";

public static final String OZONE_CLIENT_ELASTIC_BYTE_BUFFER_POOL_MAX_SIZE =
"ozone.client.elastic.byte.buffer.pool.max.size";
public static final String OZONE_CLIENT_ELASTIC_BYTE_BUFFER_POOL_MAX_SIZE_DEFAULT = "16GB";

/**
* There is no need to instantiate this class.
*/
@@ -221,6 +221,7 @@ public final class OzoneConsts {
public static final String OM_SST_FILE_INFO_START_KEY = "startKey";
public static final String OM_SST_FILE_INFO_END_KEY = "endKey";
public static final String OM_SST_FILE_INFO_COL_FAMILY = "columnFamily";
public static final String OM_SLD_TXN_INFO = "transactionInfo";

// YAML fields for .container files
public static final String CONTAINER_ID = "containerID";
12 changes: 12 additions & 0 deletions hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -465,6 +465,18 @@
<description>Socket timeout for Ozone client. Unit could be defined with
postfix (ns,ms,s,m,h,d)</description>
</property>
<property>
<name>ozone.client.elastic.byte.buffer.pool.max.size</name>
<value>16GB</value>
<tag>OZONE, CLIENT</tag>
<description>
The maximum total size of buffers that can be cached in the client-side
ByteBufferPool. This pool is used heavily during EC read and write operations.
      Setting a limit prevents unbounded memory growth in long-lived RPC
      clients such as the S3 Gateway. Once this limit is reached, returned
      buffers are not put back into the pool and are garbage collected instead.
</description>
</property>
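The default value uses a size suffix ("16GB"). A hypothetical helper shows one way such a string could be converted to bytes; Ozone's real configuration machinery has its own storage-size parsing, so this standalone `parseSize` is only an illustration, assuming binary (power-of-two) units:

```java
// Hypothetical sketch: turn a "16GB"-style config value into bytes.
// Binary units (KB = 2^10, MB = 2^20, GB = 2^30) are assumed here.
public class SizeParseSketch {
  public static long parseSize(String s) {
    String v = s.trim().toUpperCase();
    long mult = 1L;
    if (v.endsWith("GB")) { mult = 1L << 30; v = v.substring(0, v.length() - 2); }
    else if (v.endsWith("MB")) { mult = 1L << 20; v = v.substring(0, v.length() - 2); }
    else if (v.endsWith("KB")) { mult = 1L << 10; v = v.substring(0, v.length() - 2); }
    else if (v.endsWith("B")) { v = v.substring(0, v.length() - 1); }
    return Long.parseLong(v.trim()) * mult;
  }

  public static void main(String[] args) {
    // Under this scheme the default "16GB" is 16 * 2^30 bytes.
    System.out.println(parseSize("16GB"));
  }
}
```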
<property>
<name>ozone.key.deleting.limit.per.task</name>
<value>50000</value>
4 changes: 2 additions & 2 deletions hadoop-hdds/config/pom.xml
@@ -17,10 +17,10 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>hdds</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
</parent>
<artifactId>hdds-config</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>Apache Ozone HDDS Config</name>
<description>Apache Ozone Distributed Data Store Config Tools</description>
4 changes: 2 additions & 2 deletions hadoop-hdds/container-service/pom.xml
@@ -17,10 +17,10 @@
<parent>
<groupId>org.apache.ozone</groupId>
<artifactId>hdds</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
</parent>
<artifactId>hdds-container-service</artifactId>
<version>2.1.0-SNAPSHOT</version>
<version>2.2.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>Apache Ozone HDDS Container Service</name>
<description>Apache Ozone Distributed Data Store Container Service</description>