Conversation

@aswinshakil (Member)

What changes were proposed in this pull request?

Currently, we can reserve space on a volume using hdds.datanode.dir.du.reserved, a key-value pair that specifies a volume and the space to be reserved on it (e.g. data1:5000MB). If there are multiple volumes, we have to add a volume:reserved pair for each one. This PR adds a configuration that reserves a percentage of space on all volumes in the Datanode.
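For illustration, a minimal sketch of the two configuration styles using OzoneConfiguration. The percentage key name below is an assumption inferred from the PR title; check the merged change for the exact key.

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;

    public class ReservedSpaceConfigSketch {
      public static void main(String[] args) {
        OzoneConfiguration conf = new OzoneConfiguration();

        // Existing style: one volume:reserved pair per volume.
        conf.set("hdds.datanode.dir.du.reserved",
            "/data1:5000MB,/data2:5000MB");

        // New style from this PR: one percentage applied to every volume
        // (key name assumed from the PR title; e.g. 0.05 reserves 5%).
        // Note: per the review discussion below, only one of the two
        // configs should be set at a time.
        conf.set("hdds.datanode.dir.du.reserved.percent", "0.05");
      }
    }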

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-6901

How was this patch tested?

The patch was tested manually using Docker, and with unit tests.

@errose28 (Contributor) left a comment

Thanks for working on this @aswinshakil. I've added some comments in addition to Ritesh's.

continue;
}
// Both the configs are set. Log it and return 0.
if (reserveList.size() > 0 && percentage != defaultValue) {
@kerneltime (Contributor) Jun 27, 2022

Should the behavior be to pick the max of the two values? That would be less problematic than not honoring any reserved space.

@aswinshakil (Member, Author)

The behavior should honor only one config: reserved space is set either per volume or as a percentage for all volumes, not both.
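A minimal Java sketch of that precedence rule, under the assumption that the method and variable names here (reservedBytes, reserveList, percent) are illustrative and not the PR's actual identifiers:

    import java.util.List;

    public class ReservedSpacePrecedenceSketch {

      // Honor exactly one of the two reserved-space configs; if both are
      // set, warn and reserve nothing, as described in the thread above.
      static long reservedBytes(long capacityBytes, List<String> reserveList,
          double percent, double percentDefault) {
        boolean perVolumeSet = !reserveList.isEmpty();
        boolean percentSet = percent != percentDefault;
        if (perVolumeSet && percentSet) {
          System.err.println("Both per-volume and percentage reserved-space"
              + " configs are set; ignoring both.");
          return 0;
        }
        if (percentSet) {
          return (long) (capacityBytes * percent);
        }
        if (perVolumeSet) {
          // Per-volume parsing of entries like "data1:5000MB" omitted.
          return 0;
        }
        return 0;
      }

      public static void main(String[] args) {
        // 10 GB volume with a 5% reservation -> 500,000,000 bytes reserved.
        System.out.println(reservedBytes(10_000_000_000L, List.of(), 0.05, 0.0));
      }
    }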

Aswin Shakil Balasubramanian and others added 2 commits June 27, 2022 22:49
@kerneltime kerneltime merged commit 5d87d0e into apache:master Jul 11, 2022
@kerneltime (Contributor)

Thank you @aswinshakil for the contribution.

errose28 added a commit to errose28/ozone that referenced this pull request Jul 12, 2022
* master: (46 commits)
  HDDS-6901. Configure HDDS volume reserved as percentage of the volume space. (apache#3532)
  HDDS-6978. EC: Cleanup RECOVERING container on DN restarts (apache#3585)
  HDDS-6982. EC: Attempt to cleanup the RECOVERING container when reconstruction failed at coordinator. (apache#3583)
  HDDS-6968. Addendum: [Multi-Tenant] Fix USER_MISMATCH error even on correct user. (apache#3578)
  HDDS-6794. EC: Analyze and add putBlock even on non writing node in the case of partial single stripe. (apache#3514)
  HDDS-6900. Propagate TimeoutException for all SCM HA Ratis calls. (apache#3564)
  HDDS-6938. handle NPE when removing prefixAcl (apache#3568)
  HDDS-6960. EC: Implement the Over-replication Handler (apache#3572)
  HDDS-6979. Remove unused plexus dependency declaration (apache#3579)
  HDDS-6957. EC: ReplicationManager - priortise under replicated containers (apache#3574)
  HDDS-6723. Close Rocks objects properly in OzoneManager (apache#3400)
  HDDS-6942. Ozone Buckets/Objects created via S3 should not allow group access (apache#3553)
  HDDS-6965. Increase timeout for basic check (apache#3563)
  HDDS-6969. Add link to compose directory in smoketest README (apache#3567)
  HDDS-6970. EC: Ensure DatanodeAdminMonitor can handle EC containers during decommission (apache#3573)
  HDDS-6977. EC: Remove references to ContainerReplicaPendingOps in TestECContainerReplicaCount (apache#3575)
  HDDS-6217. Cleanup XceiverClientGrpc TODOs, and document how the client works and should be used. (apache#3012)
  HDDS-6773. Cleanup TestRDBTableStore (apache#3434) - fix checkstyle
  HDDS-6773. Cleanup TestRDBTableStore (apache#3434)
  HDDS-6676. KeyValueContainerData#getProtoBufMessage() should set block count (apache#3371)
  ...

Conflicts:
    hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/upgrade/SCMUpgradeFinalizer.java
duongkame pushed a commit to duongkame/ozone that referenced this pull request Aug 16, 2022
HDDS-6901. Configure HDDS volume reserved as percentage of the volume space. apache#3532

Change-Id: Ifd904a9213c4ef18feed91879a15eea5ba6ea5ee