HDDS-12928. datanode min free space configuration #8388
Conversation
As per the community discussion and the discussion above, Approach 2 is better. Another point came up for the default min.free.space -- instead of
import static org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration.FAILED_DB_VOLUMES_TOLERATED_KEY;
import static org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration.FAILED_METADATA_VOLUMES_TOLERATED_KEY;
import static org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration.FAILED_VOLUMES_TOLERATED_DEFAULT;
import static org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration.HDDS_DATANODE_VOLUME_MIN_FREE_SPACE_PERCENT_DEFAULT;
Please add a new unit test which doesn't explicitly set either of the two properties.
This is covered by org.apache.hadoop.ozone.container.common.statemachine.TestDatanodeConfiguration#isCreatedWitDefaultValues
isCreatedWitDefaultValues unsets DatanodeConfiguration.HDDS_DATANODE_VOLUME_MIN_FREE_SPACE.
Unsetting ensures that the default value is used in the Ozone configuration, right?
The unset is done for the ozone-site.xml defined in the test module, so that the default value is used if the property is not defined there. Comment added.
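For reference, here is a minimal sketch (assumptions only, not the PR's actual test) of a unit test that leaves both min-free-space properties unset so the compiled-in defaults apply; the class layout and the deliberately weak assertion are placeholders.

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.container.common.statemachine.DatanodeConfiguration;
import org.junit.jupiter.api.Test;

class TestDatanodeConfigurationDefaults {

  @Test
  void usesDefaultsWhenNeitherMinFreeSpacePropertyIsSet() {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Unset the value that a test-module ozone-site.xml may define,
    // so the default compiled into DatanodeConfiguration is used.
    conf.unset(DatanodeConfiguration.HDDS_DATANODE_VOLUME_MIN_FREE_SPACE);
    DatanodeConfiguration subject = conf.getObject(DatanodeConfiguration.class);
    // With neither property set explicitly, the defaults should apply.
    assertNotNull(subject);
  }
}
```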
 @Test
-void useMinFreeSpaceIfBothMinFreeSpacePropertiesSet() {
+void useMaxAsPercentIfBothMinFreeSpacePropertiesSet() {
useMaxAsPercentIfBothMinFreeSpacePropertiesSet -> useMaxIfBothMinFreeSpacePropertiesSet
done
I think for small changes, it should be ok to put the design doc and the code change together. If the PR is churning on the design doc we can keep the PR in draft mode and switch it to ready once the design doc looks good and the code works in the fork. This would make it much easier to browse the git log and understand the code changes done in one go. From a usability standpoint I would much rather have a small design doc live together with a code change when browsing history.
done
That's a good point. It should be fine if developers can follow one of:
smengcl left a comment
Thanks @sumitagrawl for the patch.
This case is more useful for test environments where disk space is limited and no additional configuration is needed.

# Conclusion
1. Going with Approach 1
I think supporting both explicit size and percent is good, but there are a few issues still not addressed:
- Do we only want to support setting one global size for all volumes, or supporting individual volume configs?
- If we are adjusting how hdds.datanode.volume.min.free.space works, we should also adjust hdds.datanode.dir.du.reserved to support configuration in a consistent way.
- It is bad UX to have two different configs (percent and value) for the same thing. The user has no intuition as to what happens when both are configured.
- Having a max function buried in the code to resolve this instead of making them exclusive is even worse.
Probably the most user friendly thing to do is deprecate the percent config keys and have one config that takes either a size or percent based value. Whether we want to continue supporting individual volume mappings in the config is still an open question that needs to be resolved in this proposal.
@errose28
The config applies to each volume as a global config only; it does not support individual volumes, since the min-free-space that Ozone maintains is global in nature for each volume.
Simplification of the hdds.datanode.dir.du.reserved config is not in scope of this JIRA/PR.
Using two configs was discussed in the community meeting, and it was concluded to have both. If there are concerns now, it needs to be re-discussed with the community again.
Single config: Approach 2 was not favored by the majority, hence we went with Approach 1 taking the max of the two. I have updated the design doc with pros/cons for both Approach 1 and Approach 2.
Using two configs was discussed in the community meeting, and it was concluded to have both. If there are concerns now, it needs to be re-discussed with the community again.
Community meetings are for synchronous discussion, not definitive decisions. There are many other forums (mailing list, PRs, Jira, Github discussion). I think this kind of issue is fine for discussion in PR. If you are concerned about visibility, please discuss on mailing list.
@errose28 after discussing with the community, we will go with Approach 1 only.
- The purpose of the du.reserved config is to identify the disk space to be reserved for applications sharing the disk, hence it is at the disk level. But here, since it is Ozone-managed space, this needs to be a flat configuration. So the two need not be the same.
- For simplicity, the min.free.space config is at the global level and may not need to be at the disk level like reserved.
- Taking the max of min.free.space and percent: min.free.space represents the minimum threshold for most disk sizes, while the percent covers disks that are exceptionally large (see the sketch below).
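As a rough illustration of this max-based resolution (a sketch under assumed names, not the PR's actual code), the effective reserve for a volume is the larger of the fixed minimum and the percentage of its capacity:

```java
// Sketch only: method and parameter names are illustrative.
// Effective min free space = max(fixed size, percent of volume capacity).
static long effectiveMinFreeSpace(long minFreeSpaceBytes,
    double minFreeSpacePercent, long volumeCapacityBytes) {
  long percentBased = (long) (volumeCapacityBytes * minFreeSpacePercent);
  return Math.max(minFreeSpaceBytes, percentBased);
}
```

For example, with an assumed 5 GB fixed minimum and a 0.1% percentage, a 1 TB volume would reserve 5 GB (the fixed value wins), while a 100 TB volume would reserve about 100 GB (the percentage wins).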
Let me try to add some guiding principles for config modifications which can help us compare one decision or another. The following are usability issues that can occur with config keys:
1. Inconsistent config format: configs that operate on similar entities (space usage, address + port, percentages) but read those values differently.
2. Hidden config dependencies: when one configuration whose value is unchanged functions differently based on the value applied to a different config.
   - This does not include invalid config combinations that fail component startup, since that is easily caught and called out with an error message. We know that no actively running system will have this configuration.
Both hdds.datanode.du.reserved{.percent} and hdds.datanode.min.free.space{.percent} have issues here, and this is our chance to fix them. Now let's look at how our options either help or hurt the above points.
Inconsistent config format
hdds.datanode.du.reserved and hdds.datanode.min.free.space are both used to configure space reservation on datanode drives, so as stated in point 1 it is most intuitive if they accept the same value format. It is ok if one format is more useful for one than another. For example per-volume configuration may be required for hdds.datanode.du.reserved but not for hdds.datanode.min.free.space. It's still ok for both to have that option because it is not invalid for hdds.datanode.min.free.space, there is still only one set of formatting options for users to remember, and only one parser in the code. If we pick and choose different valid formats for each config we will have two formats to remember and two parsers in the code. Therefore even removing allowed config formats from hdds.datanode.min.free.space that are still present in hdds.datanode.du.reserved actually adds complexity. Based on this hdds.datanode.du.reserved and hdds.datanode.min.free.space must accept values of the same format to avoid introducing new config usability problems.
Hidden config dependencies
Next let's look at how the percent variations affect point 2. Anything other than failing startup if the percent and non-percent variations are specified creates this problem, so if a percent and non-percent config key are given like hdds.datanode.min.free.space.percent and hdds.datanode.min.free.space it must be considered invalid and fail the datanode.
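A minimal sketch of such fail-fast validation (the helper is hypothetical; the key strings follow the shorthand used in this thread and may differ from the actual property names):

```java
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Hypothetical startup check: reject the configuration if both the
// size-based and the percent-based key are explicitly set.
static void validateMinFreeSpaceConfig(OzoneConfiguration conf) {
  boolean sizeSet = conf.get("hdds.datanode.min.free.space") != null;
  boolean percentSet = conf.get("hdds.datanode.min.free.space.percent") != null;
  if (sizeSet && percentSet) {
    throw new IllegalArgumentException(
        "Set only one of hdds.datanode.min.free.space and "
            + "hdds.datanode.min.free.space.percent");
  }
}
```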
There is another option though: get rid of the percentage specific config keys but still support percentage based configuration with the one hdds.datanode.min.free.space config. Let's look at why this works:
- hdds.datanode.du.reserved needs to support volume-specific configuration in the form of <volume-path>:reserved-size since not all volumes may be used as spill for compute, or the volumes may be utilized differently.
  - This means we will always have a parsing method like VolumeUsage#getReserved to handle converting config strings into long values for a volume.
- hdds.datanode.min.free.space and hdds.datanode.du.reserved should support the same value format, so hdds.datanode.min.free.space also needs to use this same parser.
  - If we already need a string parser for both configs, we might as well make it differentiate between percentage and size based configs too.
Proposal to address all requirements
The following layout meets all the constraints defined above:
- Only two config keys: hdds.datanode.min.free.space and hdds.datanode.du.reserved
- The valid formats for either config key are:
  - A fixed size, like 20GB
  - A percentage as a float, like 0.001. The lack of a unit differentiates it from the first option.
  - A mapping of volumes to sizes, like /data/hdds1:20GB,/data/hdds2:10GB
- Only one parser is required for both types of configs (a rough sketch follows below).
  - This is not new since a parser is already required and cannot be removed without removing support for per-volume configuration in hdds.datanode.du.reserved.
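To make the proposed single value schema concrete, here is a rough, hypothetical parser sketch (the method names, the toy size parser, and the regex are invented for illustration; real code would reuse an existing helper such as the one behind VolumeUsage#getReserved):

```java
// Hypothetical parser for the proposed unified value format:
//   "20GB"                              -> fixed size for all volumes
//   "0.001"                             -> fraction of each volume's capacity
//   "/data/hdds1:20GB,/data/hdds2:10GB" -> per-volume mapping
static long resolveReservedBytes(String value, String volumePath, long capacityBytes) {
  for (String entry : value.split(",")) {
    int sep = entry.lastIndexOf(':');
    String path = sep > 0 ? entry.substring(0, sep).trim() : null;
    String amount = (sep > 0 ? entry.substring(sep + 1) : entry).trim();
    if (path != null && !path.equals(volumePath)) {
      continue; // mapping entry for a different volume
    }
    if (amount.matches("0?\\.\\d+")) {
      // A bare fraction (no unit) is read as a percentage of this volume's capacity.
      return (long) (capacityBytes * Double.parseDouble(amount));
    }
    return parseSizeBytes(amount);
  }
  return 0; // nothing configured that applies to this volume
}

// Toy size parser for the sketch only.
static long parseSizeBytes(String s) {
  String u = s.trim().toUpperCase();
  if (u.endsWith("GB")) {
    return Long.parseLong(u.substring(0, u.length() - 2).trim()) * 1024L * 1024 * 1024;
  }
  if (u.endsWith("MB")) {
    return Long.parseLong(u.substring(0, u.length() - 2).trim()) * 1024L * 1024;
  }
  return Long.parseLong(u);
}
```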
We should never introduce usability issues in our configurations. We have enough of them already : ) If you can show how an alternate proposal meets all the configuration requirements without impacting usability we can consider that as well, but currently none of the proposals in the doc satisfy this.
@errose28 You mean we need to have another config for min.free.space?
min.free.space.volumes: /data/hdds1:20GB,/data/hdds2:10GB -- similar to du.reserved?
I do not feel we should go with this approach just in the name of having similar configs for space; these serve different purposes. Making them similar only because both represent free space will make configuration complex for min.free.space, as the user would need to configure every disk. There is no use case for this for min.free.space so far.
I do not agree with this approach. In the future, if there is a need for volume mapping for min.free.space, we can add it as a separate requirement and handle it then.
Please share your suggestion for how this PR can be merged.
Adding one more to @errose28's list of requirements: cross-compatibility. When extending the possible values allowed for existing configuration, e.g.:
- adding a suffix
- starting to support percentages
- allowing a list of items instead of a single one
we need to consider that an old version may encounter values understood only by the new one, and fail. (See HDDS-13077 for a specific example.)
In such cases it may be better to deprecate the existing config properties and add new one(s).
@sumitagrawl please re-read the Proposal to address all requirements section in my reply. I think this very clearly states the proposal but the things you are referring to in your reply are not mentioned there.
You mean we need have another config for min.free.space?
No, two configs, one for min free space and one for DU reserved that each use the same value schema. I very clearly said in the previous response "Only two config keys: hdds.datanode.min.free.space and hdds.datanode.du.reserved".
I do not feel we should go with this approach just in the name of having similar configs for space; these serve different purposes.
This is your take as developer. You need to look at this from a user's perspective. Our consistent failure to consider this perspective is why the system is difficult to use. Configs representing the same "type" of configuration, be it an address, percentage, disk space, time duration, etc must accept the same types of values. Users are not going to understand the nuance of why two similar configs accept different value formats, and in a few months I probably won't either.
Making them similar only because both represent free space will make configuration complex for min.free.space, as the user would need to configure every disk.
This is not part of the proposal. Please re-read it. Min space can be configured with one value across all disks, OR it can use a volume mapping.
There is no use case for this for min.free.space so far.
Lack of use case is not a valid reason to create a separate value schema for configs that work on the same type. There is also no use case for setting hdds.heartbeat.interval to 7d, but the same value makes perfect sense for hdds.container.scrub.data.scan.interval. Yet they use the same value schema because they both represent time intervals. Your suggestion is analogous to rejecting the d suffix for hdds.heartbeat.interval because it would never be set that long.
we need to consider that an old version may encounter values understood only by the new one, and fail.
We definitely need to formalize our configuration compatibility guarantees. This probably warrants a dedicated discussion somewhere more visible. My initial take is that we should always support "new software old config", but that supporting "old software new config" is not sustainable because it closes our config for extensions. Especially on the server side this would seem like a deployment error. Maybe our client side config compat guarantees would be different from the server.
No, two configs, one for min free space and one for DU reserved that each use the same value schema
DU reserved is a special case carried over from Hadoop, for the case where the disk is shared by another application. It may not be required to have the same value schema. It needs user input per disk, since sharing may differ across disks, so that schema is specialized. They are not of the same type.
This is your take as developer. You need to look at this from a user's perspective. Our consistent failure to consider this perspective is why the system is difficult to use.
From the user's perspective, the user has no knowledge of how to configure min-free-space; this is more internal to how Ozone works.
volume mapping
This could be an additional config added later on a need basis. Maybe we should not add it just based on intuition, as it may end up being a dead config. Please share any possible use case in a practical environment, and we can take this up as an enhancement.
Per-disk configuration is an abomination that stems from needing to run other applications on nodes/drives along with HDFS in the past. It makes sense for the du config, where essentially we tell the Datanode to spare a few drives. This is very different from the min configuration, which has to do with operations and uptime of applications. We must keep the min configuration the same across all drives, as it has to do with space for repairs and recovery and nothing to do with configuring the cluster to co-exist with peer applications.
I am all for consistency but in this case it implies a capability that I am not sure we wish to implement.
@errose28 @kerneltime please share your opinion; based on this I will merge this PR.
…#8388) (cherry picked from commit b8b226c)
Conflicts:
  hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeConfiguration.java
  hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/TestDatanodeConfiguration.java
  hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestReplicationSupervisor.java
  hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
  hadoop-ozone/dist/src/main/k8s/examples/getting-started/config-configmap.yaml
  hadoop-ozone/dist/src/main/k8s/examples/minikube/config-configmap.yaml
  hadoop-ozone/dist/src/main/k8s/examples/ozone-dev/config-configmap.yaml
  hadoop-ozone/dist/src/main/k8s/examples/ozone-ha/config-configmap.yaml
  hadoop-ozone/dist/src/main/k8s/examples/ozone/config-configmap.yaml
  hadoop-ozone/integration-test-recon/src/test/resources/ozone-site.xml
  hadoop-ozone/integration-test-s3/src/test/resources/ozone-site.xml
  hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestRefreshVolumeUsageHandler.java
What changes were proposed in this pull request?
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-12928
How was this patch tested?