HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list #1096
Conversation
@nandakumar131 Would you like to take a look at this PR? Thanks.
Force-pushed from 087cefa to 370cbb0
Hi @elek, would you like to take a look at this PR?
@xiaoyuyao Please take a look at this PR, thanks.
avijayanhwx
left a comment
Thanks for working on this @maobaolong. The approach looks good to me. I have added a few comments. Also, can we add unit test(s) for this feature?
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@maobaolong For my knowledge, can you list a few pipeline selection policies that you envision? I am wondering if we are OK with just a List<Pipeline> as an argument.
@avijayanhwx Thank you for your suggestion. I added an arguments map; for extensibility, I used Object as the value type.
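To make the discussion concrete, here is a minimal sketch of the policy contract being discussed: candidate pipelines plus an open-ended argument map with Object values so future policies can pass extra hints. The class and method names below are illustrative assumptions, not the exact Ozone code.

```java
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Ozone's Pipeline class, for illustration only.
class Pipeline {
  private final String id;
  Pipeline(String id) { this.id = id; }
  String getId() { return id; }
}

// Sketch of the pluggable policy contract: the open pipeline list plus an
// extensible argument map (Object values) for policy-specific parameters.
interface PipelineChoosePolicy {
  Pipeline choosePipeline(List<Pipeline> pipelineList, Map<String, Object> args);
}

// A trivial example policy that always picks the first candidate.
class FirstMatchPolicy implements PipelineChoosePolicy {
  @Override
  public Pipeline choosePipeline(List<Pipeline> pipelineList,
      Map<String, Object> args) {
    return pipelineList.isEmpty() ? null : pipelineList.get(0);
  }
}
```

Under this shape, adding a new selection strategy is just another `PipelineChoosePolicy` implementation; callers never change.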
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
.../java/org/apache/hadoop/hdds/scm/pipeline/choose/algorithms/PipelineChoosePolicyFactory.java
Force-pushed from 8dfadd9 to 3596e41
@avijayanhwx Thanks for your previous review and suggestions. I pushed a new commit addressing your comments, PTAL.
elek
left a comment
Thanks for the patch @maobaolong. Overall it looks good to me. I have a few comments inside about some technical details.
value.
</description>
</property>
<property>
I would prefer to use the new, Java-based configuration API:
https://cwiki.apache.org/confluence/display/HADOOP/Java-based+configuration+API
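The Java-based configuration API referenced above replaces XML property definitions with annotated Java classes. The snippet below is a self-contained, simplified illustration of that pattern: the `@Config` annotation, field names, and the config key shown are stand-ins defined here for demonstration, not the actual `org.apache.hadoop.hdds.conf` annotations.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

// Minimal stand-in for an annotation-driven configuration API.
// The real Hadoop/Ozone annotations live in org.apache.hadoop.hdds.conf.
@Retention(RetentionPolicy.RUNTIME)
@interface Config {
  String key();
  String defaultValue();
  String description();
}

class ScmPipelineConfig {
  // Hypothetical key name, shown only to illustrate the pattern: the key,
  // default, and description live next to the field instead of in XML.
  @Config(key = "ozone.scm.pipeline.choose.policy.impl",
      defaultValue = "RandomPipelineChoosePolicy",
      description = "Policy used to pick a pipeline from the open pipeline list.")
  String choosePolicy;
}

class ConfigLoader {
  // Populate annotated fields with their defaults. A real loader would
  // consult the configuration source first and fall back to defaultValue.
  static <T> T loadDefaults(Class<T> clazz) {
    try {
      T instance = clazz.getDeclaredConstructor().newInstance();
      for (Field f : clazz.getDeclaredFields()) {
        Config c = f.getAnnotation(Config.class);
        if (c != null) {
          f.setAccessible(true);
          f.set(instance, c.defaultValue());
        }
      }
      return instance;
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }
}
```

The benefit over ozone-default.xml entries is that key, default, type, and description are declared once, in typed Java code, next to the component that consumes them.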
{
  "name": "INTERNAL_ERROR",
  "integer": 29
},
The proto.lock file change should be excluded: proto.lock should represent an old state (the last released state) to use as a base for comparison.
Rebase onto the latest master and you won't see the changed proto.lock files (the current setup makes it easy to add lock file changes accidentally).
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@maobaolong There are still some unaddressed comments here.
Force-pushed from 1cb6a2e to 399d574
@avijayanhwx @elek I've addressed all your comments and added some unit tests, PTAL.
avijayanhwx
left a comment
Thanks for the changes @maobaolong.
+1, thanks for the update.
@elek Thank you very much for merging this PR. Based on this framework, we can create various pipeline choose policies.
* master: (28 commits)
  HDDS-4037. Incorrect container numberOfKeys and usedBytes in SCM after key deletion (apache#1295)
  HDDS-3232. Include the byteman scripts in the distribution tar file (apache#1309)
  HDDS-4095. Byteman script to debug HCFS performance (apache#1311)
  HDDS-4057. Failed acceptance test missing from bundle (apache#1283)
  HDDS-4040. [OFS] BasicRootedOzoneFileSystem to support batchDelete (apache#1286)
  HDDS-4061. Pending delete blocks are not always included in #BLOCKCOUNT metadata (apache#1288)
  HDDS-4067. Implement toString for OMTransactionInfo (apache#1300)
  HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config (apache#1149)
  HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list (apache#1096)
  HDDS-3979. Make bufferSize configurable for stream copy (apache#1212)
  HDDS-4048. Show more information while SCM version info mismatch (apache#1278)
  HDDS-4078. Use HDDS InterfaceAudience/Stability annotations (apache#1302)
  HDDS-4034. Add Unit Test for HadoopNestedDirGenerator. (apache#1266)
  HDDS-4076. Translate CSI.md into Chinese (apache#1299)
  HDDS-4046. Extensible subcommands for CLI applications (apache#1276)
  HDDS-4051. Remove whitelist/blacklist terminology from Ozone (apache#1306)
  HDDS-4055. Cleanup GitHub workflow (apache#1282)
  HDDS-4042. Update documentation for the GA release (apache#1269)
  HDDS-4066. Add core-site.xml to intellij configuration (apache#1292)
  HDDS-4073. Remove leftover robot.robot (apache#1297)
  ...
What changes were proposed in this pull request?
With this policy-driven mode, we can develop various pipeline choosing policies to satisfy complex production environments.
The default policy is the Random policy.
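As a rough illustration of what the default Random policy does, the sketch below picks one entry uniformly at random from the open pipeline list. The class name, method signature, and use of plain strings for pipeline ids are assumptions made for a self-contained example; the actual Ozone implementation differs in its types.

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch of a random pipeline choose policy (not the exact
// Ozone implementation): pick uniformly at random from the candidates.
class RandomChoosePolicy {
  private final Random random = new Random();

  String choosePipeline(List<String> pipelineIds) {
    if (pipelineIds == null || pipelineIds.isEmpty()) {
      throw new IllegalArgumentException("No pipelines available");
    }
    // nextInt(bound) returns a value in [0, bound), so this is a valid index.
    return pipelineIds.get(random.nextInt(pipelineIds.size()));
  }
}
```

Random selection spreads load across pipelines without tracking any state; policies that weigh capacity or utilization would plug into the same interface.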
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-3833
How was this patch tested?
CI checks.