HDDS-4194. Create a script to check AWS S3 compatibility #1383
Conversation
adoroszlai left a comment
Thanks @elek for working on this. LGTM, just minor comments about the script.
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# shellcheck disable=SC2086
Is the lack of quotes (flagged by SC2086) necessary for the script to work?
The situation is the opposite: the quotes are not needed to make it safe. None of the environment variables are file names, and none of them can contain spaces, so there is no reason to add quotes.
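To illustrate what the SC2086 discussion is about, here is a minimal sketch (not taken from the patch; the variable name is made up) of the word splitting that ShellCheck warns against when a variable is left unquoted:

```shell
#!/usr/bin/env bash
# SC2086 warns that an unquoted variable expansion undergoes word
# splitting and globbing. That matters for file names with spaces,
# but not for values that are known to contain no spaces or globs.

FILE_NAME="my file.txt"

# Unquoted: splits into two words, "my" and "file.txt".
set -- $FILE_NAME
echo "unquoted word count: $#"   # prints 2

# Quoted: stays a single word.
set -- "$FILE_NAME"
echo "quoted word count: $#"     # prints 1
```

This is why the reviewer's question and the reply above hinge on whether any of the variables could ever contain spaces.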
hadoop-ozone/dist/src/main/smoketest/s3/s3_compatbility_check.sh
adoroszlai left a comment
Thanks @elek for updating the patch.
* master:
  - HDDS-4102. Normalize Keypath for lookupKey. (apache#1328)
  - HDDS-4263. ReplicatiomManager shouldn't consider origin node Id for CLOSED containers. (apache#1438)
  - HDDS-4282. Improve the emptyDir syntax (apache#1450)
  - HDDS-4194. Create a script to check AWS S3 compatibility (apache#1383)
  - HDDS-4270. Add more reusable byteman scripts to debug ofs/o3fs performance (apache#1443)
  - HDDS-2660. Create insight point for datanode container protocol (apache#1272)
  - HDDS-3297. Enable TestOzoneClientKeyGenerator. (apache#1442)
  - HDDS-4324. Add important comment to ListVolumes logic (apache#1417)
  - HDDS-4236. Move "Om*Codec.java" to new project hadoop-ozone/interface-storage (apache#1424)
  - HDDS-4254. Bucket space: add usedBytes and update it when create and delete key. (apache#1431)
  - HDDS-2766. security/SecuringDataNodes.md (apache#1175)
  - HDDS-4206. Attempt pipeline creation more frequently in acceptance tests (apache#1389)
  - HDDS-4233. Interrupted exeception printed out from DatanodeStateMachine (apache#1416)
  - HDDS-3947: Sort DNs for client when the key is a file for #getFileStatus #listStatus APIs (apache#1385)
  - HDDS-3102. ozone getconf command should use the GenericCli parent class (apache#1410)
  - HDDS-3981. Add more debug level log to XceiverClientGrpc for debug purpose (apache#1214)
  - HDDS-4255. Remove unused Ant and Jdiff dependency versions (apache#1433)
  - HDDS-4247. Fixed log4j usage in some places (apache#1426)
  - HDDS-4241. Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI. (apache#1422)
* HDDS-4122-remove-code-consolidation: (21 commits)
  - Restore files that had deduplicated code from master
  - Revert other delete request/response files back to their original states on master
  - HDDS-4102. Normalize Keypath for lookupKey. (apache#1328)
  - HDDS-4263. ReplicatiomManager shouldn't consider origin node Id for CLOSED containers. (apache#1438)
  - HDDS-4282. Improve the emptyDir syntax (apache#1450)
  - HDDS-4194. Create a script to check AWS S3 compatibility (apache#1383)
  - HDDS-4270. Add more reusable byteman scripts to debug ofs/o3fs performance (apache#1443)
  - HDDS-2660. Create insight point for datanode container protocol (apache#1272)
  - HDDS-3297. Enable TestOzoneClientKeyGenerator. (apache#1442)
  - HDDS-4324. Add important comment to ListVolumes logic (apache#1417)
  - HDDS-4236. Move "Om*Codec.java" to new project hadoop-ozone/interface-storage (apache#1424)
  - HDDS-4254. Bucket space: add usedBytes and update it when create and delete key. (apache#1431)
  - HDDS-2766. security/SecuringDataNodes.md (apache#1175)
  - HDDS-4206. Attempt pipeline creation more frequently in acceptance tests (apache#1389)
  - HDDS-4233. Interrupted exeception printed out from DatanodeStateMachine (apache#1416)
  - HDDS-3947: Sort DNs for client when the key is a file for #getFileStatus #listStatus APIs (apache#1385)
  - HDDS-3102. ozone getconf command should use the GenericCli parent class (apache#1410)
  - HDDS-3981. Add more debug level log to XceiverClientGrpc for debug purpose (apache#1214)
  - HDDS-4255. Remove unused Ant and Jdiff dependency versions (apache#1433)
  - HDDS-4247. Fixed log4j usage in some places (apache#1426)
  - ...
What changes were proposed in this pull request?
Ozone S3G implements the REST interface of the AWS S3 protocol. Our robot-test-based scripts check whether Ozone S3 can be used with the AWS client tools.
But occasionally we should also verify that the robot test definitions themselves are valid: the robot tests should be executed against a real AWS endpoint and bucket(s), and all the test cases should pass.
This patch provides a simple shell script to make this cross-check easier.
Implementation
Please note that for reliable testing we shouldn't reuse the same keys across test runs. I added a random prefix to the keys used by the tests to make sure they don't touch any existing data in the bucket.
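The random-prefix idea can be sketched as follows. This is a minimal illustration, not the patch's actual code; the prefix format and key name are assumptions:

```shell
#!/usr/bin/env bash
# Each test run generates a unique prefix, so keys created by this
# run can never collide with leftover objects from earlier runs
# (or with pre-existing data) in a shared bucket.

PREFIX="test-$RANDOM-$RANDOM"

# Every key used by the tests lives under the unique prefix.
KEY="$PREFIX/key1"
echo "would upload to s3://some-bucket/$KEY"
```

Because the prefix is regenerated on every invocation, repeated runs against the same real AWS bucket stay independent of each other.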
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-4194
How was this patch tested?
Executed the new script with environment variables pointing to real AWS buckets.
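An invocation of the kind described might look like the sketch below. The variable names here are illustrative assumptions, not necessarily the ones the script actually reads:

```shell
#!/usr/bin/env bash
# Hypothetical setup for running the compatibility check against a
# real AWS bucket instead of Ozone S3G. Credentials and the bucket
# name are placeholders.
export AWS_ACCESS_KEY_ID="dummy-access-key"
export AWS_SECRET_ACCESS_KEY="dummy-secret-key"
export OZONE_TEST_S3_BUCKET="my-real-bucket"   # hypothetical variable name

echo "running S3 compatibility checks against $OZONE_TEST_S3_BUCKET"
```

The point of the cross-check is that the same robot test definitions should pass against the real AWS endpoint, which validates the tests themselves.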
Today it is expected to fail on the MPU ranged copy test. That is a bug in our implementation and will be fixed by https://issues.apache.org/jira/browse/HDDS-4193