HDDS-6942. Ozone Buckets/Objects created via S3 should not allow group access #3553
Conversation
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java (resolved review comment)
Ozone vol/bucket/objects created via S3 should not allow read access for users in same group
adoroszlai left a comment:
It may be better to set this config from OzoneClientCache before any client is instantiated.
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientCache.java (lines 58 to 92 in a8808d1):

```java
private OzoneClientCache(OzoneConfiguration ozoneConfiguration)
    throws IOException {
  // Set the expected OM version if not set via config.
  ozoneConfiguration.setIfUnset(OZONE_CLIENT_REQUIRED_OM_VERSION_MIN_KEY,
      OZONE_CLIENT_REQUIRED_OM_VERSION_MIN_DEFAULT);
  String omServiceID = OmUtils.getOzoneManagerServiceId(ozoneConfiguration);
  secConfig = new SecurityConfig(ozoneConfiguration);
  client = null;
  try {
    if (secConfig.isGrpcTlsEnabled()) {
      if (ozoneConfiguration
          .get(OZONE_OM_TRANSPORT_CLASS,
              OZONE_OM_TRANSPORT_CLASS_DEFAULT) !=
          OZONE_OM_TRANSPORT_CLASS_DEFAULT) {
        // Grpc transport selected
        // need to get certificate for TLS through
        // hadoop rpc first via ServiceInfo
        setCertificate(omServiceID,
            ozoneConfiguration);
      }
    }
    if (omServiceID == null) {
      client = OzoneClientFactory.getRpcClient(ozoneConfiguration);
    } else {
      // As in HA case, we need to pass om service ID.
      client = OzoneClientFactory.getRpcClient(omServiceID,
          ozoneConfiguration);
    }
  } catch (IOException e) {
    LOG.warn("cannot create OzoneClient", e);
    throw e;
  }
  // S3 Gateway should always set the S3 Auth.
  ozoneConfiguration.setBoolean(S3Auth.S3_AUTH_CHECK, true);
}
```
I think the decision about what the ACL config should be for created entities should reside closer to the request processing; the client and its cache should avoid deciding defaults for requests. It is much easier to evaluate all the outcomes of an API call if the choice of defaults lives where the API is processed, so I would prefer to leave it here. It makes sense to evaluate connection-level settings (TLS etc.) in the client cache, but whether a bucket should grant read access to everyone in the same group is really an S3 API-level decision.
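The separation of concerns argued for here can be sketched roughly as follows. The class and method names are illustrative only, not actual Ozone APIs; the point is that connection-level settings are decided once in the client layer, while API-level defaults are decided per request at the endpoint.

```java
// Illustrative sketch only (RequestDefaults is a hypothetical class):
// connection-level vs API-level defaults live in different layers.
public class RequestDefaults {

  // Connection-level concern: decided once, in the client/cache layer.
  static boolean useTls(boolean grpcTlsConfigured) {
    return grpcTlsConfigured;
  }

  // API-level concern: decided per request, at the endpoint.
  // S3-created entities get owner-only access, with no group grant.
  static String defaultBucketAcl(boolean createdViaS3) {
    return createdViaS3 ? "user::ALL" : "user::ALL,group::READ";
  }

  public static void main(String[] args) {
    System.out.println(defaultBucketAcl(true));
    System.out.println(defaultBucketAcl(false));
  }
}
```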
The CI failures seem unrelated.
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java (resolved review comments)
I was concerned that the setting may not be effective, as the client may already be created when the endpoint's … Can you please add an assertion for the expected ACL in the S3 smoketests?
Thanks @kerneltime for adding the test. Seems like it fails in the secure case (https://github.com/apache/ozone/runs/7104193656#step:5:599).
I think I know why some tests pass and this one fails. If the user, when queried by OM, does not have any groups, OM skips adding the group permissions. The robot tests for non-secure clusters default to a random user who has no groups, so the test passes there. Note the config is only applicable to OM, which seems an odd mismatch.
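The behavior described above can be sketched with a small, self-contained example (the class and method names are hypothetical, not the actual OM code): group ACLs are only added when the requesting user actually belongs to groups, so a group-less test user never exposes the group-access default.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not actual Ozone Manager code): default ACLs
// for a newly created entity. The group loop is skipped entirely when the
// user has no groups, which is why robot tests running as a group-less
// user pass even when group access is not disabled.
public class DefaultAclSketch {

  static List<String> defaultAcls(String user, List<String> groups) {
    List<String> acls = new ArrayList<>();
    acls.add("user:" + user + ":ALL");
    for (String group : groups) {  // no iterations when groups is empty
      acls.add("group:" + group + ":ALL");
    }
    return acls;
  }

  public static void main(String[] args) {
    // A user with no groups gets only the owner ACL.
    System.out.println(defaultAcls("testuser", List.of()));
    // A secure-cluster user with groups also gets group ACLs.
    System.out.println(defaultAcls("testuser", List.of("hadoop")));
  }
}
```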
Force-pushed 4c49338 to 6671177.
adoroszlai left a comment:
Thanks @kerneltime for iterating on this patch.
Merge branch 'master' (46 commits):

- HDDS-6901. Configure HDDS volume reserved as percentage of the volume space. (apache#3532)
- HDDS-6978. EC: Cleanup RECOVERING container on DN restarts (apache#3585)
- HDDS-6982. EC: Attempt to cleanup the RECOVERING container when reconstruction failed at coordinator. (apache#3583)
- HDDS-6968. Addendum: [Multi-Tenant] Fix USER_MISMATCH error even on correct user. (apache#3578)
- HDDS-6794. EC: Analyze and add putBlock even on non writing node in the case of partial single stripe. (apache#3514)
- HDDS-6900. Propagate TimeoutException for all SCM HA Ratis calls. (apache#3564)
- HDDS-6938. handle NPE when removing prefixAcl (apache#3568)
- HDDS-6960. EC: Implement the Over-replication Handler (apache#3572)
- HDDS-6979. Remove unused plexus dependency declaration (apache#3579)
- HDDS-6957. EC: ReplicationManager - priortise under replicated containers (apache#3574)
- HDDS-6723. Close Rocks objects properly in OzoneManager (apache#3400)
- HDDS-6942. Ozone Buckets/Objects created via S3 should not allow group access (apache#3553)
- HDDS-6965. Increase timeout for basic check (apache#3563)
- HDDS-6969. Add link to compose directory in smoketest README (apache#3567)
- HDDS-6970. EC: Ensure DatanodeAdminMonitor can handle EC containers during decommission (apache#3573)
- HDDS-6977. EC: Remove references to ContainerReplicaPendingOps in TestECContainerReplicaCount (apache#3575)
- HDDS-6217. Cleanup XceiverClientGrpc TODOs, and document how the client works and should be used. (apache#3012)
- HDDS-6773. Cleanup TestRDBTableStore (apache#3434) - fix checkstyle
- HDDS-6773. Cleanup TestRDBTableStore (apache#3434)
- HDDS-6676. KeyValueContainerData#getProtoBufMessage() should set block count (apache#3371)
- ...

Conflicts: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/upgrade/SCMUpgradeFinalizer.java
Revert "HDDS-6942. Ozone Buckets/Objects created via S3 should not allow group access (apache#3553)". This reverts commit c5e3745.
Buckets created via S3 should not allow read access for users in same group
What changes were proposed in this pull request?
Buckets created via S3 should not allow read access for users in the same group.
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-6942
How was this patch tested?
Compared the ACLs of a bucket created via S3 vs one created with `ozone sh`.