Conversation

@ChenSammi
Contributor

https://issues.apache.org/jira/browse/HDDS-4519

Currently s3g returns "internal server error" when the user doesn't have permission to access the resource.

This task is to return HTTP 403 FORBIDDEN in this case.

… when user doesn't have the resource permission.
@ChenSammi
Contributor Author

Hi @bharatviswa504 and @elek, would you help review the patch at your convenience?

Member

@elek left a comment

+1. Thanks for the patch, @ChenSammi.

My first thought was to suggest using ExceptionMapper<OMException> instead of adding this exception handling manually to all places.

Then I realized the problem: you need to add the context (bucket name, key name) to the results which is possible in this way. This patch does it well --> +1

/brainstorming on

In the future we may add the context information (bucket, key, volume) to the OMException which would make it possible to handle all the exceptions with less code.

/brainstorming off

Also, an additional (slightly unrelated) note: if the OzoneClient itself couldn't be created (due to a malformed auth header), we still throw 500, AFAIK. (Not related to this patch, just sharing my sadness ;-) ). Implementing the design in #1562 may help with that one.

(Not committing yet, to give a chance to @bharatviswa504 to check this)
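The ExceptionMapper idea mentioned above could look roughly like the following self-contained sketch. Both types below are minimal local stand-ins (not Ozone's real `OMException` or JAX-RS's `javax.ws.rs.ext.ExceptionMapper`) so the example compiles on its own; the limitation noted above still applies, since a generic mapper has no access to the bucket/key being processed.

```java
// Self-contained sketch of the ExceptionMapper approach. All names here
// are illustrative stand-ins, not Ozone's or JAX-RS's actual classes.
class OmExceptionMapperSketch {

  // Stand-in for org.apache.hadoop.ozone.om.exceptions.OMException.
  static class OMException extends RuntimeException {
    enum ResultCodes { PERMISSION_DENIED, KEY_NOT_FOUND }
    private final ResultCodes result;
    OMException(ResultCodes result) { this.result = result; }
    ResultCodes getResult() { return result; }
  }

  // Stand-in for javax.ws.rs.ext.ExceptionMapper<E>, reduced to the
  // status-code decision that matters for this discussion.
  interface ExceptionMapper<E extends Throwable> {
    int toHttpStatus(E exception);
  }

  // One central mapper instead of a try/catch in every endpoint. It
  // cannot fill in bucket/key context, which is why the patch handles
  // exceptions per endpoint instead.
  static class OmExceptionMapper implements ExceptionMapper<OMException> {
    @Override
    public int toHttpStatus(OMException e) {
      switch (e.getResult()) {
        case PERMISSION_DENIED: return 403; // AccessDenied
        case KEY_NOT_FOUND:     return 404; // NoSuchKey
        default:                return 500; // InternalError
      }
    }
  }
}
```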

@ChenSammi
Contributor Author

Thanks @elek for the code review.
Yes, currently there is no central place to handle common exceptions such as permission denied, not leader, etc. We can improve this part later by refactoring the current code structure.

currentIterator =
getNextListOfKeys(currentValue.getName()).iterator();
} catch (IOException e) {
throw new RuntimeException(e);
Contributor

@bharatviswa504 Dec 4, 2020

Any specific reason for throwing RuntimeException here?

Contributor

@bharatviswa504 Dec 4, 2020

I see the current code is doing the same. Do you think we need to change this to handle the error? (Not related to this PR, BTW.) I see the reason: hasNext() does not throw any exception.
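For context, the reason noted above is that java.util.Iterator.hasNext() cannot declare checked exceptions, so a lazy iterator that fetches the next page of keys inside hasNext() has to rethrow as an unchecked type. A minimal sketch of the pattern (class and method names are illustrative stand-ins, not Ozone's actual code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Illustrative sketch: Iterator.hasNext() cannot declare checked
// exceptions, so the page fetch wraps IOException in an unchecked type.
// UncheckedIOException is a slightly more specific choice than a bare
// RuntimeException.
class PagedKeyIterator implements Iterator<String> {
  private Iterator<String> currentPage;
  private boolean morePages = true;

  @Override
  public boolean hasNext() {
    if ((currentPage == null || !currentPage.hasNext()) && morePages) {
      try {
        currentPage = fetchNextPage().iterator(); // would be an RPC in real code
        morePages = false;                        // single page in this sketch
      } catch (IOException e) {
        throw new UncheckedIOException(e);        // hasNext() can't throw IOException
      }
    }
    return currentPage != null && currentPage.hasNext();
  }

  @Override
  public String next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return currentPage.next();
  }

  // Hypothetical stand-in for the real getNextListOfKeys() call.
  private List<String> fetchNextPage() throws IOException {
    return Arrays.asList("key1", "key2");
  }
}
```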

"InternalError", "We encountered an internal error. Please try again.",
HTTP_SERVER_ERROR);

public static final OS3Exception PERMISSION_DENIED = new OS3Exception(
Contributor

Do we want to change it to AccessDenied, to be closer to S3?

https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#RESTErrorResponses

AccessDenied 403

https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/ Even though we don't have S3 ACLs and it is Ozone ACLs causing this, to stay close to S3 I think having the same error code might be better here, IMHO.

Contributor

+1 for returning 403.

Contributor Author

Good catch, I will change the description to "AccessDenied".

Although there are multiple error codes that map to HTTP 403 Forbidden, "AccessDenied" is the most appropriate one for this ACL case.
https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#RESTErrorResponses

Contributor

@bharatviswa504 left a comment

Overall LGTM, I have one comment on the error code. I agree with @elek that we should have a better way to handle errors instead of updating each method, which is tedious, but that's how the current code is.

Contributor

@xiaoyuyao left a comment

LGTM, just comments inline...

"InternalError", "We encountered an internal error. Please try again.",
HTTP_SERVER_ERROR);

public static final OS3Exception PERMISSION_DENIED = new OS3Exception(
Contributor

+1 for returning 403.

if (ex.getResult() == ResultCodes.NO_SUCH_MULTIPART_UPLOAD_ERROR) {
throw S3ErrorTable.newError(NO_SUCH_UPLOAD,
uploadID);
} else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
Contributor

Is it possible to have a common helper that does the OM result code => s3 error translation?

Contributor Author

In the long term, it's good to have such a mapping table.
But there are many details that need further discussion. For example, an error response has both an HTTP status code and an S3 error code, and multiple S3 error codes map to the same HTTP code.
For some OM result codes, like QUOTA_EXCEEDED, I quickly went through all the HTTP codes and didn't find a perfect match.
Other OM result codes, such as NOT_SUPPORTED_OPERATION, are just translated to "internal error" in s3g.
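The mapping table discussed here could be sketched roughly as below. All names (the OmResultCode enum, the S3Error holder) are hypothetical stand-ins, not Ozone's real classes; unmapped codes deliberately fall back to the generic InternalError/500, which matches the behavior described above for codes like NOT_SUPPORTED_OPERATION and QUOTA_EXCEEDED.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of a central OM-result-code -> S3-error table.
// OmResultCode and S3Error are illustrative stand-ins.
class OmToS3ErrorMapper {

  enum OmResultCode {
    PERMISSION_DENIED, KEY_NOT_FOUND, NOT_SUPPORTED_OPERATION, QUOTA_EXCEEDED
  }

  static final class S3Error {
    final String code;
    final int httpStatus;
    S3Error(String code, int httpStatus) {
      this.code = code;
      this.httpStatus = httpStatus;
    }
  }

  private static final Map<OmResultCode, S3Error> MAPPING =
      new EnumMap<>(OmResultCode.class);
  static {
    MAPPING.put(OmResultCode.PERMISSION_DENIED, new S3Error("AccessDenied", 403));
    MAPPING.put(OmResultCode.KEY_NOT_FOUND, new S3Error("NoSuchKey", 404));
    // Codes with no good S3 equivalent (e.g. QUOTA_EXCEEDED,
    // NOT_SUPPORTED_OPERATION) deliberately have no entry here.
  }

  // Unmapped codes fall back to the generic InternalError / 500.
  static S3Error translate(OmResultCode result) {
    return MAPPING.getOrDefault(result, new S3Error("InternalError", 500));
  }
}
```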

@ChenSammi
Contributor Author

Agree, we need a more generic way to handle exceptions in s3g. Besides this ACL-triggered access-denied failure, there is the quota-triggered write failure, for which I haven't found a proper S3 error code and HTTP code to return to the user so far.

@ChenSammi
Contributor Author

ChenSammi commented Dec 10, 2020

@bharatviswa504 @xiaoyuyao, uploaded a new commit to address the comments. Would you take another look at your convenience?

Contributor

@bharatviswa504 left a comment

+1 LGTM

@bharatviswa504
Contributor

Besides this ACL-triggered access-denied failure, there is the quota-triggered write failure, for which I haven't found a proper S3 error code and HTTP code to return to the user so far.

Does S3 support a bucket quota feature? This problem will occur only when buckets created via the shell or the Ozone Java interface, with quota enabled on them, are then used through S3. If so, this is an internal Ozone error, and maybe we should construct the error code from the Ozone internal code, but S3 tools will not understand it. In this kind of scenario Ozone and S3 semantics are combined, which helps Ozone S3 users understand the failure, but not S3 tools.

@ChenSammi
Contributor Author

We have such use cases. You are right that S3 doesn't support a quota feature. It seems the best we can do is return the internal error code along with the quota-exceeded message, so the user can tell the true failure reason from the error message.

@ChenSammi ChenSammi merged commit 0957975 into apache:master Dec 15, 2020
@ChenSammi
Contributor Author

Thanks @bharatviswa504 and @xiaoyuyao for the code review.
