Add a new API to create RCS specific API keys #95714
elasticsearchmachine merged 17 commits into elastic:main
Conversation
Pinging @elastic/es-security (Team:Security)
```java
final long daysBetween = ChronoUnit.DAYS.between(start, expiration);
assertThat(daysBetween, is(7L));
...
assertThat(getApiKeyDocument(response.getId()).get("type"), equalTo("rest"));
```
I chose to avoid changes to ApiKeyIntegTests in this PR because:
- The new API key type is not yet reflected in the Get/Query response, so some of the common verification code here won't work
- The new API keys are not meant to be updated using the existing APIs, which are tested heavily here
- The new type of API keys is not meant to authenticate normally, which is necessary for some of the tests here.

I plan to take another look at this test class to see whether it can be refactored to test both types of API keys once all API changes are in place. For now I am keeping tests in other places, which also makes them more explicit. Also, this test class is pretty terrible to work with (or review) ...
++ on not adding tests here for now, and agreed -- this class has gotten unruly.
...elasticsearch/xpack/core/security/action/apikey/CrossClusterApiKeyRoleDescriptorBuilder.java
```diff
- String[] privileges = null;
+ String[] privileges = predefinedPrivileges;
```
This is my take on minimal changes to reuse the existing parsing code. Please let me know if this works for you. Thanks!
I've stared at this for a while, and I think this is the best way. That parsing code is pretty gnarly, but decidedly not something to clean up in this PR.
n1v0lg
left a comment
There was a problem hiding this comment.
Sorry I didn't get through all of the main code yet. Looking great so far, figured I'd leave the preliminary comments that I have.
```diff
  String[] names = null;
  BytesReference query = null;
- String[] privileges = null;
+ String[] privileges = predefinedPrivileges;
```
Nit: we could assert that privileges is null or not empty, to fail early.
The IndicesPrivileges.Builder does check for null or empty privileges. That said, an extra assertion does not hurt either. So I added one.
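For illustration, the early-fail assertion discussed above could look like the following minimal sketch. The class and method names here are hypothetical, not the actual Elasticsearch code:

```java
// Minimal sketch of failing early on an empty privileges array.
// Names are hypothetical, not the real implementation.
public class PrivilegesAssertionSketch {
    static String[] checkPrivileges(String[] privileges) {
        // null means "use the predefined defaults"; an empty array is a caller bug
        assert privileges == null || privileges.length > 0
            : "privileges must be null or non-empty";
        return privileges;
    }

    public static void main(String[] args) {
        String[] ok = checkPrivileges(new String[] { "read", "monitor" });
        System.out.println(ok.length);             // prints 2
        System.out.println(checkPrivileges(null)); // prints null
    }
}
```

Like the real `IndicesPrivileges.Builder` check mentioned above, this only fails fast; with assertions disabled the downstream validation would still catch the problem.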
...elasticsearch/xpack/core/security/action/apikey/CrossClusterApiKeyRoleDescriptorBuilder.java
```java
@Override
public ActionRequestValidationException validate() {
    if (Assertions.ENABLED && type == ApiKey.Type.CROSS_CLUSTER) {
```
Wondering if it could make sense to use inheritance here: have an abstract base class and a CreateRestApiKeyRequest and a CreateCrossClusterAccessApiKeyRequest that override the type and validation. A lot of the validation steps below (like missing name) don't really make much sense for cross cluster key request.
Not a strong suggestion, just an intuition that although these are almost identical requests, they're still quite semantically distinct. LMKWYT.
I didn't do it because it involves a lot of cascading changes. But it will help make the code cleaner. I'll give it a go.
> A lot of the validation steps below (like missing name) don't really make much sense for cross cluster key request.
The name validation is still needed because the API takes the API Key's name from the user.
This is done. I kept the existing CreateApiKeyRequest class name unchanged and created the base class AbstractCreateApiKeyRequest to avoid large cascading changes. The class has a builder CreateApiKeyRequestBuilder which would need to be renamed as well. I think we can leave the renames to a future pure-refactor PR so that this one is a bit easier to work with.
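The shape of that split can be sketched roughly as follows. This is a hypothetical minimal illustration of the base-class approach described above (field set, validation logic, and type values are simplified stand-ins, not the actual Elasticsearch classes):

```java
// Hypothetical sketch of the AbstractCreateApiKeyRequest split: shared state and
// validation live in the base class; each subclass fixes its own API key type.
abstract class AbstractCreateApiKeyRequest {
    protected final String name;

    AbstractCreateApiKeyRequest(String name) {
        this.name = name;
    }

    // name validation stays shared: both key types take a user-supplied name,
    // as noted in the discussion above
    String validateName() {
        return (name == null || name.isEmpty()) ? "name is required" : null;
    }

    abstract String type();
}

class CreateApiKeyRequest extends AbstractCreateApiKeyRequest {
    CreateApiKeyRequest(String name) { super(name); }
    @Override String type() { return "rest"; }
}

class CreateCrossClusterApiKeyRequest extends AbstractCreateApiKeyRequest {
    CreateCrossClusterApiKeyRequest(String name) { super(name); }
    @Override String type() { return "cross_cluster"; }
}

public class HierarchySketch {
    public static void main(String[] args) {
        AbstractCreateApiKeyRequest req = new CreateCrossClusterApiKeyRequest("my-key");
        System.out.println(req.type());                                 // cross_cluster
        System.out.println(new CreateApiKeyRequest("").validateName()); // name is required
    }
}
```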
> The name validation is still needed because the API takes the API Key's name from the user.
My bad, you're right. I got the API key name mixed up with the fixed role name.
...elasticsearch/xpack/core/security/action/apikey/CrossClusterApiKeyRoleDescriptorBuilder.java
@n1v0lg This is ready for another round. I took your suggestion to create a class hierarchy for the new Request. In the process, it also made me realise that I forgot to plan for the relevant audit changes. I will work on them once both new APIs are in place. Thanks!
```java
created,
expiration,
request.getRoleDescriptors(),
request.getType(),
```
Since API key type and request class are currently 1:1, the getType() call here (and a couple of other places) can be replaced by something like

```java
request instanceof CreateCrossClusterApiKeyRequest ? ApiKey.Type.CROSS_CLUSTER : ApiKey.Type.REST
```

This means we can possibly drop type from the request classes altogether since the class type itself already indicates it. Without the type field, the two request classes have essentially no difference other than the class type itself.

This makes me ponder again on subclassing vs. just adding an extra type field to the existing CreateApiKeyRequest, because the class hierarchy seems to be rather superficial. For updating API keys, if we also choose to subclass, it probably means two more subclasses, one for single update and one for bulk update (even though cross-cluster does not support bulk update via the API, the code always uses bulk updating behind the scenes). Given these subclasses have little difference, is the cost-benefit here worth it? Maybe having the class hierarchy is more flexible and expandable in future? Or maybe you have a different subclass implementation in mind? Please let me know. Thanks!
For sure agreed that the delta between the classes is small and that the hierarchy forces some overhead. We still have some difference in behavior: the validation is different, and I'm also suggesting a difference in serialization behavior in another comment.
My other motivation for suggesting inheritance is that it creates a stronger conceptual separation between the two. My mental model for these is that they are two conceptually separate things as opposed to one thing that's differently parametrized.
That said, I took a look at the update requests and I think it could get a little convoluted to have a class hierarchy there. We already have an abstract base class. We could make another split and introduce an AbstractBulkUpdate class with two sub-classes. The REST layer parser could then do the work of converting a conceptually single-target request into a bulk one. That feels like fairly heavy machinery for what we're trying to accomplish though.
I don't have a strong preference here one way or another. I like the conceptual separation we get with the hierarchy but agree that it feels a little heavy-handed, esp. considering the update routes.
If we go with the hierarchy approach, I agree that it makes sense to get rid of the getType method, as it's a source of redundancy 👍
Thanks! I'll stick to the subclass approach. There is some more work to achieve this for updating. But it is not unmanageable.
> If we go with the hierarchy approach, I agree that it makes sense to get rid of the getType method, as it's a source of redundancy
For now, I kept the getType method because otherwise we would need translation logic outside of the Request classes. For now it can be a ternary statement; in future, it might need to be a lookup map. Having it as part of the request still helps.
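The trade-off here can be sketched as follows. This is a hypothetical minimal illustration (not the actual Elasticsearch code) of the two options: a type accessor on the request itself versus translation logic kept outside the request classes, e.g. a lookup map:

```java
import java.util.Map;

// Hypothetical sketch of the two options discussed above.
enum ApiKeyType { REST, CROSS_CLUSTER }

abstract class Request {
    abstract ApiKeyType getType(); // option 1: type lives on the request
}

class CreateApiKeyRequest extends Request {
    @Override ApiKeyType getType() { return ApiKeyType.REST; }
}

class CreateCrossClusterApiKeyRequest extends Request {
    @Override ApiKeyType getType() { return ApiKeyType.CROSS_CLUSTER; }
}

public class TypeLookupSketch {
    // option 2: translation logic outside the request classes, which must be
    // kept in sync with the class hierarchy by hand
    static final Map<Class<?>, ApiKeyType> TYPES = Map.of(
        CreateApiKeyRequest.class, ApiKeyType.REST,
        CreateCrossClusterApiKeyRequest.class, ApiKeyType.CROSS_CLUSTER
    );

    public static void main(String[] args) {
        Request r = new CreateCrossClusterApiKeyRequest();
        System.out.println(r.getType());             // CROSS_CLUSTER
        System.out.println(TYPES.get(r.getClass())); // CROSS_CLUSTER
    }
}
```

Both produce the same answer; keeping the accessor on the request avoids the external map, which is the point made above.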
n1v0lg
left a comment
There was a problem hiding this comment.
LGTM 🚀 I have some smaller questions/comments but nothing that would warrant another review pass.
...ck/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/RoleDescriptor.java
```java
if (authentication == null) {
    listener.onFailure(new IllegalStateException("authentication is required"));
} else {
    apiKeyService.createApiKey(authentication, request, Set.of(), listener);
```
Sanity check: we're not blocking derived API key creation here because of the elevated privileges required to interact with this action to begin with, correct?
Yes. Since the intersection model does not apply here, creating new API keys works the same whether the caller is a regular user or an API key. That said, there is a problem when it comes to updating the API keys: they won't be updatable if they were initially created by an API key, because a derived API key effectively has no owner (#75205). But I think this is a separate bug about having a clear definition of an API key's identity, since the problem exists today, i.e. once a derived key is created, it is not updatable.
```java
}

public void testCreateCrossClusterApiKey() throws IOException {
    final XContentParser parser = jsonXContent.createParser(XContentParserConfiguration.EMPTY, """
```
Sanity check: do we need a feature flag check here?
Thanks for catching it. We definitely need the check here.
```java
if (authentication == null) {
    listener.onFailure(new IllegalStateException("authentication is required"));
} else {
    apiKeyService.createApiKey(authentication, request, Set.of(), listener);
```
A hacky, half-baked thought (feel free to dismiss without comment): if we pass `Set.copyOf(request.getRoleDescriptors())` here, I think everything downstream could work out in terms of privileges without extra changes, since technically, limited-by is set but it's identical to the assigned descriptors, so it doesn't limit anything.
The Get and Query APIs would admittedly look confusing, and we'd store and send redundant data... Like I said, absolutely feel free to ignore.
I actually started with this approach in the initial PoC because I also thought it had the advantage of reusing existing logic. But I later decided to move away from it because overall I felt it could be exposing implementation details. Conceptually these API keys are without constraints, not "constrained by themselves". If we show the limited-by in the Get/Query result, we are exposing an implementation choice. If we don't show it, it is questionable why we store it in the first place. In addition, though the downstream code can work as is, I am hesitant to do so because it is less efficient for both runtime checks and caching (limited-by and assigned roles are cached separately). If we end up handling it specially after checking that they are identical, this again suggests we should not store it in the first place.
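For illustration, a Get API response entry for a cross-cluster key would then carry only the assigned role descriptors, with no limited-by section at all. This is a rough sketch; all field values and the role descriptor contents are made-up examples, not taken from the PR:

```json
{
  "api_keys": [
    {
      "name": "my-cross-cluster-key",
      "type": "cross_cluster",
      "role_descriptors": {
        "cross_cluster": {
          "cluster": [ "cross_cluster_search" ],
          "indices": [
            { "names": [ "logs*" ], "privileges": [ "read" ] }
          ]
        }
      }
    }
  ]
}
```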
```json
"replication": [
  {
    "names": [ "archive" ],
    "allow_restricted_indices": true
```
I haven't dug through the docs and my memory fails me, but do we support replicating restricted indices?
No, it is not officially supported, especially not as a DR measure. But we don't actively block it either, i.e. it is possible to configure such replication. So I don't see a strong reason to actively block it here.
```java
}

@Override
public void writeTo(StreamOutput out) throws IOException {
```
I'm wondering if the cross cluster class should override this and throw if the target version is too old (similar to what we do in e.g. PutRoleRequest). This should never happen in practice so I don't feel strongly.
No, because we have a new action here. Since the action won't be available on an old node, the code will fail before it even attempts to decode the request. It's the same as when we add any other new TransportAction: we don't need any BWC handling for its request class.
Strictly speaking, the new CreateCrossClusterApiKeyRequest can override this method to drop existing checks for old versions, e.g.:

```java
if (out.getTransportVersion().onOrAfter(TransportVersion.V_7_5_0)) {
    out.writeOptionalString(name);
} else {
    out.writeString(name);
}
```

But that means duplicating a bunch of code (for both read and write) in subclasses. So I didn't do it since the benefit is rather marginal.
> But that means duplicating a bunch of code (for both read and write) in subclasses. So I didn't do it since the benefit is rather marginal.
On second thought, I decided to have an overridden writeTo and a corresponding constructor for the new Request and drop these obsolete version checks. Since we are introducing a new class, we might as well take more advantage of it.
The PR adds enforcement for API key type at authentication time. Concretely, new cross-cluster API keys (#95714) can only be used on the dedicated remote cluster interface and the existing (rest) API key must not be used for new remote cluster communication. To make cross-cluster API keys actually usable after authentication, the PR also adds support for resolving their roles.
This PR makes the Get/Query API key APIs return the new API key type information in their responses. For cross-cluster type API keys, the limited-by field is not shown because they do not use the limited-by model that existing API keys do. Relates: #95714
This PR adds a new endpoint to update RCS specific API keys. It works similarly to the existing UpdateApiKey API except:
- It requires the manage_security permission
- It does not permit an empty request body because it does not use the limited-by model
- It takes the same simplified "access" payload as the CreateCrossClusterApiKey API

Relates: #95714
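Based on the description above, an update request might look roughly like the following. The endpoint path, key id placeholder, and index pattern are illustrative assumptions, not copied from the PR:

```
PUT /_security/cross_cluster/api_key/<api-key-id>
{
  "access": {
    "search": [
      { "names": [ "logs*" ] }
    ]
  }
}
```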
This PR adds REST spec files and YAML tests for the new create and update cross-cluster API key APIs. Relates: elastic#95714, elastic#96085
This PR adds API doc pages for the new create and update cross-cluster API key APIs. Relates: elastic#95714, elastic#96085
This PR actively blocks creating cross-cluster API keys with another API key to avoid the issue of derived API key ownership. Relates: #95714
With the new cross-cluster API keys (#95714), RCS 2.0 is more tightly controlled. This removes the need for the action allowlist on the server side because cross-cluster API keys (1) require manage_security to create and (2) are specialized to give out only privileges related to cross-cluster operations. Hence this PR removes the allowlist.
This PR adds a new endpoint to create RCS specific API keys. Unlike the existing CreateApiKey endpoint, the new one takes a different and simplified payload that concentrates on setting up CCS or CCR. A sample request looks like the following:
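As an illustrative sketch (the key name, expiration, and index patterns are made up; the access sections follow the simplified search/replication payload shape discussed in the review above):

```
POST /_security/cross_cluster/api_key
{
  "name": "my-remote-key",
  "expiration": "30d",
  "access": {
    "search": [
      { "names": [ "logs*" ] }
    ],
    "replication": [
      { "names": [ "archive" ] }
    ]
  }
}
```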
Future work will ensure