Support per-project client for GCS repository #131899
elasticsearchmachine merged 19 commits into elastic:main from
Conversation
Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)
    for (var projectId : perProjectClientsHolders.keySet()) {
        if (currentProjects.containsKey(projectId) == false) {
            assert ProjectId.DEFAULT.equals(projectId) == false;
            perProjectClientsHolders.remove(projectId);
The approach for managing per-project clients is similar to what we did for s3, but simpler: there is no ref-counting for the clients and we don't actively close them when they are no longer needed. We simply remove or replace a client when needed.
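A minimal sketch of that remove-and-replace approach, with illustrative class and method names (PerProjectClientsManager, ClientsHolder, updatePerProjectClients) rather than the PR's actual code; imports of the Elasticsearch types (ProjectId, GoogleCloudStorageClientSettings) are omitted:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch only: ProjectId and GoogleCloudStorageClientSettings are the existing
    // Elasticsearch types; the manager and holder names here are hypothetical.
    class PerProjectClientsManager {

        // No ref-counting: stale holders are simply removed or replaced, and the old
        // client is left for garbage collection rather than being actively closed.
        private final Map<ProjectId, ClientsHolder> perProjectClientsHolders = new ConcurrentHashMap<>();

        void updatePerProjectClients(Map<ProjectId, GoogleCloudStorageClientSettings> latestSettings) {
            // Remove holders for projects that no longer exist in the cluster state.
            for (var projectId : perProjectClientsHolders.keySet()) {
                if (latestSettings.containsKey(projectId) == false) {
                    assert ProjectId.DEFAULT.equals(projectId) == false;
                    perProjectClientsHolders.remove(projectId);
                }
            }
            // Replace holders whose settings changed; clients are rebuilt lazily on next access.
            latestSettings.forEach((projectId, settings) -> {
                final ClientsHolder existing = perProjectClientsHolders.get(projectId);
                if (existing == null || existing.settings().equals(settings) == false) {
                    perProjectClientsHolders.put(projectId, new ClientsHolder(settings));
                }
            });
        }

        // Minimal holder: caches the settings; the actual client would be built lazily from them.
        record ClientsHolder(GoogleCloudStorageClientSettings settings) {}
    }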
    void refreshAndClearCacheForClusterClients(Map<String, GoogleCloudStorageClientSettings> clientsSettings) {
        clusterClientsHolder.refreshAndClearCache(clientsSettings);
    }
This is used for stateful (cluster-level) clients to keep the behaviour identical to the existing code.
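For reference, the cluster-level behaviour being preserved is roughly the following; this is a sketch of what the holder's refreshAndClearCache amounts to (field and class names are illustrative, not the exact code):

    // Sketch of the cluster-level holder: adopt the reloaded client settings and
    // drop all cached clients so they are rebuilt lazily on next use, matching the
    // pre-existing GoogleCloudStorageService behaviour.
    class ClusterClientsHolder {
        private volatile Map<String, GoogleCloudStorageClientSettings> clientSettings = Map.of();
        private volatile Map<String, Storage> clientCache = Map.of();

        synchronized void refreshAndClearCache(Map<String, GoogleCloudStorageClientSettings> clientsSettings) {
            this.clientSettings = Map.copyOf(clientsSettings);
            this.clientCache = Map.of();
        }
    }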
    public GoogleCloudStorageService(boolean isServerless) {
        this.isServerless = isServerless;
    @SuppressWarnings("this-escape")
I think we should try to avoid this. You can add a static method to create the instance and then create the client manager.
This is added because we pass GoogleCloudStorageService#createClient to the clients manager. It's hard to make this method static because it uses other methods that are overridden in tests. Not entirely sure whether this is your suggestion?
Alternatively, I can move the clientsManager inside GoogleCloudStorageService and this problem should go away since they are in one file. What do you think?
Moving the client manager inside GoogleCloudStorageService seems like a good solution!
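A rough sketch of that direction, with illustrative names and signatures (only Storage is the real google-cloud-storage type; the Elasticsearch imports are omitted): once the clients manager is a nested class in the same file, the constructor no longer passes the overridable createClient method reference to another class, so the @SuppressWarnings("this-escape") can be dropped.

    import java.io.IOException;
    import com.google.cloud.storage.Storage;
    // imports of Elasticsearch types (GoogleCloudStorageClientSettings, ...) omitted

    public class GoogleCloudStorageService {

        private final boolean isServerless;
        private final ClientsManager clientsManager = new ClientsManager();

        public GoogleCloudStorageService(boolean isServerless) {
            this.isServerless = isServerless;
        }

        // Overridden in tests, which is why it cannot simply be made static.
        protected Storage createClient(GoogleCloudStorageClientSettings settings) throws IOException {
            throw new UnsupportedOperationException("client construction omitted in this sketch");
        }

        // Nested class in the same file: it calls createClient(...) lazily at request
        // time, well after construction, so no partially constructed `this` escapes
        // the constructor.
        private final class ClientsManager {
            Storage clientFor(GoogleCloudStorageClientSettings settings) throws IOException {
                return createClient(settings);
            }
        }
    }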
    clientCache = clientCache.entrySet()
        .stream()
        .filter(entry -> entry.getKey().equals(repositoryName) == false)
        .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, java.util.Map.Entry::getValue));
You can probably drop the java.util. prefix.
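That is, roughly (same behaviour, just relying on the already-imported Map.Entry):

    clientCache = clientCache.entrySet()
        .stream()
        .filter(entry -> entry.getKey().equals(repositoryName) == false)
        .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue));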
        continue;
    }
    final ProjectSecrets projectSecrets = project.custom(ProjectSecrets.TYPE);
    // Project secrets can be null when node restarts. It may not have any s3 credentials if s3 is not in use.
    // Skip project clients that have no credentials configured. This should not happen in serverless.
    // But it is safer to skip them and is also a more consistent behaviour with the cases when
    // project secrets are not present.
Is this supposed to be a transient state? Is it worth a warn log?
This would be a configuration problem on the CP side. Yes, we should log it. There is already a log for the empty client settings map right after this, but I think we can combine the two and continue processing. Pushed 57de151, which also adds a test. I'll update s3 similarly in a separate PR.
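A rough sketch of the combined skip-and-warn handling discussed here (the real change is in 57de151; the hasNoGcsCredentials helper and the log message are made up for illustration, and this fragment would sit inside the loop over projects shown above):

    final ProjectSecrets projectSecrets = project.custom(ProjectSecrets.TYPE);
    if (projectSecrets == null || hasNoGcsCredentials(projectSecrets)) {
        // A missing or empty secrets map is a control-plane configuration problem in
        // serverless; warn and skip this project rather than failing the whole update.
        logger.warn("skipping project [{}] because it has no GCS client credentials configured", project.id());
        continue;
    }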
pxsalehi left a comment
LGTM, pending getting rid of the new @SuppressWarnings("this-escape").
Log a message only when the client settings have actual changes. Relates: #131899
Add IT for GCS per-project clients added in #131899
Similar to #127631, this PR adds per-project client support for the GCS repository.