forked from spinnaker/clouddriver
sync the fork #1
Merged
…#2602) The new `minimalOnDemand` feature flag has been active for a week or so and looks good. This removes the legacy behavior.
…cycles (#2599) Given that the cardinality of ON_DEMAND is quite small, it is _possibly_ more efficient to fetch the ON_DEMAND records for a given account + region and perform an in-memory filter. Previously we were doing an mget with many thousands of keys (one per server group) when only a small number (dozens?) would ever exist in the set.
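As a rough sketch of the idea described above (the key format and method names here are assumptions for illustration, not the actual clouddriver code): fetch the small ON_DEMAND set once for an account + region and filter it in memory, rather than issuing an mget over thousands of per-server-group keys.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: the key format and names are assumed,
// not taken from the actual clouddriver implementation.
public class OnDemandFilter {

  // Keep only the ON_DEMAND keys belonging to one account + region.
  static List<String> filterForAccountRegion(List<String> onDemandKeys,
                                             String account, String region) {
    String needle = ":" + account + ":" + region + ":";
    return onDemandKeys.stream()
        .filter(key -> key.contains(needle))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> keys = List.of(
        "onDemand:serverGroups:test:us-west-2:app-v001",
        "onDemand:serverGroups:prod:us-east-1:app-v002");
    // Only the test/us-west-2 entry survives the in-memory filter.
    System.out.println(filterForAccountRegion(keys, "test", "us-west-2"));
  }
}
```

Because the ON_DEMAND set has low cardinality (dozens of entries), the in-memory scan is cheap compared with a multi-thousand-key mget round trip.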
Effectively this PR will publish `v4` as `v3`.
Persists the decision not to create an applicationDefaultSecurityGroup in a Titus attribute.
Getting this out of the way as I may have a subsequent need to make use of a `default` interface method.
This PR introduces a new administrative API supporting reconciliation of entity tags against the current cached set of server groups. Entity tags that reference a non-existent server group will be removed from Elasticsearch. A subsequent effort will provide a means to perform reconciliation against `front50`. The API is exposed directly on `clouddriver`: ``` curl -X POST "localhost:8101/admin/tags/reconcile?cloudProvider=aws&account=test&region=us-west-2" ```
Avoid a circular dependency between `ElasticSearchEntityTagsProvider` and `ElasticSearchEntityTagsReconciler`. The latter has a dependency on `CacheView` which breaks when the `launchFailureNotificationAgentProvider` is enabled.
Within CatsSearchProvider, only searches against the specific caches for providers that host a caching agent of the desired type. Adds an `existingIds` method to the `Cache` interface to verify that key hits are still existing cache items (previously each object would be fully loaded to verify existence). Better search globbing if the `SearchableProvider` defines a `KeyParser`. Makes `TitusCachingProvider` a `SearchableProvider`.
Removes the existingIdentifiers call on the in-memory instance cache; because this cache is loaded per clouddriver instance, it can balloon out to a large number of exists calls.
Adds a GitLab artifact provider. Follows most of the same logic as the GitHub provider. Relies on the GitLab raw file API.
This commit fixes two issues with the S3 artifact provider. First, it adds support for configuring an S3 region, in line with how we already handle it in front50, allowing the use of buckets in separate regions. Second, when using `apiEndpoint` we now set `pathStyleAccessEnabled`, which enables support for things like Minio to act as a provider.
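A hedged sketch of what such configuration might look like. Only `apiEndpoint`, `pathStyleAccessEnabled`, and a region setting are named in the commit; every other key below is an assumption for illustration, not the actual clouddriver property layout.

```yaml
# Assumed layout; only region / apiEndpoint / pathStyleAccessEnabled
# come from the commit message itself.
artifacts:
  s3:
    enabled: true
    accounts:
      - name: minio-artifacts
        apiEndpoint: http://minio.internal:9000  # non-AWS endpoint, e.g. Minio
        region: us-west-2                        # bucket in a non-default region
```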
This was broken in a recent change.
in tcpSocket liveness/readinessProbe
…2629) When deploying updates to a HorizontalPodAutoscaler, versioning them will create multiple instances of the HPA, all of which still try to target/autoscale the scaleTargetRef. Since I don't believe there are other resource types that refer to a HPA, it is not clear that this needs to be versioned like ConfigMaps or Secrets.
When multiple caching agents handle an on-demand refresh, the return status is currently based only on the result of the last agent. Instead we should return a PENDING result if *any* of the caching agents find results, rather than only if that last one does.
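The aggregation rule above can be sketched as follows (status names and the method signature are assumptions for illustration, not the actual clouddriver on-demand API):

```java
import java.util.List;

// Illustrative sketch only: status names and signatures are assumed,
// not the actual clouddriver OnDemand API.
public class OnDemandStatusAggregator {

  enum Status { SUCCESSFUL, PENDING }

  // PENDING wins if ANY agent reported pending work, instead of simply
  // echoing whatever the last agent happened to return.
  static Status aggregate(List<Status> perAgentStatuses) {
    for (Status status : perAgentStatuses) {
      if (status == Status.PENDING) {
        return Status.PENDING;
      }
    }
    return Status.SUCCESSFUL;
  }

  public static void main(String[] args) {
    // The last agent succeeded, but an earlier one is still pending.
    Status result = aggregate(List.of(Status.PENDING, Status.SUCCESSFUL));
    System.out.println(result); // PENDING
  }
}
```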
This is a significant performance improvement for large types.
Similar to #2666 but this time behind a `caching.redis.treatRelationshipsAsSet` flag.
A Titus per-account flag, `splitCachingEnabled: true`, turns this on. Defaults to false (off).
This appears to have made some difference locally.
…2681) Also updated the `ClusteredAgentScheduler` to compare `getAgentType()` rather than `agent.class.simpleName` when determining whether the agent should be scheduled.
`delegated.internalPool` vs `internalPool`. All instances of `JedisPool` should be `InstrumentedJedisPool`, so there should be no need to worry about the status of `internalPool`.
* fix(titus): support scenario where there are no targetgroupservergroupproviders
* fix(titus): make getting server groups for target groups more resilient to null data
The LUA call was returning a Long, not an Integer, so deleteLock was always returning false. Previously we ignored the value; now we use it to drive a logging message.
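This class of bug is easy to reproduce: Jedis returns integer replies from Redis EVAL as `Long`, and in Java `Long.valueOf(1L).equals(1)` is false because the literal autoboxes to `Integer`. A minimal sketch (method names are illustrative, not the actual clouddriver code):

```java
// Illustrative sketch of the bug class described above: Redis EVAL returns
// integer replies as Long, so an Integer comparison silently always fails.
public class LuaResultCheck {

  // Buggy: equals against an Integer literal; Long(1).equals(Integer(1)) is false.
  static boolean deleteSucceededBuggy(Object scriptResult) {
    return scriptResult.equals(1);
  }

  // Fixed: compare numerically, tolerating the Long reply type.
  static boolean deleteSucceeded(Object scriptResult) {
    return scriptResult instanceof Number
        && ((Number) scriptResult).longValue() == 1L;
  }

  public static void main(String[] args) {
    Object redisReply = 1L; // integer replies from EVAL arrive as Long
    System.out.println(deleteSucceededBuggy(redisReply)); // false: the bug
    System.out.println(deleteSucceeded(redisReply));      // true
  }
}
```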
Add support for NFS volumes.
Added to make cleanup of versioned resources simpler
…ups in targetGroupServerGroupProvider (#2689)
We prefer small, well tested pull requests.
Please refer to Contributing to Spinnaker.
When filling out a pull request, please consider the following:
Note that we are unlikely to accept pull requests that add features without prior discussion. The best way to propose a feature is to open an issue first and discuss your ideas there before implementing them.