
sync the fork #1

Merged
merged 90 commits into from
Jun 5, 2018

Conversation

guoyongzhang
Collaborator
We prefer small, well tested pull requests.

Please refer to Contributing to Spinnaker.

When filling out a pull request, please consider the following:

  • Follow the commit message conventions found here.
  • Provide a descriptive summary for your changes.
  • If it fixes a bug or resolves a feature request, be sure to link to that issue.
  • Add inline code comments to changes that might not be obvious.
  • Squash your commits as you keep adding changes.
  • Add a comment to @spinnaker/reviewers for review if your issue has been outstanding for more than 3 days.

Note that we are unlikely to accept pull requests that add features without prior discussion. The best way to propose a feature is to open an issue first and discuss your ideas there before implementing them.

PerGon and others added 30 commits May 4, 2018 09:09
…#2602)

The new `minimalOnDemand` feature flag has been active for a week or so
and looks good.

This removes the legacy behavior.
…cycles (#2599)

Given that the cardinality of ON_DEMAND is quite small, it is _possibly_
more efficient to fetch the ON_DEMAND records for a given account +
region and perform an in-memory filter.

Previously we were doing an mget with many thousands of keys (one per
server group) when only a small number (dozens?) would ever exist in the
set.
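A minimal sketch of the difference, assuming a Jedis-style client; the key names and record layout here are illustrative, not clouddriver's actual Redis schema:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

import redis.clients.jedis.Jedis;

class OnDemandFetchSketch {
  /**
   * Fetch the whole (small) ON_DEMAND member set once and filter it in
   * memory, instead of issuing an mget across thousands of server-group
   * keys that mostly miss.
   */
  List<String> onDemandKeysFor(Jedis jedis, String account, String region) {
    Set<String> members = jedis.smembers("onDemand:members"); // hypothetical key
    String needle = ":" + account + ":" + region + ":";
    return members.stream()
        .filter(key -> key.contains(needle))
        .collect(Collectors.toList());
  }
}
```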
Effectively this PR will publish `v4` as `v3`.
persists the decision to not create an applicationDefaultSecurityGroup in a Titus attribute
Getting this out of the way as I may have a subsequent need to make use
of a `default` interface method.
This PR introduces a new administrative API supporting reconciliation of
entity tags against the current cached set of server groups.

Entity Tags that reference a non-existent server group will be removed
from elastic search.

A subsequent effort will provide a means to perform reconciliation
against `front50`.

The API is exposed directly on `clouddriver`:

```
curl -X POST "localhost:8101/admin/tags/reconcile?cloudProvider=aws&account=test&region=us-west-2"
```
Avoid a circular dependency between `ElasticSearchEntityTagsProvider`
and `ElasticSearchEntityTagsReconciler`.

The latter has a dependency on `CacheView` which breaks when the
`launchFailureNotificationAgentProvider` is enabled.
CatsSearchProvider now only searches against
the specific caches for providers that host a
caching agent of the desired type.

Adds an existingIds method to the Cache interface to
verify that key hits are still existing cache items
(previously each object was fully loaded just to
verify existence); see the sketch below.

Better search globbing if the SearchableProvider
defines a KeyParser.

Makes TitusCachingProvider a SearchableProvider.
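The existingIds addition above, sketched; the actual Cache interface lives in clouddriver's cats module and may differ in naming and signature:

```java
import java.util.Collection;

interface Cache {
  // Of the given identifiers, return only those that still exist in the
  // cache, without deserializing the full cache items (previously each
  // item had to be fully loaded just to prove it existed).
  Collection<String> existingIdentifiers(String type, Collection<String> identifiers);
}
```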
removes the existingIdentifiers call on the in-memory instance cache

as this cache is loaded per clouddriver instance, it can balloon out to a
large number of exists calls
adds a gitlab artifact provider. follows most of the same logic as the
github provider. relies on the gitlab raw file api.
this commit fixes two issues with the s3 artifact provider. first, it
adds support for configuring an s3 region. this is in line with how we
already handle it in front50, allowing for usage of buckets in separate
regions. second, when using `apiEndpoint` we now set
`pathStyleAccessEnabled`, which enables support for things like Minio to
act as a provider.
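A minimal sketch of how such a client could be wired up with the AWS SDK for Java; the factory class and parameter handling are illustrative, not the provider's actual code:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

class S3ArtifactClientSketch {
  static AmazonS3 build(String apiEndpoint, String region) {
    AmazonS3ClientBuilder builder = AmazonS3ClientBuilder.standard();
    if (apiEndpoint != null && !apiEndpoint.isEmpty()) {
      // Custom endpoint (e.g. Minio): path-style access is needed because
      // virtual-host-style bucket addressing won't resolve against it.
      builder.withEndpointConfiguration(
          new AwsClientBuilder.EndpointConfiguration(apiEndpoint, region));
      builder.withPathStyleAccessEnabled(true);
    } else if (region != null) {
      builder.withRegion(region);
    }
    return builder.build();
  }
}
```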
This was broken in a recent change.
…2629)

When deploying updates to a HorizontalPodAutoscaler, versioning them
will create multiple instances of the HPA, all of which still try to
target/autoscale the scaleTargetRef. Since I don't believe there are
other resource types that refer to an HPA, it is not clear that this
needs to be versioned like ConfigMaps or Secrets.
ezimanyi and others added 29 commits May 25, 2018 13:13
When multiple caching agents handle an on-demand refresh, the return
status is currently based only on the result of the last agent. Instead,
we should return a PENDING result if *any* of the caching agents find
results, rather than only if the last one does.
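A sketch of the aggregation logic; OnDemandAgent and the status values here are simplified stand-ins for clouddriver's actual types:

```java
import java.util.Collection;

class OnDemandRefreshSketch {
  enum Status { PENDING, SUCCESSFUL }

  interface OnDemandAgent {
    Status handle(String key);
  }

  /** Return PENDING if *any* agent found results, not just the last one. */
  Status refreshAll(Collection<OnDemandAgent> agents, String key) {
    boolean anyPending = false;
    for (OnDemandAgent agent : agents) {
      if (agent.handle(key) == Status.PENDING) {
        anyPending = true; // don't let a later agent's result overwrite this
      }
    }
    return anyPending ? Status.PENDING : Status.SUCCESSFUL;
  }
}
```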
This is a significant performance improvement for large types.
Seeing a couple of oddities and want to ensure they're unrelated
(or related) to this change.
Similar to #2666 but this time behind a
`caching.redis.treatRelationshipsAsSet` flag.
Titus per-account flag `splitCachingEnabled: true` turns this on. Defaults to false (off).
This appears to have made some difference locally.
…2681)

Also updated the `ClusteredAgentScheduler` to compare `getAgentType()`
rather than `agent.class.simpleName` when determining whether the
agent should be scheduled.
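The comparison change, sketched; the Agent interface and scheduling details here are simplified stand-ins:

```java
import java.util.Set;

class AgentSchedulingSketch {
  interface Agent {
    String getAgentType();
  }

  boolean alreadyScheduled(Set<String> activeAgentTypes, Agent agent) {
    // Compare the full agent type (e.g. "account/region/SomeCachingAgent")
    // rather than agent.getClass().getSimpleName(), which collides when
    // many agent instances share the same class.
    return activeAgentTypes.contains(agent.getAgentType());
  }
}
```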
`delegated.internalPool` vs `internalPool`

All instances of `JedisPool` should be `InstrumentedJedisPool`, so there
should be no need to worry about the status of `internalPool`.
* fix(titus): support scenario where there are no targetgroupservergroupproviders

* fix(titus): make getting server groups for target groups more resilient to null data
The Lua call was returning a Long, not an Integer, so deleteLock
was always returning false.

Previously we ignored the value; now we use it to drive
a logging message.
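A minimal sketch of this bug class, assuming a Jedis-style client; the script and method names are illustrative:

```java
import redis.clients.jedis.Jedis;

class DeleteLockSketch {
  private static final String DELETE_IF_HELD =
      "if redis.call('get', KEYS[1]) == ARGV[1] then"
          + " return redis.call('del', KEYS[1]) else return 0 end";

  /** Jedis eval returns a Long; comparing it to an Integer is always false. */
  boolean deleteLock(Jedis jedis, String key, String holder) {
    Object result = jedis.eval(DELETE_IF_HELD, 1, key, holder);
    boolean deleted = Long.valueOf(1L).equals(result); // not Integer.valueOf(1)
    if (!deleted) {
      System.out.println("lock " + key + " not deleted (result=" + result + ")");
    }
    return deleted;
  }
}
```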
Added to make cleanup of versioned resources simpler
@guoyongzhang guoyongzhang merged commit f4d09ce into markxnelson:master Jun 5, 2018