
Add cache at provider level #3402

Closed
wants to merge 6 commits

Conversation

@tjamet (Contributor) commented Feb 13, 2023

Description

In the current implementation, DNS providers are called to list all records on every loop. This is expensive in terms of the number of requests to the provider and may result in being rate limited, as reported in #1293 and #3397.

In our case, we have approximately 20,000 records in our AWS Hosted Zone. The ListResourceRecordSets API call returns at most 300 items per call, so every sync period requires about 67 API calls (20,000 / 300) per external-dns deployment.

With this change, we introduce an optional, generic caching mechanism at the provider level that re-uses the latest known list of records for a given amount of time.

This avoids expensive provider calls to list all records for each object modification that does not change the actual records (annotations, statuses, ingress routing, ...). A sketch of the wrapper is shown after the trade-off list below.

This introduces two trade-offs:

  1. Any changes or corruption made directly on the provider side will take longer to detect and resolve, up to the cache duration.

  2. Any conflicting records injected into the DNS provider during the cache validity (for example, by a different external-dns instance) will cause the first iteration of the next reconcile loop to fail, adding a delay until the next retry.
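
For context, here is a minimal sketch of such a wrapper. It is reconstructed from the snippets quoted in the review comments further down, not copied from the actual diff; the RefreshDelay field name, the needRefresh helper, and the cache invalidation in ApplyChanges are assumptions made for illustration.

```go
package provider

import (
	"context"
	"time"

	log "github.com/sirupsen/logrus"

	"sigs.k8s.io/external-dns/endpoint"
	"sigs.k8s.io/external-dns/plan"
)

// CachedProvider wraps any Provider and re-uses the last known record list
// until the configured refresh delay has elapsed.
type CachedProvider struct {
	Provider
	RefreshDelay time.Duration // assumed name for the configured cache time

	lastRead time.Time
	cache    []*endpoint.Endpoint
	err      error
}

// Records returns the cached record list while it is still fresh and asks
// the wrapped provider for a new list otherwise.
func (c *CachedProvider) Records(ctx context.Context) ([]*endpoint.Endpoint, error) {
	if c.needRefresh() {
		log.Info("Records cache provider: refreshing records list cache")
		c.cache, c.err = c.Provider.Records(ctx)
		if c.err != nil {
			log.Errorf("Records cache provider: list records failed: %v", c.err)
		}
		c.lastRead = time.Now()
		// The actual diff also increments a Prometheus counter here
		// (cachedRecordsCallsTotal) to distinguish cache hits from misses.
	} else {
		log.Info("Records cache provider: using records list from cache")
	}
	return c.cache, c.err
}

// ApplyChanges is assumed to drop the cache so the next reconcile loop sees
// the records that were just written (this part is not shown in the quoted
// snippets).
func (c *CachedProvider) ApplyChanges(ctx context.Context, changes *plan.Changes) error {
	c.cache = nil
	return c.Provider.ApplyChanges(ctx, changes)
}

// needRefresh reports whether the cache is empty or older than RefreshDelay.
func (c *CachedProvider) needRefresh() bool {
	return c.cache == nil || time.Since(c.lastRead) > c.RefreshDelay
}
```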

Checklist

  • Unit tests updated
  • End user documentation updated

@k8s-ci-robot k8s-ci-robot added the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Feb 13, 2023
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 13, 2023
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. labels Feb 13, 2023
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 25, 2023
@szuecs (Contributor) commented May 15, 2023

@tjamet do you want to rebase it?
Code looks good to me

@szuecs (Contributor) commented May 15, 2023

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label May 15, 2023
Review comments on the diff:

log.Info("Records cache provider: refreshing records list cache")
c.cache, c.err = c.Provider.Records(ctx)
if c.err != nil {
log.Errorf("Records cache provider: list records failed: %v", c.err)
Contributor: Shouldn't this both log and return an error?

c.lastRead = time.Now()
cachedRecordsCallsTotal.WithLabelValues("false").Inc()
} else {
log.Info("Records cache provider: using records list from cache")
Contributor: This seems a bit chatty. Perhaps debug level would be better?

func (c *CachedProvider) Records(ctx context.Context) ([]*endpoint.Endpoint, error) {
if c.needRefresh() {
log.Info("Records cache provider: refreshing records list cache")
c.cache, c.err = c.Provider.Records(ctx)
Contributor: There's actually no reason to store err in the CachedProvider. If this call returns a non-nil error, then it should just set c.cache = nil.

@maiconbaum

This would be a really useful feature.

@pabloajz commented Jun 1, 2023

Really looking forward to having this feature.

@johngmyers (Contributor)

The txt registry has a similar cache. It would make sense to pull this to a separate layer above the registry.

@tjamet (Contributor, Author) commented Sep 7, 2023

Hi!
Sorry, I missed your comment.
I will rebase ASAP!

Thibault Jamet and others added 5 commits September 8, 2023 13:49
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign seanmalloy for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 8, 2023
@tjamet (Contributor, Author) commented Sep 8, 2023

> @tjamet do you want to rebase it? Code looks good to me

@szuecs I have rebased the code atop the head of master.

> The txt registry has a similar cache. It would make sense to pull this to a separate layer above the registry.

@johngmyers Thanks for the input.
The txt registry does indeed have a cache mechanism. Here, the proposal is to cache at the lower level rather than at a higher one, to increase consistency: the actual record provider is injected into the TXT registry as well as into any other registry, such as DynamoDB.

Caching at the lower level increases the cache hit rate and further reduces the calls to the service provider. If we cached at a higher level, I sense we would need to either implement it in every registry (TXT, DynamoDB, AWS-SD, ...) or significantly refactor how the plan is calculated so the cache could be managed above the registry level. Currently, the main logic of filtering owned domains is executed in each registry, which means that, from my trials, we would invalidate the cache almost every time external-dns is used concurrently, either with other domains or in multiple clusters.

That said, I do tend to see the txt cache as redundant here, and we could deprecate or remove it. Happy to do so if we align on this, either in this PR or in another one.
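
To make the layering argument concrete, here is a hypothetical wiring sketch (added for illustration, not code from this PR). It builds on the CachedProvider type sketched in the description above and assumes the same package and imports; maybeCacheProvider is an invented helper name, and the zero-value check is one possible way to keep the cache opt-in.

```go
// maybeCacheProvider wraps any concrete provider before it is handed to the
// configured registry (TXT, DynamoDB, AWS-SD, ...), so every registry reads
// through the same cache without knowing about it.
func maybeCacheProvider(p Provider, refresh time.Duration) Provider {
	if refresh <= 0 {
		// Caching disabled: registries talk to the provider directly.
		return p
	}
	return &CachedProvider{Provider: p, RefreshDelay: refresh}
}
```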

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 15, 2023
@k8s-ci-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot (Contributor)

@tjamet: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-external-dns-licensecheck
Commit: 32c67df
Required: true
Rerun command: /test pull-external-dns-licensecheck

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@mloiseleur (Contributor)

@tjamet Do you think you can rebase this PR?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 15, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 15, 2024
@oferz-everc

looking forward to this feature.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@oferz-everc

@tjamet - Any chance to revisit this PR? Is there anything I can do to assist?

@tjamet (Contributor, Author) commented Jul 8, 2024

Hi!
I have just rebased this PR to include latest changes from master.

Let me know if there is anything else I should take a look at.

/reopen

@k8s-ci-robot (Contributor)

@tjamet: Failed to re-open PR: state cannot be changed. The add-provider-cache branch was force-pushed or recreated.

In response to this:

Hi!
I have just rebased this PR to include latest changes from master.

Let me know if there is anything else I should take a look at.

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@tjamet tjamet mentioned this pull request Jul 8, 2024