
AWS DescribeAutoScalingGroups requests too aggressive - API limits reached #422

Merged
mwielgus merged 2 commits into kubernetes:master on Nov 1, 2017

Conversation

mmerrill3

Fix for issue #252, where excessive API calls to AWS were caused by goroutines leaked by the polling autoscaler.
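For context, here is a minimal, self-contained Go sketch of the leak pattern being described. The type and function names are illustrative, not the actual cluster-autoscaler code: every time the registry is rebuilt, a new polling goroutine starts and the old one keeps calling the AWS API.

package main

import (
	"fmt"
	"time"
)

// asgRegistry stands in for the ASG cache; illustrative only.
type asgRegistry struct{ id int }

// regenerateCache represents the periodic DescribeAutoScalingGroups calls.
func (r *asgRegistry) regenerateCache() error {
	fmt.Printf("registry %d: DescribeAutoScalingGroups\n", r.id)
	return nil
}

// newRegistry starts a polling goroutine with no stop channel. If the registry
// is ever rebuilt, the old goroutine keeps running and keeps hitting the API.
func newRegistry(id int) *asgRegistry {
	r := &asgRegistry{id: id}
	go func() {
		for {
			_ = r.regenerateCache()
			time.Sleep(time.Second) // time.Hour in the real code
		}
	}()
	return r
}

func main() {
	for i := 0; i < 3; i++ {
		_ = newRegistry(i) // three rebuilds -> three concurrent polling loops
	}
	time.Sleep(3 * time.Second) // all three loops are still polling
}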

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 25, 2017
mmerrill3 (Author) commented Oct 26, 2017

On a side note: I have a Mac, and it seems that even if I set GOOS to linux, I am having issues building/testing locally using master as my baseline.

See below:

mmerrillmbp:cluster-autoscaler mmerrill$ go version
go version go1.8.3 darwin/amd64
mmerrillmbp:cluster-autoscaler mmerrill$
mmerrillmbp:cluster-autoscaler mmerrill$
mmerrillmbp:cluster-autoscaler mmerrill$
mmerrillmbp:cluster-autoscaler mmerrill$
mmerrillmbp:cluster-autoscaler mmerrill$ make test-unit
rm -f cluster-autoscaler
go get github.com/tools/godep
GOOS=linux godep go build ./...
vendor/github.com/google/cadvisor/container/crio/factory.go:24:2: no buildable Go source files in /Users/mmerrill/go/src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/google/cadvisor/container/libcontainer
godep: go exit status 1
make: *** [build] Error 1

Update: I see #325, so this is no longer an issue.

MaciekPytel (Contributor)
@mmerrill3 Can you squash your commits and add a comment once this is ready for review? Thanks.

@mmerrill3 mmerrill3 force-pushed the master branch 5 times, most recently from 29f2d93 to c7cafdf Compare October 28, 2017 11:58
mmerrill3 (Author)
@MaciekPytel, the commits are squashed and this is ready for a review.

@@ -57,6 +57,9 @@ type CloudProvider interface {
// GetResourceLimiter returns struct containing limits (max, min) for resources (cores, memory etc.).
GetResourceLimiter() (*ResourceLimiter, error)

// Close cleans up open resources before the cloud provider is destroyed, e.g. goroutines.
Close() error
Contributor:
nit: In case there's no difference, could we rename this Close to Cleanup so that there are fewer func names to remember? 😉

mmerrill3 (Author):
ok, will update this with the renamed interface method soon.

Contributor:
@mmerrill3 Thanks for catching it!

I'd appreciate it if you could also rebase this onto the latest master so that @MaciekPytel can hopefully merge it before it gets conflicted again 😃

mmerrill3 (Author):
I am handling rebasing now.
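To illustrate the interface change discussed above, here is a minimal sketch of how such a cleanup hook could be wired up, assuming the rename to Cleanup and a hypothetical awsCloudProvider that owns a stop channel; this is not the PR's exact code.

package main

import "fmt"

// CloudProvider mirrors the idea in the diff above: a hook called before the
// provider is destroyed (the Close method, renamed per the review discussion).
type CloudProvider interface {
	Cleanup() error
}

type awsCloudProvider struct {
	stopCh chan struct{} // closed by Cleanup; watched by polling goroutines
}

func (p *awsCloudProvider) Cleanup() error {
	close(p.stopCh) // signal background goroutines to exit
	return nil
}

func main() {
	var provider CloudProvider = &awsCloudProvider{stopCh: make(chan struct{})}
	// Whoever constructs the provider tears it down when done with it.
	if err := provider.Cleanup(); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}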

if err := registry.regenerateCache(); err != nil {
glog.Errorf("Error while regenerating Asg cache: %v", err)
}
}, time.Hour)
Contributor:
Ah, this does look bad. Thanks for the good catch!
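For illustration, a minimal sketch of a polling loop tied to the stop channel that such a Cleanup/Close method would close, so the goroutine exits instead of leaking. Names are assumed, not the exact diff in this PR, and the real code polls on a one-hour period.

package main

import (
	"fmt"
	"time"
)

type asgRegistry struct{}

func (r *asgRegistry) regenerateCache() error {
	fmt.Println("DescribeAutoScalingGroups")
	return nil
}

// pollUntilStopped refreshes the ASG cache on each tick and returns as soon as
// stopCh is closed, so the goroutine no longer outlives its provider.
func pollUntilStopped(registry *asgRegistry, period time.Duration, stopCh <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := registry.regenerateCache(); err != nil {
				fmt.Printf("Error while regenerating Asg cache: %v\n", err)
			}
		case <-stopCh:
			return // Cleanup/Close closed stopCh; stop polling
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	go pollUntilStopped(&asgRegistry{}, 100*time.Millisecond, stopCh)
	time.Sleep(350 * time.Millisecond)
	close(stopCh)                      // what the provider's Cleanup would do
	time.Sleep(100 * time.Millisecond) // give the goroutine time to exit
}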

mumoshu (Contributor) commented Oct 31, 2017

One nit, but LGTM. Thank you very much for trying to fix the issue 👍

mwielgus (Contributor) commented Nov 1, 2017

@mmerrill3 my attempt to resolve the conflicts in GitHub failed. Could you please resolve them yourself and push the new commit?

mwielgus (Contributor) commented Nov 1, 2017

# k8s.io/autoscaler/cluster-autoscaler/cloudprovider/gce
cloudprovider/gce/gce_cloud_provider.go:169: ambiguous selector gce.gceManager.GetResourceLimiter
cloudprovider/gce/gce_manager.go:109: duplicate method GetResourceLimiter
# k8s.io/autoscaler/cluster-autoscaler/cloudprovider/gce
cloudprovider/gce/gce_cloud_provider.go:169: ambiguous selector gce.gceManager.GetResourceLimiter
cloudprovider/gce/gce_manager.go:109: duplicate method GetResourceLimiter
godep: go exit status 2

And please squash the commits.

@mmerrill3 mmerrill3 force-pushed the master branch 2 times, most recently from c8d7ec9 to eae3fdf Compare November 1, 2017 16:18
mmerrill3 (Author)
@mwielgus, thanks for your patience. I have rebased my forked master branch, squashed the commits, and updated the interface method per feedback from @mumoshu.

mwielgus (Contributor) commented Nov 1, 2017

/lgtm
Thank you for fixing this! :)

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 1, 2017
@mwielgus mwielgus merged commit d25acce into kubernetes:master Nov 1, 2017
johanneswuerbach (Contributor)
Any chance of getting this cherry-picked into the next 1.0 patch release? Would this require a cherry-pick PR?

mwielgus (Contributor) commented Nov 1, 2017

@johanneswuerbach - please create a cherry-pick

johanneswuerbach (Contributor)
@mwielgus done #445
