
Needs better dampening? #3

Closed

markpundsack opened this issue Mar 21, 2014 · 1 comment

Comments

@markpundsack

With no traffic on an app running on my local box, it keeps adding, then removing, then adding workers.

15:03:48 web.1    | PumaAutoTune (105s): Potential memory leak. Reaping largest worker puma.largest_worker_memory_mb=106.13671875 puma.resource_ram_mb=424.1015625 puma.current_cluster_size=4
15:03:48 web.1    | PumaAutoTune (105s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=429.6328125 puma.current_cluster_size=4
15:03:49 web.1    | PumaAutoTune (105s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=450.79296875 puma.current_cluster_size=4
15:03:49 web.1    | PumaAutoTune (106s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=341.35546875 puma.current_cluster_size=4
15:03:59 web.1    | PumaAutoTune (116s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=228.1171875 puma.current_cluster_size=2
15:04:09 web.1    | PumaAutoTune (126s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=228.19921875 puma.current_cluster_size=2
15:04:12 web.1    | [74805] + Gemfile in context: /Users/mpundsack/Dropbox/Heroku/src/dashboard/Gemfile
15:04:12 web.1    | [72445] - Worker 0 (pid: 74805) booted, phase: 0
15:04:19 web.1    | PumaAutoTune (136s): Potential memory leak. Reaping largest worker puma.largest_worker_memory_mb=96.01171875 puma.resource_ram_mb=317.01171875 puma.current_cluster_size=3
15:04:19 web.1    | PumaAutoTune (136s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=321.58984375 puma.current_cluster_size=3
15:04:19 web.1    | PumaAutoTune (136s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=339.21875 puma.current_cluster_size=3
15:04:19 web.1    | PumaAutoTune (136s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=237.6640625 puma.current_cluster_size=3
15:04:30 web.1    | PumaAutoTune (146s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=132.56640625 puma.current_cluster_size=1
15:04:32 web.1    | [74923] + Gemfile in context: /Users/mpundsack/Dropbox/Heroku/src/dashboard/Gemfile
15:04:32 web.1    | [72445] - Worker 0 (pid: 74923) booted, phase: 0
15:04:40 web.1    | PumaAutoTune (157s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=221.703125 puma.current_cluster_size=2
15:04:42 web.1    | [74947] + Gemfile in context: /Users/mpundsack/Dropbox/Heroku/src/dashboard/Gemfile
15:04:42 web.1    | [72445] - Worker 1 (pid: 74947) booted, phase: 0
15:04:50 web.1    | PumaAutoTune (167s): Potential memory leak. Reaping largest worker puma.largest_worker_memory_mb=89.16796875 puma.resource_ram_mb=310.953125 puma.current_cluster_size=3
15:04:50 web.1    | PumaAutoTune (167s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=315.99609375 puma.current_cluster_size=3
15:04:50 web.1    | PumaAutoTune (167s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=329.609375 puma.current_cluster_size=3
15:04:50 web.1    | PumaAutoTune (167s): Cluster too large. Resizing to remove one worker puma.resource_ram_mb=221.98828125 puma.current_cluster_size=3
15:05:00 web.1    | PumaAutoTune (177s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=222.0625 puma.current_cluster_size=2
15:05:10 web.1    | PumaAutoTune (187s): Cluster too small. Resizing to add one more worker puma.resource_ram_mb=222.1796875 puma.current_cluster_size=2
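
The flapping above (shrink at roughly 430 MB of resource RAM, then grow again at roughly 228 MB ten seconds later) looks like a single threshold being crossed in both directions. A minimal sketch of what better dampening could look like, assuming a hysteresis band plus a cooldown between resizes; `DampenedScaler`, `headroom_mb`, and `cooldown_s` are hypothetical names for illustration, not part of puma_auto_tune:

```ruby
# Hypothetical sketch (not puma_auto_tune's actual code): dampen add/remove
# oscillation with hysteresis plus a cooldown, so the "grow" threshold sits
# well below the "shrink" threshold and no new resize is issued until the
# previous one has had time to take effect.
class DampenedScaler
  def initialize(max_ram_mb:, headroom_mb: 100, cooldown_s: 30)
    @shrink_above = max_ram_mb                 # remove a worker above this
    @grow_below   = max_ram_mb - headroom_mb   # add a worker only below this
    @cooldown_s   = cooldown_s
    @last_resize  = Time.at(0)
  end

  # Returns :grow, :shrink, or :hold for the current RAM measurement.
  def decide(current_ram_mb)
    return :hold if Time.now - @last_resize < @cooldown_s

    if current_ram_mb > @shrink_above
      @last_resize = Time.now
      :shrink
    elsif current_ram_mb < @grow_below
      @last_resize = Time.now
      :grow
    else
      :hold   # inside the dead band: do nothing
    end
  end
end

# Example: with a 512 MB budget, readings between 412 and 512 MB produce
# no resizes at all instead of flapping around a single threshold.
scaler = DampenedScaler.new(max_ram_mb: 512)
scaler.decide(450) # => :hold
```

With a band like this, memory readings that bounce around inside the dead zone produce no resizes, instead of triggering an add immediately after a remove.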
@markpundsack
Author

Also, why do I get multiple messages with the same cluster target size?
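
A hedged guess at one contributing factor: a new removal decision is issued on every check while the previous removal is still in flight, so the same current_cluster_size gets reported several times. A sketch of suppressing repeated requests for an unchanged target; `ResizeDeduper` is a made-up name for illustration, not the gem's API:

```ruby
# Hypothetical de-duplication sketch (not puma_auto_tune's API): only act,
# and only log, when the requested cluster size actually changes, so three
# back-to-back "resize to N" decisions collapse into a single message.
class ResizeDeduper
  def initialize
    @last_target = nil
  end

  def request(target_size)
    return false if target_size == @last_target  # same target: stay quiet
    @last_target = target_size
    puts "Resizing cluster to #{target_size} workers"
    true
  end
end

dedupe = ResizeDeduper.new
dedupe.request(3)  # logs once
dedupe.request(3)  # suppressed
dedupe.request(3)  # suppressed
```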
