Multiple workers cause multiple build servers and race conditions #94
First thing I'd like to understand is why people are even using multiple workers in a development environment. If this is something people do a lot for legitimate reasons, we probably need to build some sort of workaround to support it. If it's very specific and rare, then a warning in the README stating that the gem is designed to work in a single-worker environment should be enough.
We try to have the exact same environment in development as we have in production, therefore we run Thin as the web server in all environments.
@himynameisjonas I was under the impression that Thin doesn't have multiple workers, iirc.
@rondale-sc Oh, my mistake. I had multiple servers running at the same time (I used to start 4 instances with http://pow.cx). With just one server running, Thin works fine.
But I use the same server in dev/prod for the reason I wrote in my first comment; in this case I'm lucky we're running Thin and not Puma/Unicorn :)
@himynameisjonas Glad that worked out. This shouldn't be an issue with production deploys, because all ember-cli-rails does (if you precompile assets) is make the assets available to Rails, and it does that only once per Ember app, not per request. This is only a problem when multiple workers are being used in development, which is not the default for Puma (iirc). Closing this for now. In #86 @sevos had a solution for this listed in his comment; I think that should suffice. Adding that to the readme might be a good idea (what do you think, @sevos?). Cheers 🍻
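To illustrate the single-worker assumption discussed above, here is a minimal Puma config sketch that keeps development in single mode so only one ember-cli build server gets spawned. The file path, environment variable names, and worker counts are assumptions for illustration, not taken from this thread or from #86:

```ruby
# config/puma.rb -- a minimal sketch, assuming a standard Rails setup.
# In development, run Puma in single mode (workers 0) so ember-cli-rails
# only ever spawns one build server; in other environments, cluster as usual.
rails_env = ENV.fetch("RAILS_ENV", "development")

if rails_env == "development"
  workers 0  # single mode: one process, no races on the build output
else
  workers ENV.fetch("WEB_CONCURRENCY", 2).to_i
  preload_app!
end

threads_count = ENV.fetch("RAILS_MAX_THREADS", 5).to_i
threads threads_count, threads_count
```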
@rondale-sc @rwz We use Unicorn in development at thoughtbot and we've recently run into this issue.
See issues #89 & #85.
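For the Unicorn case mentioned above, a similar hedged sketch: limit Unicorn to a single worker in development so only one ember-cli build server runs. The file path and worker counts are assumptions, and this is not the workaround referenced from #86:

```ruby
# config/unicorn.rb -- a sketch, assuming a conventional Rails/Unicorn setup.
# One worker in development avoids multiple build servers racing on the
# same ember-cli output; keep the usual multi-worker setup elsewhere.
if ENV["RAILS_ENV"] == "development"
  worker_processes 1
else
  worker_processes Integer(ENV.fetch("UNICORN_WORKERS", 3))
  preload_app true
end

timeout 30
```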