Support for connections to multiple servers in cluster #39
I think this is a feature that is really needed. As far as I remember, someone started to work on it some time ago, or at least we discussed it, but I haven't received a pull request for it yet. This refactoring could probably be done in combination with the refactoring for adding different transports (see issue #33), as it requires some additions to the client.
I added support for multiple servers in my last commit (see link above). Currently only a simple round robin is done. You can use it by passing an array of servers, each with a hostname and port, to the client.
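A minimal sketch of such a configuration, assuming the client constructor accepts a `servers` array of `host`/`port` pairs (the exact key names and the `Elastica_Client` class name are assumptions, not confirmed by this thread):

```php
<?php
// Hypothetical sketch: configure the client with several nodes so that
// requests are distributed round-robin across them. The 'servers',
// 'host', and 'port' keys are assumed names.
$client = new Elastica_Client(array(
    'servers' => array(
        array('host' => '10.0.0.1', 'port' => 9200),
        array('host' => '10.0.0.2', 'port' => 9200),
        array('host' => '10.0.0.3', 'port' => 9200),
    ),
));
```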
Thanks so much for a quick turnaround!
Can you do some testing on it? There are probably still some issues I haven't thought of yet.
There is now also a blog entry about it: http://ruflin.com/en/elastica/165-using-elastica-with-multiple-elasticsearch-nodes I hope to improve it in the near future. I will close this issue now.
Both the Perl API (http://search.cpan.org/~drtech/ElasticSearch-0.37/lib/ElasticSearch.pm#new()) and the Ruby rubberband API (https://github.com/grantr/rubberband/blob/master/lib/elasticsearch/client/retrying_client.rb) support connections to multiple elasticsearch nodes in a cluster, including round-robin load balancing, automatic retrying, and automatic discovery of nodes after the initial connection. Would you consider adding similar features to Elastica? It would eliminate a single point of failure in a clustered environment in the event that the node the Elastica client is hitting goes down.
Thank you!
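The round-robin-with-retry behaviour being requested can be sketched in plain PHP, independent of any client library. This is an illustrative sketch only, not Elastica's (or rubberband's) actual implementation; the function name and callback shape are invented for the example:

```php
<?php
// Illustrative sketch of round-robin failover: cycle through the node
// list, and when a request to one node fails, fall through to the next.
function requestWithFailover(array $servers, $send, $maxRetries)
{
    static $next = 0;                     // round-robin cursor across calls
    $attempts = min($maxRetries, count($servers));
    $lastError = null;
    for ($i = 0; $i < $attempts; $i++) {
        $server = $servers[$next % count($servers)];
        $next++;                          // advance cursor on every attempt
        try {
            // $send would perform the actual HTTP request to the node,
            // e.g. to $server['host'] . ':' . $server['port']
            return call_user_func($send, $server);
        } catch (Exception $e) {
            $lastError = $e;              // node unreachable: try the next one
        }
    }
    throw new Exception('All nodes failed', 0, $lastError);
}
```

Auto discovery could then be layered on top by periodically refreshing `$servers` from the cluster's nodes API, but that is beyond this sketch.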