twemproxy work with redis-sentinel #324

Open
andyqzb wants to merge 36 commits into master
Changes from 26 commits

Commits (36)
a63cd7a
Merge pull request #1 from twitter/master
andyqzb Jan 7, 2014
47afc28
Merge pull request #2 from twitter/master
andyqzb Aug 22, 2014
49d62f7
Merge pull request #3 from twitter/master
andyqzb Oct 29, 2014
dcc2613
Merge pull request #4 from twitter/master
andyqzb Dec 2, 2014
1914e4a
add sentinels config to nc_conf
andyqzb Dec 14, 2014
a0ba197
add nc_sentinel.c which works with sentinel
andyqzb Jan 28, 2015
334b7e5
add mbuf_read_string and server_switch function
andyqzb Jan 29, 2015
9fde4c5
add conf_server member to struct server
andyqzb Jan 31, 2015
a6abc7e
modify sentinel_status to member of struct conn and fix some warning
andyqzb Feb 1, 2015
e0d8cf2
modify sentinel connect always no matter what preconnect is
andyqzb Feb 1, 2015
431889d
don't log fake request in req_log
andyqzb Feb 1, 2015
49c2d7b
add sentinel tag to struct server, add sentinel disconnect and deinit…
andyqzb Feb 5, 2015
b935675
fix sentinel sentinel disconnect bug, and add sentinel deinit check
andyqzb Feb 6, 2015
e27b51c
skip sentinel server's stats
andyqzb Feb 9, 2015
ca5311b
add sentinel reconnect after conn is closed
andyqzb Feb 15, 2015
36c5c2f
add sentinel conf, merge conn_get_sentinel to conn_get
andyqzb Feb 16, 2015
70cb70a
delete conn_get_sentinel prototype
andyqzb Feb 16, 2015
948ffb2
modify aligning of struct server
andyqzb Feb 27, 2015
2612dcf
add assert in server_init
andyqzb Mar 2, 2015
06303aa
add a sentinel address to pool sigma in example configure
andyqzb Mar 4, 2015
4faefb0
add sentinels description to README.md and redis.md
andyqzb Mar 31, 2015
95c87eb
merge twitter/master and resolve conflict
andyqzb Jul 12, 2015
b954260
update README.md
andyqzb Jul 12, 2015
b6b2c5d
log error msg when open temp conf failed
andyqzb Jul 18, 2015
be5c36d
add sentinel test cases
andyqzb Jul 20, 2015
9e77b58
copy sentinel binary in travis.sh
andyqzb Jul 20, 2015
a910ca2
update travis.sh to support sentinel
andyqzb Jul 21, 2015
131712c
support set keepalive interval parameter
andyqzb Jul 25, 2015
ea45e69
Merge pull request #5 from twitter/master
andyqzb Sep 16, 2016
fcc80f7
modify sentinel req to RESP protocol
andyqzb Sep 17, 2016
89ddda9
Merge branch 'twitter:master' into master
andyqzb Jul 2, 2021
081b327
Merge branch 'twitter:master' into master
andyqzb May 3, 2022
a62fa1d
modify sentinel tests to support python3 and fix merge conflict
andyqzb May 4, 2022
a7dc446
modify nc_sentinel.c to support multi version redis-sentinel
andyqzb May 4, 2022
c48722f
check the error code of conf_write
andyqzb May 4, 2022
ef3d8d0
continue to process left master info when proxy met a unknown server …
andyqzb Oct 9, 2022
19 changes: 18 additions & 1 deletion README.md
@@ -111,9 +111,10 @@ Twemproxy can be configured through a YAML file specified by the -c or --conf-fi
+ **server_retry_timeout**: The timeout value in msec to wait for before retrying on a temporarily ejected server, when auto_eject_host is set to true. Defaults to 30000 msec.
+ **server_failure_limit**: The number of consecutive failures on a server that would lead to it being temporarily ejected when auto_eject_host is set to true. Defaults to 2.
+ **servers**: A list of server address, port and weight (name:port:weight or ip:port:weight) for this server pool.
+ **sentinels**: A list of redis sentinel address, port and weight (name:port:weight or ip:port:weight) for this server pool. The weight of a sentinel is not used.


For example, the configuration file in [conf/nutcracker.yml](conf/nutcracker.yml), also shown below, configures 5 server pools with names - _alpha_, _beta_, _gamma_, _delta_ and omega. Clients that intend to send requests to one of the 10 servers in pool delta connect to port 22124 on 127.0.0.1. Clients that intend to send request to one of 2 servers in pool omega connect to unix path /tmp/gamma. Requests sent to pool alpha and omega have no timeout and might require timeout functionality to be implemented on the client side. On the other hand, requests sent to pool beta, gamma and delta timeout after 400 msec, 400 msec and 100 msec respectively when no response is received from the server. Of the 5 server pools, only pools alpha, gamma and delta are configured to use server ejection and hence are resilient to server failures. All the 5 server pools use ketama consistent hashing for key distribution with the key hasher for pools alpha, beta, gamma and delta set to fnv1a_64 while that for pool omega set to hsieh. Also only pool beta uses [nodes names](notes/recommendation.md#node-names-for-consistent-hashing) for consistent hashing, while pool alpha, gamma, delta and omega use 'host:port:weight' for consistent hashing. Finally, only pool alpha and beta can speak the redis protocol, while pool gamma, deta and omega speak memcached protocol.
For example, the configuration file in [conf/nutcracker.yml](conf/nutcracker.yml), also shown below, configures 6 server pools with names - _alpha_, _beta_, _gamma_, _delta_, _omega_ and _sigma_. Clients that intend to send requests to one of the 10 servers in pool delta connect to port 22124 on 127.0.0.1. Clients that intend to send requests to one of 2 servers in pool omega connect to unix path /tmp/gamma. Requests sent to pool alpha and omega have no timeout and might require timeout functionality to be implemented on the client side. On the other hand, requests sent to pool beta, gamma and delta timeout after 400 msec, 400 msec and 100 msec respectively when no response is received from the server. Of the 6 server pools, only pools alpha, gamma and delta are configured to use server ejection and hence are resilient to server failures. All 6 server pools use ketama consistent hashing for key distribution with the key hasher for pools alpha, beta, gamma, delta and sigma set to fnv1a_64 while that for pool omega set to hsieh. Also only pool beta uses [node names](notes/recommendation.md#node-names-for-consistent-hashing) for consistent hashing, while pool alpha, gamma, delta and omega use 'host:port:weight' for consistent hashing. Finally, pools alpha, beta and sigma can speak the redis protocol, while pools gamma, delta and omega speak the memcached protocol.

alpha:
listen: 127.0.0.1:22121
@@ -183,6 +184,22 @@ For example, the configuration file in [conf/nutcracker.yml](conf/nutcracker.yml
- 127.0.0.1:11214:100000
- 127.0.0.1:11215:1

sigma:
listen: 127.0.0.1:22125
hash: fnv1a_64
distribution: ketama
auto_eject_hosts: false
redis: true
server_retry_timeout: 2000
server_failure_limit: 1
servers:
- 127.0.0.1:6379:1 server1
- 127.0.0.1:6380:1 server2
sentinels:
- 127.0.0.1:26379:1
- 127.0.0.1:26380:1
- 127.0.0.1:26381:1

Finally, to make writing a syntactically correct configuration file easier, twemproxy provides a command-line argument -t or --test-conf that can be used to test the YAML configuration file for any syntax error.

## Observability
16 changes: 16 additions & 0 deletions conf/nutcracker.yml
@@ -65,3 +65,19 @@ omega:
servers:
- 127.0.0.1:11214:100000
- 127.0.0.1:11215:1

sigma:
listen: 127.0.0.1:22125
hash: fnv1a_64
distribution: ketama
auto_eject_hosts: false
redis: true
server_retry_timeout: 2000
server_failure_limit: 1
servers:
- 127.0.0.1:6379:1 server1
- 127.0.0.1:6380:1 server2
sentinels:
mishan commented:

nit: don't you need three sentinels for quorum in this example?

andyqzb (Contributor, Author) replied:

@mishan You can configure any number of sentinels, no matter how many you actually run, because twemproxy only needs to connect to one of them to fetch the redis addresses.

Of course, configuring all the sentinels you use is better: if some of them are dead, twemproxy can still connect to the ones that are alive.
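
For reference, the lookup boils down to a single sentinel command. Below is a minimal sketch in Python with redis-py, purely for illustration (the patch itself speaks RESP from C in nc_sentinel.c; the master name `server1` is just the one used in the example config):

```python
# Illustrative only: the query a sentinel client issues to find the
# current master address for a named server.
import redis

sentinel = redis.Redis(host="127.0.0.1", port=26379, decode_responses=True)

# SENTINEL get-master-addr-by-name <master-name> -> [ip, port]
ip, port = sentinel.execute_command("SENTINEL", "get-master-addr-by-name", "server1")
print(f"master for 'server1' is {ip}:{port}")
```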

andyqzb (Contributor, Author) added:

To avoid confusion, I added a sentinel address to the example configuration.

therealbill commented:

There is actually a good reason to connect to multiple Sentinels. If the Sentinel you are connected to is on the minority side of a network partition, and the original master is in that partition, it could report its "old" master. By querying all of the Sentinels you'd detect that and route to the majority master.

andyqzb (Contributor, Author) replied:

@therealbill In a partition, if twemproxy can only connect to part of the sentinels, and some report the new master while others report the old master, twemproxy can't tell which is right when neither side holds more than half of the instances. Besides, the design of sentinel doesn't work well under network partition, so I think taking network partition into consideration doesn't make much sense.

therealbill replied (via email):

On Mar 24, 2015, at 10:48 PM, andy [email protected] wrote:

In conf/nutcracker.yml #324 (comment):

> @therealbill In a partition, if twemproxy can only connect to part of the sentinels, and some report the new master while others report the old master, twemproxy can't tell which is right when neither side holds more than half of the instances. Also, the design of sentinel doesn't work well under network partition, so I don't think taking network partition into consideration is needed.

I think it is quite the contrary. If Twemproxy can't verify the current master, it should refuse to proxy commands for that server. If you've only got one sentinel you can communicate with, it would be wise to assume the current master is no longer the master, but minimally cautious to simply refuse the connections for that slot. That is preferable to having data go to multiple places. One of the big benefits of using Twemproxy+sentinel is the ability, if done correctly, to make the split-brain scenario something the clients are immune to. I've done it outside of Twemproxy so it is certainly doable - and not terribly complicated.

Conceptually the best place to run those sentinels will be the Twemproxy nodes themselves. However, that isn't always possible. As such, your network between Sentinels and Sentinels+Redis can partition while twemproxy could still see the whole picture. But regardless, by having the proxy not accept commands (or minimally not accept write commands) when you aren't getting results which are confirmable, you remove split-brain by nipping it at the very point it can happen - you stop proxying that node/slot.

The design of sentinel works fine in network partitions; it is the client which needs to be informed of changes, and Twemproxy is functionally the client in this scenario. Ideally, Twemproxy would subscribe to each sentinel's master-change event channel.[1] Then, on a master-change event it would update that slot's master setting and proxy to the new master. By only checking one sentinel you can't reliably handle the case - and you don't get the information as fast when you do get it. But by checking each of the sentinel servers you will either know you can't communicate and can fail into a deny-to-safe mode, or catch the event from the majority's side and know you need to reconfigure even though the other sentinel(s) haven't caught the change. Incidentally, there is a lag between the failover and the non-leader Sentinels, so even outside of a network split the proper route is to catch it from the leader.

Ideally you want these actions to be idempotent as well. In other words, if you get a change notification multiple times, or from the current poll, and the "new-master" is the same as the last-updated master, don't make changes. I suspect you've already done that, but sometimes people miss it, so I mention it solely out of that possibility. But always communicate with more than just one sentinel.
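
To make the event-driven, idempotent handling concrete, here is a rough sketch in Python with redis-py, not tied to the patch; the `current_master` map and the single-sentinel subscription are assumptions purely for brevity:

```python
# Hedged sketch: subscribe to a sentinel's +switch-master channel and only
# reconfigure when the reported master actually differs from what we hold.
import redis

sentinel = redis.Redis(host="127.0.0.1", port=26379, decode_responses=True)
current_master = {}  # master/pod name -> "ip:port" we currently proxy to

pubsub = sentinel.pubsub()
pubsub.subscribe("+switch-master")

for msg in pubsub.listen():
    if msg["type"] != "message":
        continue  # skip the subscribe confirmation
    # payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
    name, _old_ip, _old_port, new_ip, new_port = msg["data"].split()
    new_addr = f"{new_ip}:{new_port}"
    if current_master.get(name) == new_addr:
        continue  # duplicate notification -> idempotent, nothing to change
    current_master[name] = new_addr
    print(f"reconfigure pool member {name} -> {new_addr}")
```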

Perhaps to do this we could either:

  1. specify a quorum of sentinels to communicate with
  2. pull the list of known sentinels from sentinel and use that discovery to learn of and communicate with all of the other sentinels.

On the idea of not listing servers, perhaps we could do something like:

Where server name is used by twemproxy for slot identification, but podname is the name you pass to sentinel to get the master & slaves (remember, a sentinel can manage multiple Redis master/slave combos). Then you can do all discovery via sentinel.

On a related note I’m not seeing which code maps a given pod (“server” in Twemproxy parlance) to a given set of Sentinels. Each pod (server) may be managed by different sentinel pools. I’ve also not found where we allow you to use a pod-name other than the server name in the servers section. We should cover that as well. Perhaps something along the lines of (assuming we do not do full discovery via sentinel):
sentinel: true
servers:

Or (better, IMO):

sentinel: true
servers:

sentinels:
  IP:PORT:constellation1
  IP:PORT:constellation1
  IP:PORT:constellation1
  IP:PORT:constellation2
  IP:PORT:constellation2
  IP:PORT:constellation2

Where constellation = “the group of sentinels managing a given pod”. This latter option could also work well for full-discovery in sentinel. I also think it looks the most clean and readable. We might consider quoting pod-name. IIRC you can pretty much use any ascii character other than a space in a name in sentinels.

Just throwing out some ideas to hopefully express better what I find as missing or ways we can do it even better. To say I use sentinel heavily might be a bit of an understatement.

I see we currently have IP:port:weight for sentinels, but I hope we aren’t using weight at all - ideally we should not have it at all to avoid confusion.

It's been a long day, and I'll have more time over the next week or two to really dig into this. I apologize for not getting to it sooner, and am really glad to see someone taking the first steps in making what I maintain to be a significant improvement to Twemproxy! So thanks for getting this ball rolling. :D

Cheers,
Bill

  1. OK, Ideally I’d love to be able to fire twemproxy up unconfigured, then configure it entirely via API calls the same way you can w/Redis and Sentinel.

P.S. I’ll probably work on or find someone to work on adding Red Skull (http://redskull.io) connectivity as an alternative to Sentinel connectivity to simplify things - but not until I’ve added non-JSON API interfaces to Red Skull.

andyqzb (Contributor, Author) replied:

@therealbill Thanks for your suggestions.
For the first suggestion, that twemproxy should connect to all the sentinels: I think we shouldn't make twemproxy too complicated. It's just a client which supports sentinels. The problems under network partition should be solved by sentinel. For example, if the sentinels in the minority partition refuse to serve clients, twemproxy will keep trying until it successfully connects to a sentinel in the majority partition. Doing this in sentinel is simpler than doing it in twemproxy.
The podname suggestion is a good idea. I have thought about making the master name in sentinel a compound, like poolname-servername, so we can manage the servers of different pools in the same sentinel. But I don't think specifying a different sentinel pool for each server is needed. I don't think people want to configure one pool's servers into different sentinel pools.
The weight in the sentinel config is not used. I just wanted to reuse the server config loading code when loading the sentinel config. Modifying the code to drop the weight when loading sentinels would be better.

therealbill replied (via email):

On Mar 25, 2015, at 12:14 PM, andy [email protected] wrote:

In conf/nutcracker.yml #324 (comment):

> @therealbill Thanks for your suggestions.
> For the first suggestion, that twemproxy should connect to all the sentinels: I think we shouldn't make twemproxy too complicated. It's just a client which supports sentinels.

Not quite - it is a client of sentinel. Thus it should be following the guidelines at http://redis.io/topics/sentinel-clients

> The problems under network partition should be solved by sentinel. For example, if the sentinels in the minority partition refuse to serve clients, twemproxy will keep trying until it successfully connects to a sentinel in the majority partition.

How does it determine if the sentinel it connects to is in a minority? In order for it to do that it would have to compare master settings across the sentinel constellation. You can’t just ask sentinel if it is in a minority network partition. You also can’t query the “known sentinels” info for the pod because that isn’t updated except on new sentinel discovery and pod resets.

If a net split does occur and the original master is in the minority partition, twemproxy will still be connected to it. When sentinel initiates a fail-over normally, it sends a client kill to the old master (if available) to disconnect existing connections. However, in this scenario that won't disconnect twemproxy. Thus Twemproxy needs to be checking/monitoring for failovers and updating/reconnecting as appropriate. Anything short of that means that, to have reliable redundancy and to avoid or minimize the split-brain scenario, you have to use the current method of rewriting the config and restarting twemproxy.

It just occurred to me you might be trying to implement this as "just" a TCP proxy/load balancer to Sentinels. I surely hope that isn't the case, as you can't do it reliably. The way the Sentinel and client setup works, clients need direct access to every sentinel for the reasons I listed above regarding why Twemproxy would need to do the things I'm talking about. Clients need to connect to the sentinel and do more than just get the current master. They need PUBSUB and the ability to talk to each sentinel directly to ensure they aren't talking to a minority sentinel which contained the original master. Please tell me you're not trying to make Twemproxy a load balancer for Sentinel. :) It would be either as complicated or, more likely, more complicated than what I am talking about above. If you're using twemproxy you don't want to talk directly to the backend redis instances - that defeats the main purpose of twemproxy.

> Doing this in sentinel is simpler than doing it in twemproxy.

Except that Sentinel doesn't control the clients directly. The mechanism for Sentinel helping clients is either 1) having each client talk to and monitor every sentinel, 2) having a script on the sentinels which reaches out to reconfigure the clients, or 3) disconnecting clients that are still connected after a reconfiguration of the pod (such as a failover).

> The podname suggestion is a good idea. I have thought about making the master name in sentinel a compound, like poolname-servername, so we can manage the servers of different pools in the same sentinel. But specifying a different sentinel pool for each server is not needed. I don't think people want to configure one pool's servers into different sentinel pools.

While it isn't the pattern I generally recommend, especially in the case of twemproxy supporting sentinel, it is actually very common for people to have each pod run its own sentinels, and I consistently run into objections to doing it any other way. Furthermore, in environments where a tool such as Red Skull is used to provide a cluster of managed sentinel constellations it would be the case there as well. Sentinel does start running into issues when you monitor a large number of pods, so breaking that out across banks of sentinel instances is also an expected use case. Not supporting it backs a potential user into a corner where you can increase the time it takes to fail over. Given that Twemproxy is by design set up to proxy to essentially a bank of Redis instances, there is a higher than normal likelihood of multiple sentinels being used.

You can see some of these issues in redis/redis#2257 and redis/redis#2045

> The weight in the sentinel config is not used. I just wanted to reuse the server config loading code when loading the sentinel config. Modifying the code to drop the weight when loading sentinels would be better.

Ok, sounds reasonable. You should update the docs for that section to make it quite clear that weight isn't used for sentinels. A few lines in the docs can save many questions about why Twemproxy isn't respecting the weights assigned. ;)

Cheers,
Bill

andyqzb (Contributor, Author) replied:

@therealbill I have read the sentinel client guidelines. They say a client just needs to connect to one of the sentinels, which is what this patch does. But, as the guidelines note below, it won't work well under a network partition.

> Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However when the master is back again Sentinel will try to demote it to slave, triggering a new disconnection. The same reasoning applies to connecting to stale slaves that will get reconfigured to replicate with a different master.

I think twemproxy shouldn't do too many things about the distributed system. Sentinel is responsible for the cluster. So if you want the cluster to work well under network partition, the sentinels in the minority should refuse to serve clients, just like ZooKeeper or Chubby.
I have seen redis issue 2257. I think it's a suggestion to reduce the number of connections to the sentinels. Sentinel is OK when it has 2000+ connections (under the conditions in the issue); epoll can handle tens of thousands of connections easily. Maybe it's a problem when the cluster has thousands of redis instances.

therealbill replied (via email):

On Mar 30, 2015, at 12:00, andy [email protected] wrote:

In conf/nutcracker.yml:

> @therealbill I have read the sentinel client guidelines. They say a client just needs to connect to one of the sentinels, which is what this patch does. And, as the guidelines note below, that won't work well under a network partition.

But the clients of twemproxy can't use sentinel directly, because twemproxy is "in the way". Since that was written we have developed operational knowledge and wisdom that tells us how to do it correctly, and it really isn't that hard. At a bare minimum you connect to each sentinel and pick the master reported by a majority of sentinels. If you can't reach said quorum, don't proxy. You can do it in a relatively simple script in HAProxy, so arguing that it is overly complicated to do in Twemproxy doesn't hold, IMO. It really is the least you should do.
> Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However when the master is back again Sentinel will try to demote it to slave, triggering a new disconnection. The same reasoning applies to connecting to stale slaves that will get reconfigured to replicate with a different master.

> I think twemproxy shouldn't do too many things about the distributed system.

Then don't use it with sentinel. By placing Twemproxy in front and having sentinel hidden from the clients you're preventing the clients from doing what you describe. You take on the responsibility of protecting the clients when you prevent them from protecting themselves.

Doing the simple thing is not always the right thing. If you're going to make it impossible for the client to protect itself, you must take the precautions on its behalf. Twemproxy partitions data, and with the addition of sentinel support it thus becomes a core part of a distributed system. At that point the assertion that it shouldn't do the right thing with regard to proper behavior in distributed systems is rather moot.

> Sentinel is responsible for the cluster. So if you want the cluster to work well under network partition, the sentinels in the minority should refuse to serve clients, just like ZooKeeper or Chubby.

Again, how exactly do you think the clients can learn that the sentinel they are talking to is in a minority partition? If you're going to propose they can, you have to show how. It would also be how sentinel would do it. There is one way: ask all the sentinels you can connect to, and if you can't get a majority of the known sentinels to agree, stop proxying the connections.
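
A sketch of that check, in Python with redis-py for illustration only; the sentinel list and the master name are assumptions taken from the example config, not from the patch:

```python
# Sketch of the "quorum of sentinels" check: ask every reachable sentinel for
# the master address and only proxy if a strict majority agrees.
from collections import Counter
import redis

SENTINELS = [("127.0.0.1", 26379), ("127.0.0.1", 26380), ("127.0.0.1", 26381)]

def majority_master(master_name):
    answers = []
    for host, port in SENTINELS:
        try:
            s = redis.Redis(host=host, port=port, socket_timeout=0.5,
                            decode_responses=True)
            ip, p = s.execute_command("SENTINEL", "get-master-addr-by-name",
                                      master_name)
            answers.append(f"{ip}:{p}")
        except redis.RedisError:
            continue  # an unreachable sentinel simply casts no vote
    if not answers:
        return None
    addr, votes = Counter(answers).most_common(1)[0]
    # require agreement from a strict majority of *all* known sentinels
    return addr if votes > len(SENTINELS) // 2 else None

master = majority_master("server1")
if master is None:
    print("no quorum -- stop proxying writes for this pod")
else:
    print(f"quorum master: {master}")
```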

You can wax on all day about what sentinel could or should do, but you're dealing with Twemproxy, not sentinel. I'm not sure you understand what sentinel is, as it doesn't serve clients; it is a lookup service.

When a network partition happens, sentinel has no way to know if a new master has been elected, and as such takes no action. It doesn't know whether every sentinel is isolated (thus no master change happens) or whether it alone is in the minority. And even then, it doesn't know who the new master is. Thus it can't know it is "in the minority" partition. Therefore it can't tell the clients to stop talking to the server, as they may still be talking to the right server.

But Twemproxy can figure it out. Any client can, until you put Twemproxy in front of it and prevent it from doing so.

> I have seen redis issue 2257. I think it's a suggestion to reduce the number of connections to the sentinels. Sentinel is OK when it has 2000+ connections (under the conditions in the issue); epoll can handle tens of thousands of connections easily.

The point of that issue, Andy, is to show that the condition you think is rare is actually common, and a non-trivial amount of the time it is required. This is just one of many such conditions I've seen. I've seen thousands of these setups; multiple sets of sentinels across a bank of pods is not a rare condition, nor is it going to become one anytime soon.

Cheers,
Bill

- 127.0.0.1:26379:1
- 127.0.0.1:26380:1
- 127.0.0.1:26381:1
24 changes: 24 additions & 0 deletions notes/redis.md
@@ -460,4 +460,28 @@
+ *MUST* set all redis with a same passwd, and all twemproxy with the same passwd
+ Length of password should less than 256

## redis-sentinel feature

+ You can configure sentinels for a pool with 'sentinels' to let twemproxy work with redis-sentinel:

sigma:
listen: 127.0.0.1:22125
hash: fnv1a_64
distribution: ketama
auto_eject_hosts: false
redis: true
server_retry_timeout: 2000
server_failure_limit: 1
servers:
- 127.0.0.1:6379:1 server1
- 127.0.0.1:6380:1 server2
sentinels:
- 127.0.0.1:26379:1
- 127.0.0.1:26380:1
- 127.0.0.1:26381:1

+ notice:
+ You should configure all the sentinels you use. Twemproxy will connect to the sentinels that are still alive when some are down
+ The weight of a sentinel is not used. Twemproxy keeps it only to reuse the server config loading code


1 change: 1 addition & 0 deletions src/Makefile.am
@@ -38,6 +38,7 @@ nutcracker_SOURCES = \
nc_connection.c nc_connection.h \
nc_client.c nc_client.h \
nc_server.c nc_server.h \
nc_sentinel.c nc_sentinel.h \
nc_proxy.c nc_proxy.h \
nc_message.c nc_message.h \
nc_request.c \