Twemproxy going OOM #553
Comments
@auror Do you use pipeline mode with the client?
@charsyam No, we don't.
What is the typical data size of an item for a get command?
The key size is 12 bytes and the value can be anywhere between 30 and 1024 bytes.
Observation: I observed high values in Recv-Q (from netstat) on the sockets from the application to twemproxy when twemproxy's RSS started growing. I wanted to see if an application restart would bring the memory down, but it stayed constant.
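For reference, the Recv-Q observation above can be reproduced with something along these lines; the port number is a placeholder for the proxy's actual listen port, not taken from this issue:

```sh
# On the twemproxy host, show Recv-Q / Send-Q for TCP connections on the proxy
# port (22121 here). A persistently large Recv-Q means twemproxy is not reading
# incoming client data fast enough.
netstat -tn | grep ':22121'
```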
@auror Thanks. I think 2.2K QPS * 16k mbuf is not too big. twemproxy expects the client to send requests sequentially; if the client keeps sending requests without waiting for responses, it can exhaust all the memory.
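One knob worth noting here: twemproxy buffers requests and responses in fixed-size mbuf chunks, 16 KB each by default, and the chunk size can be lowered at startup to shrink the per-request buffering footprint. A minimal sketch, assuming the config lives at conf/nutcracker.yml (path and value are illustrative):

```sh
# Start nutcracker with 512-byte mbufs instead of the 16 KB default; when many
# requests are buffered or outstanding, this greatly reduces memory per request.
nutcracker -c conf/nutcracker.yml -m 512
```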
Our application is the only one connected to this twemproxy instance, and AFAIK xmemcached doesn't pipeline requests internally.
@auror Oh, I had assumed your backend was Redis, but it's memcached.
It's actually Kyoto Tycoon, which also serves requests from clients that speak the memcached protocol. These Kyoto Tycoon instances run on remote hosts.
@auror Did you see any other performance degradation in Kyoto Tycoon?
@charsyam Performance degradation as in high response times? Yes, there has been a slight degradation in Kyoto Tycoon's performance recently; it's noticeable (though not large) at the 90th percentile. Does that impact twemproxy? There's a 30 ms timeout configured in twemproxy. Doesn't that help? Can you please elaborate?
Actually, twemproxy sends only one request at a time on each backend connection, so if the backend server returns responses late, twemproxy has to keep the other requests in its own buffers. That means it can use a lot of memory holding the clients' requests. But I don't know how Kyoto Tycoon works, so this is just a guess.
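For context, the waiting and buffering behavior described above is bounded by a few per-pool settings in the nutcracker config. A minimal sketch of the relevant keys, with a hypothetical pool name and illustrative values (not taken from this issue):

```yaml
# Hypothetical pool; only the timeout/ejection-related keys are shown.
pool_name:
  timeout: 400                 # msec to wait for a backend connection or response
                               # before the request fails with a timeout error;
                               # the default is to wait indefinitely
  auto_eject_hosts: true       # temporarily eject a backend that keeps failing
  server_failure_limit: 3      # consecutive failures before ejection
  server_retry_timeout: 30000  # msec to wait before retrying an ejected backend
```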
@charsyam Thanks. So when does twemproxy discard the outstanding buffered requests? If one of the hosts becomes unresponsive or starts timing out, is ejecting it the only way out?
Can you please have a look at my question above? Also, we've added more capacity and performance is back to normal, but we're still observing high memory consumption and the eventual death of the twemproxy process (though not as often as before).
@auror Sorry for the late reply.
No problem. So, how do we debug this? Do you need any additional info? Any clue or direction would help.
Hi,
We're running twemproxy in front of a set of memcached servers. Below is the obfuscated config:
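The reporter's actual config was not captured in this thread. As a rough illustration only, a memcached pool definition in nutcracker typically looks like the following; the pool name, addresses, and values are hypothetical:

```yaml
# Hypothetical pool definition; not the reporter's (obfuscated) config.
cache_pool:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  timeout: 30
  auto_eject_hosts: false
  servers:
   - 10.0.0.1:11211:1 server1
   - 10.0.0.2:11211:1 server2
```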
At some point twemproxy started using a very large amount of memory. We also saw connection timeouts around the same time:
close s: Connection timed out
in the debug logs. Memory consumption keeps growing until the kernel kills the nutcracker instance.
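To confirm that it is the kernel's OOM killer terminating the process (rather than a crash inside nutcracker), the kernel log can be checked; a rough sketch:

```sh
# Look for OOM-killer activity around the time the nutcracker process died.
dmesg | grep -iE 'out of memory|oom|killed process'
```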