Implement Round-Robin Load Balancing #72
The challenge in the current object model is finding somewhere to store this per-backend-set state. Health checks also throw this off, since they keep churning the list of available backends.
I have started writing an implementation for this.
I talked to @anurse about the design, and what we came up with was different from what I was originally thinking. My original idea was a cache, keyed by a signature of the endpoint collection, that stored the counter for a mod operation. Implementation-wise, would it make sense to have a feature collection on the backend, similar to HttpContext, so proxy code can stash/fetch state there as needed?
Yes, there will need to be some way to track state in the backend, and it will need to be thread-safe.
Should this just be a counter property on BackEndInfo, or should we have a dictionary? Do we have a thread-safe dictionary?
ConcurrentDictionary might work; we'll see if we need to go that far. We'll also need to account for the fact that the Backend objects are torn down and rebuilt if there's a config change, so any state you add to them is ephemeral. It might be OK to reset round-robin state on a config change; they're not that common. There's a type in this repo called AtomicCounter that would be useful for tracking a simple incrementing value. If we're not overly concerned about the endpoints list being unstable, then the counter might be all we need. All of that being said, round-robin isn't much better than random, so I'm not sure it's worth adding.
I don’t think a dictionary will be needed, just an atomic counter associated with the backend. I also think it’s totally reasonable to reset round-robin state when a backend is reloaded. It should be preserved across other config reloads if possible, though.
Sounds perfect
Think of it more as a price-of-admission feature. We can debate its value, but given its broad support across the rest of the proxy landscape, I doubt it's reasonable to skip it.
I can't remember where I heard it, but one of the things cloud customers expect is that when they pay for 5 VMs and put a load balancer in front of them, all 5 will carry roughly equal load.
The design question is: do we add this specific property to the backend configuration, or do we add "feature" support to the backend as a dictionary? If we did the latter, the dictionary could be maintained across reloads; if the name of the backend matches, it should be retained. To notify the features, the types could implement an INotifyConfigChange interface that, if implemented, would be called. The reason I am pushing this direction is that it feels like it would enable more extensibility scenarios, which should forever be in the back of our minds.
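To make the feature-dictionary idea concrete, here is a minimal sketch. The type names (`BackendFeatures`, `INotifyConfigChange`) are hypothetical and do not exist in the repo today; the sketch just shows a type-keyed, thread-safe feature bag plus the proposed config-change notification hook:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical interface: features that care about config reloads implement this.
public interface INotifyConfigChange
{
    void OnConfigChanged();
}

// Hypothetical per-backend feature bag, keyed by feature type (similar in spirit
// to HttpContext.Features). ConcurrentDictionary makes lookups thread-safe.
public sealed class BackendFeatures
{
    private readonly ConcurrentDictionary<Type, object> _features = new ConcurrentDictionary<Type, object>();

    // Fetch a feature, creating it on first use.
    public T GetOrAdd<T>(Func<T> factory) where T : class
        => (T)_features.GetOrAdd(typeof(T), _ => factory());

    // Called when this backend's config is reloaded; interested features are notified.
    public void NotifyConfigChanged()
    {
        foreach (var feature in _features.Values)
        {
            (feature as INotifyConfigChange)?.OnConfigChanged();
        }
    }
}
```

If the backend's name matches after a reload, the same `BackendFeatures` instance could be carried over to the rebuilt backend object, which is what would let round-robin state survive unrelated config changes.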
This is definitely a use case for round-robin I've heard. We can debate its merits, but people do have a desire to balance the load, even if no backends are actually overloaded.
Yep, agreed. One wrinkle is that
Load balance by taking the "next" backend in the list for each incoming request. We wrap around to the start of the list when we run out of backends.
Since requests will likely be highly concurrent, there's some complexity here in how the state is maintained. Also, since backends can change, we'll need to consider how this algorithm behaves in that case. Naively, this could be a simple atomic counter: the load balancer does an atomic increment-and-return, then takes the result modulo the number of backends to figure out which backend to use.
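The naive counter approach described above can be sketched in a few lines. This is illustrative only (the `RoundRobinState` name is made up, and the repo's own `AtomicCounter` type could stand in for the raw `Interlocked` call):

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical per-backend round-robin state. It is ephemeral: a config change
// that rebuilds the backend resets the counter, which the thread above deems acceptable.
public sealed class RoundRobinState
{
    private int _counter = -1;

    // Atomically advance the counter and map it onto the current endpoint list.
    public T PickNext<T>(IReadOnlyList<T> endpoints)
    {
        // Interlocked.Increment is the thread-safe increment-and-return;
        // masking with int.MaxValue keeps the index non-negative after overflow.
        int next = Interlocked.Increment(ref _counter) & int.MaxValue;
        return endpoints[next % endpoints.Count];
    }
}
```

One caveat the thread already hints at: if health checks shrink or grow the endpoint list between requests, the modulo mapping shifts, so distribution is only approximately even while the list is churning.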