Is your feature request related to a problem? Please describe.
The way adapters work, every message gets broadcast to all instances of the service. This represents a scalability problem, especially in scenarios with a lot of instances and relatively small rooms.
We use the redis adapter, and one of our services is very inefficient because of this mechanism. Using round numbers to simplify:
Let's say our room size is 10.
Each instance right now is handling 100 clients.
We have 100 instances of the service running.
We have a total of 1K rooms with clients connected at any given time.
If you assume the best-case scenario, where each client in an instance belongs to a different room, then on average 90% of the messages the instance gets from the adapter are discarded, because it has no clients in those rooms.
This is especially problematic because scaling the service up yields diminishing returns.
If you double the number of instances of your service, each one will handle half the client connections, but will still need to process every message for every room. Eventually (as in our case), you end up with a ton of instances, each with very few clients, where most of the work is just reading from redis and throwing the data away.
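The 90% figure falls out of the round numbers above. A quick sketch (assuming, as in the best case described, that every client on an instance is in a distinct room):

```typescript
// Round numbers from the scenario above (assumptions, not measurements).
const roomSize = 10;
const clientsPerInstance = 100;
const instances = 100;

// 100 instances * 100 clients / 10 clients per room = 1000 rooms total.
const totalRooms = (instances * clientsPerInstance) / roomSize; // 1000

// Best case: every client on an instance is in a different room, so the
// instance cares about at most `clientsPerInstance` rooms.
const roomsOfInterest = Math.min(clientsPerInstance, totalRooms); // 100

// With a single shared channel, every instance receives messages for all
// rooms; only messages for rooms it hosts are useful.
const usefulFraction = roomsOfInterest / totalRooms; // 0.1
const discardedFraction = 1 - usefulFraction;

console.log(discardedFraction); // 0.9 -> the 90% above
```

Doubling `instances` halves `clientsPerInstance` but leaves the per-instance message volume untouched, which is the diminishing-returns effect described above.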
Describe the solution you'd like
A way for adapters to only listen for messages of rooms for which they have clients connected.
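A rough sketch of what this could look like, using a hypothetical in-memory pub/sub in place of Redis (none of these class or method names are real socket.io APIs): subscribe to a room's channel when the first local client joins it, and unsubscribe when the last one leaves.

```typescript
// Minimal in-memory pub/sub standing in for Redis (illustrative only).
type Handler = (msg: string) => void;

class PubSub {
  private channels = new Map<string, Set<Handler>>();
  subscribe(ch: string, h: Handler): void {
    if (!this.channels.has(ch)) this.channels.set(ch, new Set());
    this.channels.get(ch)!.add(h);
  }
  unsubscribe(ch: string, h: Handler): void {
    this.channels.get(ch)?.delete(h);
  }
  publish(ch: string, msg: string): void {
    this.channels.get(ch)?.forEach((h) => h(msg));
  }
}

// Hypothetical adapter: one subscription per room with local clients.
class PerRoomAdapter {
  private localRooms = new Map<string, number>(); // room -> local client count
  received: string[] = [];
  private handler: Handler = (msg) => this.received.push(msg);

  constructor(private bus: PubSub, private prefix = "socket.io#room#") {}

  join(room: string): void {
    const count = this.localRooms.get(room) ?? 0;
    // First local client in this room: start listening to its channel.
    if (count === 0) this.bus.subscribe(this.prefix + room, this.handler);
    this.localRooms.set(room, count + 1);
  }

  leave(room: string): void {
    const count = this.localRooms.get(room) ?? 0;
    if (count <= 1) {
      // Last local client left: stop listening to this room's channel.
      this.localRooms.delete(room);
      this.bus.unsubscribe(this.prefix + room, this.handler);
    } else {
      this.localRooms.set(room, count - 1);
    }
  }
}
```

With this scheme an instance only ever receives traffic for rooms it actually hosts; the cost is one subscription per locally hosted room instead of one per instance, which is exactly the PUBLISH trade-off quoted below.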
Describe alternatives you've considered
For redis in particular, I've looked into reimplementing the adapter to do one subscription per room, but there is a trade-off. Per the Redis docs for PUBLISH:

> Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).
I understand why the default is what it is, based on that. I expect the most common use case to have far fewer instances, so a single subscription makes sense there.
But it would be nice to have a different mode where you trade a lot more subscriptions for more granular message broadcasting.
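To make the trade-off concrete with the round numbers from above (a sketch; M, the pattern-subscription count, is assumed to be zero in both modes):

```typescript
// PUBLISH cost is O(N + M): N subscribers on the channel, M patterns.
// Assume no pattern subscriptions (M = 0) and the round numbers above.
const instances = 100;
const roomSize = 10;

// Single shared channel: every instance is subscribed, so each publish
// fans out to all 100 of them.
const nSingleChannel = instances; // 100 deliveries per publish

// One channel per room: only instances hosting that room are subscribed.
// Worst case, each of the room's 10 members is on a distinct instance.
const nPerRoom = Math.min(roomSize, instances); // at most 10 deliveries

console.log(nSingleChannel / nPerRoom); // 10
```

In this scenario each publish does at most a tenth of the fan-out work, at the price of the server tracking up to one subscription per locally hosted room; since we scale this Redis instance on network usage, that fan-out reduction is the number that matters.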
We scale this particular redis instance due to network usage, not cpu or memory.
Another option is to do sticky sessions based on what room we expect the client to be in, but this breaks when the service scales up and down, as the slot for any given session may change once new instances are spun up.
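The failure mode is easy to see with a naive modulo placement (a toy sketch, not anything from socket.io; real sticky-session schemes vary, but any `hash % instanceCount` mapping shifts when the count changes):

```typescript
// Toy string hash (illustrative; any hash has the same problem here).
function hash(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

// Naive sticky placement: route a room to hash(room) % instanceCount.
const slot = (room: string, instances: number): number =>
  hash(room) % instances;

// The same room can land on a different instance once the count changes,
// so sessions pinned to the old slot break on scale-up.
const before = slot("room-42", 100);
const after = slot("room-42", 101);
console.log(before !== after); // true for this room and these counts
```

Schemes like consistent hashing reduce how many rooms move on a resize, but they don't eliminate reassignment entirely, which is why this alternative was discarded.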
Additional context
I checked the rest of the adapters, and it looks like they all share this behavior. That's why I'm opening the issue here rather than on the redis-adapter repo, as this might be something that needs to be addressed at the adapter definition / features level.
Interesting, I think I looked at the sharded adapter but discarded it when I saw it requires Redis in cluster mode (though that's not a deal breaker for our use case, and probably cheaper than the bandwidth 😆).
Out of curiosity:
As far as I understand, there is nothing about sharded pub/sub in Redis that those modes depend on, i.e. they could be backported to the non-sharded adapter as well, right?
Don't read that as a feature request; I'm just trying to figure out whether there is a constraint I'm not seeing, or if I'm simply below the threshold where moving to clustered Redis makes sense anyway.