
feat: Provide a way to stop cluster sync from moving an actor to another peer #627

Closed
CristianoBarone opened this issue Feb 19, 2025 · 7 comments · Fixed by #632

CristianoBarone commented Feb 19, 2025

Problem description
When using discovery, a peer may need one specific actor to run only on that peer. If the peer terminates or the cluster rebalances, that actor must not be moved to another peer. As far as the documentation shows, there is no such feature yet.

Solution description
Implement an actor.SpawnOption that tells the rebalancer never to move the spawned actor to another peer; something like actor.WithOutRebalancing().
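
For illustration only, a minimal sketch of how the proposed option might read at a spawn site. WithOutRebalancing is the name suggested above and does not exist in GoAkt; the import path, the Spawn signature, and the NewConfigActor constructor are likewise assumptions:

```go
package main

import (
	"context"
	"log"

	"github.com/tochemey/goakt/v3/actor" // assumed import path, for illustration only
)

func spawnPinnedActor(ctx context.Context, system actor.ActorSystem) {
	// Hypothetical: WithOutRebalancing is the option name proposed in this
	// issue, not an existing GoAkt API. It would tell the rebalancer to
	// never move "config-provider" off the peer that spawned it.
	pid, err := system.Spawn(ctx, "config-provider", NewConfigActor(),
		actor.WithOutRebalancing(),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = pid // the actor now lives, and stays, on this peer
}
```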

Alternative solutions
Implement an actor.ClusterConfig function that ignores specific kinds; something like actor.IgnoreKinds().


CristianoBarone added the enhancement and feature labels on Feb 19, 2025
Tochemey self-assigned this on Feb 19, 2025
Tochemey (Owner) commented

@CristianoBarone Thanks for the feature request. Much appreciated. I will take a look at this use case and hopefully add it in the coming weeks.

Tochemey (Owner) commented

@CristianoBarone Can you please explain the use case here?

CristianoBarone (Author) commented

> @CristianoBarone Can you please explain the use case here?

Say I have a system that manages a series of node flows. When a peer shuts down, I absolutely need its actors moved to avoid service disruption. Here, GoAkt is pretty solid.

The problem, however, arises when the system must rely on one actor to retrieve a configuration or expose an endpoint. There seems to be no way to ensure that this one specific actor won't get moved during rebalancing, so if one peer becomes overloaded, the system as a whole can no longer function properly.

A second problem stems from the same situation: one peer ends up with two actors handling configuration while another has none, or, when a peer is closed, another peer ends up with two actors serving the same purpose.

One of those actors would sit idle when it should either be doing the work on the other peer or not be running at all.

Tochemey (Owner) commented

@CristianoBarone Thank you for the use case. I now know what feature to add to handle it.

Tochemey (Owner) commented Feb 21, 2025

@CristianoBarone I have added another type of actor called a singleton actor. This type of actor has the following behaviors (see the sketch after this list):

  • Only a single instance of that actor type is created in the whole cluster. Singleton actors are created only when cluster mode is enabled.

  • A singleton actor is only created on the oldest node of the cluster.

  • When the oldest node of the cluster shuts down, the singleton actor is recreated on the next oldest node of the cluster.

  • A singleton actor is created with the actor system's default supervisor and directive. One cannot set a custom mailbox for a singleton actor.

  • They can only be accessed via their name (alias).
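
A minimal sketch of how spawning such a singleton might look, assuming a SpawnSingleton method on the actor system as described in the doc linked in the next comment; the import path and the NewConfigActor constructor are illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/tochemey/goakt/v3/actor" // assumed import path, for illustration only
)

func startClusterSingleton(ctx context.Context, system actor.ActorSystem) {
	// Assumed API: SpawnSingleton creates exactly one instance of
	// "config-provider" cluster-wide, hosted on the oldest node.
	// Singletons are addressed by name, so no PID is kept here.
	if err := system.SpawnSingleton(ctx, "config-provider", NewConfigActor()); err != nil {
		log.Fatal(err)
	}
}
```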

Tochemey (Owner) commented Feb 21, 2025

@CristianoBarone I have yet to cut a new release tag. You can try it with the latest pre-release: https://github.com/Tochemey/goakt/tree/v3.1.0-alpha.1.

There is a doc that explains the concept: https://tochemey.gitbook.io/goakt/features/cluster/cluster-singleton

Tochemey (Owner) commented

@CristianoBarone The latest release with your feature request is here: https://github.com/Tochemey/goakt/releases/tag/v3.1.0. Happy hacking 🎉
