Good morning!
I'm currently trying to deploy an etcd cluster on 3 nodes using colmena. It works great, except when handling different lifecycle stages.
To make it a bit more graphical, imagine this:
You have 3 clean, fresh nodes. You use the following Nix config to deploy the cluster:
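Something along these lines — a minimal sketch, not the original config; all hostnames, ports, and option values here are placeholders:

```nix
# Sketch of a colmena hive; node2/node3 would look like node1.
{
  defaults = { ... }: {
    services.etcd = {
      enable = true;
      initialClusterState = "new";   # the setting at the heart of issue 1
      initialCluster = [
        "node1=http://node1.example:2380"
        "node2=http://node2.example:2380"
        "node3=http://node3.example:2380"
      ];
      listenPeerUrls = [ "http://0.0.0.0:2380" ];
      listenClientUrls = [ "http://0.0.0.0:2379" ];
    };
  };

  node1 = { ... }: {
    deployment.targetHost = "node1.example";
    services.etcd.name = "node1";
    services.etcd.initialAdvertisePeerUrls = [ "http://node1.example:2380" ];
  };
  # node2 and node3 analogous
}
```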
You run this using `colmena apply`. You're happy with your shiny new etcd cluster.

Suddenly `node1` fails with a broken root disk. You replace it and install NixOS with a minimal config, and now you're trying to get your previous config deployed again.

First issue: your cluster isn't `new` anymore. `initialClusterState` needs to be changed to `existing`, otherwise the other 2 nodes won't accept the new `node1`. Changing that Nix config isn't a viable option; you don't want your teammates changing code to accommodate different lifecycle stages. My current "hot idea" is using NixOS specialisations: a default one for the regular runtime configuration (with `existing`, to replace failed nodes), and a bootstrap one for... bootstrapping. I just haven't figured out yet how to use them with colmena.

Second issue: even after setting
`initialClusterState = "existing"`, the other 2 nodes will reject the reinstalled first node. You need to remove the broken node from the cluster and add it again to make things work. This is definitely not an issue caused by Nix, but it highlights the problem very well: there's a lot of software out there that prefers runtime configuration changes through APIs, CLIs, and more. Declarative configuration? Nobody's got time for that.

This is not meant to be a rant. I'm looking for ideas, best practices, the "nix way" of doing things. Issue 1 is the bigger problem for me right now. The second issue can be worked around using scripts, defined procedures, maybe Ansible: "Before you reinstall that node, run the following commands on a functional cluster member: `etcdctl member remove ...`". A lot of documentation, less automation.
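For illustration, the issue-2 workaround could be scripted roughly like this. It's a sketch only: the endpoints are placeholders, and the assumed `etcdctl member list` output format (`ID, status, name, peerURLs, clientURLs, isLearner`) should be checked against your etcd version.

```shell
# Extract the member ID for a given node name from `etcdctl member list` output.
# $1 = captured `member list` output, $2 = node name to look up
member_id_of() {
  printf '%s\n' "$1" | awk -F', ' -v name="$2" '$3 == name { print $1 }'
}

# On a healthy member, BEFORE reinstalling node1 (endpoints are placeholders):
#   members=$(etcdctl --endpoints=http://node2:2379 member list)
#   etcdctl --endpoints=http://node2:2379 member remove "$(member_id_of "$members" node1)"
# After reinstalling, re-add node1 so the remaining members accept it again:
#   etcdctl --endpoints=http://node2:2379 member add node1 --peer-urls=http://node1:2380
```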
Someone please save me from more headache...
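P.S. To frame issue 1 a bit more concretely: on the NixOS side, the specialisation idea might look roughly like this (a sketch using the `specialisation` module option; how to get colmena to activate a named specialisation is exactly the part I haven't figured out):

```nix
{ lib, ... }:
{
  # Default system: safe for day-to-day operation and for replacing failed nodes.
  services.etcd.initialClusterState = "existing";

  # Switch into this specialisation only when bootstrapping a brand-new cluster,
  # e.g. via /run/current-system/specialisation/bootstrap/bin/switch-to-configuration switch
  specialisation.bootstrap.configuration = {
    services.etcd.initialClusterState = lib.mkForce "new";
  };
}
```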