WIP: add remote executor and tank roles #396
Conversation
```python
        )
        self.log.info(f"Finished unpacking config data for {service_name} to {destination_path}")

    def copy_file_to_pod(self, pod_name, src_path, dest_path):
```
I hate this function.

Also, I noticed while testing this out that a tonne of things are way easier via `kubectl exec -it warnet-tank-<index>`. This is what I was missing since we dropped compose; you can hop into the container with a single command.

It made me think, though: a tonne of our functionality could be re-written to use the k8s client object directly, and end up a lot neater...
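A minimal sketch of that direction, using the official `kubernetes` Python client (the pod name pattern and the `warnet` namespace here are assumptions):

```python
from kubernetes import client, config
from kubernetes.stream import stream

# Load credentials the same way kubectl does (from ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

def exec_in_tank(index: int, command: str, namespace: str = "warnet") -> str:
    """Rough equivalent of `kubectl exec warnet-tank-<index> -- sh -c '<command>'`."""
    return stream(
        v1.connect_get_namespaced_pod_exec,
        f"warnet-tank-{index}",
        namespace,
        command=["/bin/sh", "-c", command],
        stderr=True,
        stdin=False,
        stdout=True,
        tty=False,
    )

# e.g. print(exec_in_tank(0, "bitcoin-cli getblockcount"))
```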
You could also look into using the sidecar image I made for simln: https://hub.docker.com/r/pinheadmz/sidecar. One benefit of this is that it is paired with a shared volume, so if the container restarts, the data is still there.
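For illustration, a sketch of that pairing with the Python client (container names, the tank image and the mount path are placeholders, not the actual sidecar setup):

```python
from kubernetes import client

# Two containers in one pod sharing an emptyDir volume: an emptyDir
# outlives individual container restarts, so anything written under
# /data is still there after either container restarts.
shared = client.V1Volume(
    name="shared-data", empty_dir=client.V1EmptyDirVolumeSource()
)
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="warnet-tank-0"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="tank", image="example/tank", volume_mounts=[mount]
            ),
            client.V1Container(
                name="sidecar", image="pinheadmz/sidecar", volume_mounts=[mount]
            ),
        ],
        volumes=[shared],
    ),
)
```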
Interesting... will investigate
pinheadmz left a comment:
I'm not super-duper in love with this approach, especially if using threads in scenarios continues to work out OK. I also wonder: when a user needs something like this, could they just create a one-off image with a new entrypoint file that runs some extra stuff?
> You could also look into using the sidecar image I made for simln: https://hub.docker.com/r/pinheadmz/sidecar. One benefit of this is that it is paired with a shared volume, so if the container restarts, the data is still there.
That seems to me to be pretty much directly equivalent, although more difficult for end-users? With this approach, all they need to do is add a graph property and drop an executable in a dir (see the sketch below). Re-creating an entire image with a new entrypoint.sh feels more difficult than this, to me.

I do agree, however, that if we can essentially count on "unlimited threads", then a threaded, centralised co-ordinator may well be superior for bitcoind/lnd/cln orchestration. That said, this approach would let you insert arbitrary bash scripts, which could include doing systems things to nodes like:

...very easily. None of which are currently possible today.
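A hypothetical sketch of the graph-property side, assuming warnet builds networks from networkx graphml files (the `role` attribute name follows this PR's convention; the rest is illustrative):

```python
# Assign a "role" to tank 0 in the graph; warnet would then copy
# scripts/miner.sh into that tank's /tmp/exe/ at network creation.
import networkx as nx

G = nx.Graph()
G.add_node(0, role="miner")  # tank 0 runs the miner role script
G.add_node(1)                # tank 1 has no role
G.add_edge(0, 1)
nx.write_graphml(G, "warnet.graphml")
```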
Some real commits and some example commits to-be-removed.
Addresses the idea in #272
This installs a "file watcher" inside the tank docker image, which will watch `/tmp/exe/` for executable files and run them as soon as they are added.
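Not the actual implementation, but a minimal sketch of what such a watcher could look like (polling rather than inotify, for brevity):

```python
#!/usr/bin/env python3
# Hypothetical sketch of the /tmp/exe watcher: run each new
# executable file exactly once, as soon as it appears.
import os
import stat
import subprocess
import time

WATCH_DIR = "/tmp/exe"
os.makedirs(WATCH_DIR, exist_ok=True)
seen = set()

while True:
    for name in sorted(os.listdir(WATCH_DIR)):
        path = os.path.join(WATCH_DIR, name)
        if path in seen:
            continue
        mode = os.stat(path).st_mode
        if stat.S_ISREG(mode) and mode & stat.S_IXUSR:
            seen.add(path)
            subprocess.Popen([path])  # fire-and-forget
    time.sleep(1)
```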
Add the ability to assign a "role" to a tank on the graph. This will (stupidly) attempt to copy a script from `scripts/{role}.sh` into the tank's `/tmp/exe/` during warnet creation. This happens after the file watcher is started (and likely bitcoind is initialised), so it will execute immediately.

A simple miner script, which mines blocks every 0-20 seconds, is applied as the role to tank 0.
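As an illustration of what that miner role could look like (written in Python here for consistency; the PR's actual role scripts are `scripts/{role}.sh`, i.e. shell):

```python
#!/usr/bin/env python3
# Hypothetical miner role script: mine one block every 0-20 seconds.
# Assumes bitcoin-cli inside the tank is configured to reach bitcoind.
import random
import subprocess
import time

while True:
    addr = subprocess.check_output(
        ["bitcoin-cli", "getnewaddress"], text=True
    ).strip()
    subprocess.run(
        ["bitcoin-cli", "generatetoaddress", "1", addr], check=True
    )
    time.sleep(random.uniform(0, 20))
```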
Currently this uses a custom image (x86_64 linux only, I think) for the tank, with the new executor program installed, but this could obviously be rebuilt into the main images if we move forward with it.
This should let us (and hackers?) make the nodes do pretty wild things, unconstrained by rpc-0's load/RPC timings.

Could also be used as a start to achieving #390