Feature/ceremony files #730
Conversation
I don't think that coupling the circuit retrieval mechanism and the backend/prover is the right approach. This makes things clunky and unnecessarily complex to handle, especially when external mechanisms can do this much more flexibly.
I don't think I understand. This retrieval mechanism can work for any files that any backend/prover might need in the future. The marketplace contract is in charge of identifying the files, so this way we can guarantee compatibility. If you have a better mechanism in mind, please explain.
Can you please expand on the "external mechanisms"?
Bash script
Yeah, I would not be a big fan of that, because of the UX it would bring. I assume that it would simply download the files into the default location where Codex expects them? Some questions then:
The point is that we're coupling in a particular distribution mechanism, which limits us to using only that mechanism; this is in general quite inflexible. An external retrieval mechanism can use codex, http, ipfs, bittorrent - you name it. This is also the reason why I ultimately decided on using cli arguments instead of a more tightly coupled mechanism.
Once we start providing binaries, there should be a startup/setup wrapper script that is executed, similar to how nimbus does this. Btw, this doesn't even have to be a bash script, could be a nim script itself...
Keep it simple for now: just store them in some well-known http location and use regular curl to retrieve them. We could also use codex itself for that later on, but this, again, adds unnecessary complexity at this moment.
These should be well-known hashes, which themselves could be hardcoded into the application and/or the contracts - this is, IMO, the maximum amount of coupling that should exist between the supporting files and the proving system.
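The proposal above can be sketched as a small external retrieval script: fetch each circuit file from a well-known HTTP location with curl, then check it against a hardcoded SHA-256 digest before accepting it. This is only an illustration of the idea, not code from the PR; the URL, destination path, and file name are placeholders.

```shell
#!/bin/sh
# Sketch of an external circuit-file retrieval script (hypothetical).
# The expected digests would be hardcoded here and/or mirrored in the
# marketplace contract, which is the only coupling that remains.
set -e

verify_sha256() {
    # $1 = file path, $2 = expected hex digest; non-zero exit on mismatch
    actual="$(sha256sum "$1" | awk '{print $1}')"
    [ "$actual" = "$2" ]
}

fetch_circuit() {
    # $1 = url, $2 = expected digest, $3 = destination path
    mkdir -p "$(dirname "$3")"
    curl -fsSL "$1" -o "$3"
    if ! verify_sha256 "$3" "$2"; then
        echo "hash mismatch for $3, removing" >&2
        rm -f "$3"
        return 1
    fi
    echo "verified $3"
}

# Example call with placeholder values:
# fetch_circuit "https://example.com/circuits/proof_main.r1cs" \
#     "<expected-sha256>" "$HOME/.codex/circuits/proof_main.r1cs"
```

Because the script lives outside the core executable, the transport (curl here) can later be swapped for ipfs, bittorrent, or codex itself without touching the node.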
I agree with Adam. Bash scripts and the tools they use tend to behave slightly differently across platforms.
CLI arguments are still there. Nothing stops you from downloading files yourself however you like and passing them to Codex.
If we put this nim script inside Codex, it's exactly what we've got here. It's handy too: No extra files or first-time-use steps to consider.
The solution we already have for this is better: Marketplace knows which URL/hash (future CID) it is compatible with. (It flows nicely from the build pipelines, I think.) We do currently use curl and that's unfortunate. But because the script is inside Codex, it has easy access to the marketplace to get the URL, and in the future, it can probably pretty easily switch to using the networkstore instead of curl.
I believe this is exactly the level of coupling that exists between the files, the proving backend, and Codex.
The script is not inside codex; the download code is inside codex. I insist we move this out of the core executable.
What I mean is that this is purely supporting code that has nothing to do with Codex's main flow; we can take the existing code and put it in a standalone script. This accomplishes two things:
P.S. So far I said bash and nims - what I mean is that I don't really care how and in what language the supporting retrieval script is implemented, and I guess this is yet another benefit: since it's external, we have these choices. But if we make it a nim script, we can literally copy the current code already implemented here (with a few changes) and be done with it.
Just adding my 2 cents: I'm a fan of apps that require minimal configuration (or no configuration at all). I'm fine with trading a little bit of additional nim code for a more streamlined user experience.
The problem I see with moving the download code outside of Codex is this: it depends on some very specific parts of Codex to work, which would then need to be exposed (via CLI or REST), and those would be useful only for this very specific purpose:
Even if we can make it so that we can do the CID/URL lookup and the downloading in the same run, you'd still need to restart in order for the files to be passed as CLI args. And for what? Less coupling? The coupling is already there because of the dependency of Codex (the storage node) on the marketplace and the proving backend. Moving the code out, or keeping these parts in the same or different repositories, doesn't change that. It sounds to me like you are asking for work to be done in order to make the system more difficult to work with, for both users and developers.

As I said in our meeting last night: I don't see the scenario in which the current approach causes a problem. So I ask you to please write out the step-by-step situation (in which something happens or changes) that would make the current flow a problem. It's OK if this scenario is hypothetical and unlikely to actually happen. I really want to understand exactly what you are worried about here.
💯% agree with Ben here |
# Conflicts:
#	vendor/codex-contracts-eth

# Conflicts:
#	codex/contracts/market.nim
#	codex/contracts/marketplace.nim

# Conflicts:
#	.gitmodules
#	docker/codex.Dockerfile
This is no longer needed. Handled as part of #882
No description provided.