
Feature/ceremony files #730

Closed
wants to merge 51 commits into from

Conversation

benbierens
Copy link
Contributor

No description provided.

Contributor

@dryajov dryajov left a comment


I don't think that coupling the circuit retrieval mechanism and the backend/prover is the right approach. This makes things clunky and unnecessarily complex to handle, especially when external mechanisms can do this much more flexibly.

@benbierens
Contributor Author

> I don't think that coupling the circuit retrieval mechanism and the backend/prover is the right approach. This makes things clunky and unnecessarily complex to handle, especially when external mechanisms can do this much more flexibly.

I don't think I understand. This retrieval mechanism can work for any files that any backend/prover might need in the future. The marketplace contract is in charge of identifying the files, so this way we can guarantee compatibility.

If you have a better mechanism in mind, please explain.

@benbierens benbierens linked an issue Mar 12, 2024 that may be closed by this pull request
@AuHau
Member

AuHau commented Mar 12, 2024

> especially when external mechanisms can do this much more flexibly

Can you please expand on the "external mechanisms"?

@dryajov
Contributor

dryajov commented Mar 12, 2024

> > especially when external mechanisms can do this much more flexibly
>
> Can you please expand on the "external mechanisms"?

Bash script

@AuHau
Member

AuHau commented Mar 12, 2024

> Bash script

Yeah, I would not be a big fan of that, because of the UX it would bring. I assume that it would simply download the files into the default location where Codex expects them?

Some questions then:

  • How would the script be distributed?
  • How would the script know what files from where to download? Would that be user input? So users would have to search in docs or somewhere else which file to download for which environment?
  • When talking about "supporting different environments", how would the user know which files they currently have (e.g. for which environment)?

@benbierens benbierens force-pushed the feature/ceremony-files branch from b8c88a5 to 50ab23d Compare March 12, 2024 18:16
@dryajov
Contributor

dryajov commented Mar 12, 2024

> > Bash script
>
> Yeah, I would not be a big fan of that, because of the UX it would bring. I assume that it would simply download the files into the default location where Codex expects them?

The point is that we're coupling in a particular distribution mechanism, which limits us to using only that mechanism; this is in general quite inflexible.

An external retrieval mechanism can use codex, http, ipfs, bittorrent - you name it. This is also the reason why I ultimately decided on using cli arguments instead of a more tightly coupled mechanism.

> Some questions then:
>
>   • How would the script be distributed?

Once we start providing binaries, there should be a startup/setup wrapper script that is executed, similar to how nimbus does this. Btw, this doesn't even have to be a bash script, it could be a nim script itself...

>   • How would the script know what files from where to download? Would that be user input? So users would have to search in docs or somewhere else which file to download for which environment?

Keep it simple for now: just store them in some well-known http location and use regular curl to retrieve them. We could also use codex itself for that later on, but this, again, adds unnecessary complexity at this moment.

>   • When talking about "supporting different environments", how would the user know which files they currently have (e.g. for which environment)?

These should be well-known hashes, which themselves could be hardcoded into the application and/or the contracts - this is, IMO, the maximum amount of coupling that should exist between the supporting files and the proving system.
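The external flow proposed here (well-known location, plain curl, a hash hardcoded in the application or contracts) could be sketched roughly as below. Every URL, path, and file name is a placeholder assumption, not a real project endpoint, and the download step is simulated with a local copy so the sketch can run offline:

```shell
#!/usr/bin/env bash
# Sketch only: fetch a ceremony file and verify it against a hardcoded hash.
# All names and locations here are illustrative assumptions, not real endpoints.
set -euo pipefail

fetch_and_verify() {
  local src="$1" expected="$2" dest="$3"
  mkdir -p "$(dirname "$dest")"
  # A real script would do: curl -fsSL "$src" -o "$dest"
  # For this offline sketch we copy a local fixture instead.
  cp "$src" "$dest"
  local actual
  actual="$(sha256sum "$dest" | cut -d' ' -f1)"
  if [ "$actual" != "$expected" ]; then
    echo "hash mismatch for $dest" >&2
    return 1
  fi
  echo "verified $dest"
}

# Demo: create a fixture standing in for the hosted file, then run the flow.
tmp="$(mktemp -d)"
printf 'ceremony-bytes' > "$tmp/hosted.zkey"
known_hash="$(sha256sum "$tmp/hosted.zkey" | cut -d' ' -f1)"
fetch_and_verify "$tmp/hosted.zkey" "$known_hash" "$tmp/circuits/proof_main.zkey"
```

The hash check is the only coupling this variant needs: whichever transport is used (curl, codex, bittorrent), the result is accepted only if it matches the hardcoded digest.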

@benbierens
Contributor Author

> Bash script

I agree with Adam. Bash scripts and the tools they use tend to behave slightly differently across platforms.

> > Yeah, I would not be a big fan of that, because of the UX it would bring. I assume that it would simply download the files into the default location where Codex expects them?
>
> The point is that we're coupling in a particular distribution mechanism, which limits us to using only that mechanism; this is in general quite inflexible.
>
> An external retrieval mechanism can use codex, http, ipfs, bittorrent - you name it. This is also the reason why I ultimately decided on using cli arguments instead of a more tightly coupled mechanism.

CLI arguments are still there. Nothing stops you from downloading files yourself however you like and passing them to Codex.

> > Some questions then:
> >
> >   • How would the script be distributed?
>
> Once we start providing binaries, there should be a startup/setup wrapper script that is executed, similar to how nimbus does this. Btw, this doesn't even have to be a bash script, it could be a nim script itself...

If we put this nim script inside Codex, it's exactly what we've got here. It's handy too: no extra files or first-time-use steps to consider.

> >   • How would the script know what files from where to download? Would that be user input? So users would have to search in docs or somewhere else which file to download for which environment?
>
> Keep it simple for now: just store them in some well-known http location and use regular curl to retrieve them. We could also use codex itself for that later on, but this, again, adds unnecessary complexity at this moment.

The solution we already have for this is better: the marketplace knows which URL/hash (future CID) it is compatible with. (It flows nicely from the build pipelines, I think.) We do currently use curl, and that's unfortunate. But because the script is inside Codex, it has easy access to the marketplace to get the URL, and in the future it can probably pretty easily switch to using the networkstore instead of curl.

> >   • When talking about "supporting different environments", how would the user know which files they currently have (e.g. for which environment)?
>
> These should be well-known hashes, which themselves could be hardcoded into the application and/or the contracts - this is, IMO, the maximum amount of coupling that should exist between the supporting files and the proving system.

I believe this is exactly the level of coupling that exists between the files, the proving backend, and Codex.

@dryajov
Contributor

dryajov commented Mar 13, 2024

> > Bash script
>
> I agree with Adam. Bash scripts and the tools they use tend to behave slightly differently across platforms.
>
> > > Yeah, I would not be a big fan of that, because of the UX it would bring. I assume that it would simply download the files into the default location where Codex expects them?
> >
> > The point is that we're coupling in a particular distribution mechanism, which limits us to using only that mechanism; this is in general quite inflexible.
> > An external retrieval mechanism can use codex, http, ipfs, bittorrent - you name it. This is also the reason why I ultimately decided on using cli arguments instead of a more tightly coupled mechanism.
>
> CLI arguments are still there. Nothing stops you from downloading files yourself however you like and passing them to Codex.
>
> > > Some questions then:
> > >
> > >   • How would the script be distributed?
> >
> > Once we start providing binaries, there should be a startup/setup wrapper script that is executed, similar to how nimbus does this. Btw, this doesn't even have to be a bash script, it could be a nim script itself...
>
> If we put this nim script inside Codex, it's exactly what we've got here. It's handy too: no extra files or first-time-use steps to consider.
>
> > >   • How would the script know what files from where to download? Would that be user input? So users would have to search in docs or somewhere else which file to download for which environment?
> >
> > Keep it simple for now: just store them in some well-known http location and use regular curl to retrieve them. We could also use codex itself for that later on, but this, again, adds unnecessary complexity at this moment.
>
> The solution we already have for this is better: the marketplace knows which URL/hash (future CID) it is compatible with. (It flows nicely from the build pipelines, I think.) We do currently use curl, and that's unfortunate. But because the script is inside Codex, it has easy access to the marketplace to get the URL, and in the future it can probably pretty easily switch to using the networkstore instead of curl.
>
> > >   • When talking about "supporting different environments", how would the user know which files they currently have (e.g. for which environment)?
> >
> > These should be well-known hashes, which themselves could be hardcoded into the application and/or the contracts - this is, IMO, the maximum amount of coupling that should exist between the supporting files and the proving system.
>
> I believe this is exactly the level of coupling that exists between the files, the proving backend, and Codex.

The script is not inside Codex; the download code is inside Codex. I insist we move this out of the core executable.

@dryajov
Contributor

dryajov commented Mar 13, 2024

> The script is not inside Codex; the download code is inside Codex. I insist we move this out of the core executable.

What I mean is that this is purely supporting code that has nothing to do with Codex's main flow. We can take the existing code, put it in a nims file, and run it externally as a setup script. It can invoke curl as it does now, or even Codex itself.

This accomplishes two things:

  1. This logic doesn't have to be supported as part of the main Codex application, which reduces complexity and churn in the main Codex app.
  2. We can fine-tune or completely replace this logic/flow, again without making any changes to the main application.

P.S.

So far I've said bash and nims - what I mean is that I don't really care how and in what language the supporting retrieval script is implemented, and I guess that's yet another benefit: since it's external, we have these choices. But if we make it a nim script, we can literally copy the current code already implemented here (with a few changes) and be done with it.

@tbekas
Contributor

tbekas commented Mar 13, 2024

Just adding my 2 cents: I'm a fan of apps that require minimal configuration (or no configuration at all). I'm fine with trading a little bit of additional nim code for a more streamlined user experience.

@benbierens
Contributor Author

benbierens commented Mar 14, 2024

The problem I see with moving the download code outside of Codex is that it depends on some very specific parts of Codex to work, which would then need to be exposed (via CLI or REST), and those would be useful only for this very specific purpose:

  • The script needs to run Codex to get the marketplace-compatible CID or hash or URL out.
  • Then run Codex again to download the CID (which right now would just be curl).
  • Then we unpack and put the files in the right places.
  • And then we can run Codex a third time; only then is it actually ready to be used for proving.

Even if we can make it so that the CID/URL lookup and the download happen in the same run, you'd still need a restart in order for the files to be passed as CLI args. And for what? Less coupling? The coupling is already there because of the dependency of Codex (the storage node) on the marketplace and the proving backend. Moving the code out, or keeping these parts in the same or different repositories, doesn't change that.

It sounds to me like you are asking for work to be done in order to make the system more difficult to work with, for both users and developers.

As I said in our meeting last night: I don't see a scenario in which the current approach causes a problem. So I ask you to please write out the step-by-step situation (in which something happens or changes) that would make the current flow a problem. It's OK if this scenario is hypothetical and unlikely to actually happen. I really want to understand exactly what you are worried about here.
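The three-run sequence described above can be sketched as follows. A stub function stands in for the Codex binary so the sequence can run offline; the subcommands and arguments are invented for illustration and are not the real Codex CLI, and the "download" is a local copy in place of curl:

```shell
#!/usr/bin/env bash
# Sketch of the external three-run flow. codex_stub and its subcommands are
# invented stand-ins for the real binary, used only to make the sequence runnable.
set -euo pipefail

workdir="$(mktemp -d)"
printf 'fake-circuit' > "$workdir/hosted.zkey"  # stand-in for the hosted ceremony file

codex_stub() {
  case "$1" in
    circuit-info) sha256sum "$workdir/hosted.zkey" | cut -d' ' -f1 ;;  # run 1: report expected hash
    start)        echo "started with $2" ;;                            # run 3: start with files as args
  esac
}

# Run 1: ask the node for the marketplace-compatible hash (and, in reality, the URL).
expected="$(codex_stub circuit-info)"

# Run 2: "download" (cp instead of curl here) and verify against that hash.
cp "$workdir/hosted.zkey" "$workdir/proof_main.zkey"
echo "$expected  $workdir/proof_main.zkey" | sha256sum -c - >/dev/null

# Run 3: restart with the verified file passed as a CLI argument.
codex_stub start "$workdir/proof_main.zkey"
```

Even in this toy form, three separate process invocations are needed before the node is ready to prove, which is the friction the comment above is pointing at.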

@benbierens benbierens changed the base branch from node-wire-prover to master March 14, 2024 08:29
@benbierens benbierens marked this pull request as ready for review March 14, 2024 08:30
@AuHau
Member

AuHau commented Mar 14, 2024

💯% agree with Ben here

@veaceslavdoina veaceslavdoina force-pushed the feature/ceremony-files branch from 2a11d41 to 3c17e13 Compare June 26, 2024 02:42
@veaceslavdoina veaceslavdoina force-pushed the feature/ceremony-files branch from b5b5f34 to 471ebb2 Compare June 27, 2024 05:56
@benbierens
Contributor Author

This is no longer needed. Handled as part of #882
Finally I can run the marketplace dist-tests without the need for a special branch and image! Woo!!

@benbierens benbierens closed this Sep 24, 2024

Successfully merging this pull request may close these issues.

Downloading Ceremony File
5 participants