Notes from discussion with Jeromy - path towards the IPFS npm's companion #65
@whyrusleeping what would be necessary to get 0.4.0 to be able to read another node's MFS through a custom pubkey?
@diasdavid could you write out your use case for me?
What I want is to be able to do:

It would require us to:
cc @mappum
@diasdavid so the fat node could just do an IPNS publish of its MFS directory representing the npm cache. Then the node that is running the npm install could do the copy.
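A rough sketch of that flow with the ipfs CLI (it needs a running daemon; the `/npm-registry` path and `QmFatNodePeerID` are placeholders, not names from the discussion):

```shell
# On the fat node: get the current hash of the MFS directory holding
# the npm cache, then publish it under this node's peer ID via IPNS.
HASH=$(ipfs files stat --hash /npm-registry)
ipfs name publish /ipfs/$HASH

# On a client node: resolve the fat node's IPNS name and copy the
# directory into the local MFS.
RESOLVED=$(ipfs name resolve QmFatNodePeerID)
ipfs files cp "$RESOLVED" /npm-registry
```

The copy is cheap because `ipfs files cp` only links the DAG into MFS; actual blocks are fetched on demand.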
But every time the fat node updates, the client nodes would have to re-run the copy command if they want to stay up to date. Some package managers leave that step up to the user, e.g. cache the repo state for up to a day, or until the user explicitly refreshes it.
It's more in line with how npm works to only update when the user explicitly wants to; npm versions are already immutable (npm won't let you publish changes to a version that is already published). BTW, the fat node is good as a fallback to ensure everything is always available on the IPFS network, but it makes a lot of sense to also have the nodes that install packages provide them (then you can fetch packages over the LAN, get high availability, etc.). Also, I think it will be important for package authors to add the IPFS hashes of their dependencies in their package.json.
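One way this could look: an extra field in package.json pinning each dependency version to a content hash. The field name `ipfsDependencies` and the hash below are made up for illustration; nothing like this was specced in the thread.

```json
{
  "name": "my-app",
  "dependencies": {
    "left-pad": "1.0.0"
  },
  "ipfsDependencies": {
    "left-pad@1.0.0": "QmYwAPJzv5CZsnAzt8auVZRn2E8z3dDPRaHqGHZqnFdzZt"
  }
}
```

An IPFS-aware installer could then fetch the exact hash instead of trusting whatever the registry currently serves for that version.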
Also, it would be really cool to make a decentralized registry, where many people run that fat node (it's feasible, since the data probably dedupes to a lot less than the total 200+ GB, and not everyone will need to have all the packages). Then users installing a package can check the package hash against each (or many) of those nodes' registries and ensure they all match. This prevents attacks where registry operators maliciously modify the code.
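A minimal sketch of that check, assuming each mirror reports the hash it holds for a given package; the function name and the all-must-agree rule are illustrative, not anything agreed in the thread:

```shell
#!/bin/sh
# Accept a package only if every mirror reports the same hash.
# Usage: verify_package_hash HASH1 HASH2 [HASH3 ...]
# Returns 0 if all hashes agree, 1 on any mismatch.
verify_package_hash() {
  first="$1"
  shift
  for h in "$@"; do
    if [ "$h" != "$first" ]; then
      return 1  # mismatch: a mirror may be stale or compromised
    fi
  done
  return 0
}
```

A stricter variant could require only a quorum rather than unanimity, to tolerate mirrors that lag behind.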
But wouldn't that mean each user downloads the entire npm registry through IPFS?
Right now it's half a TB (the README on registry-static is not up to date), and I'm not sure it would dedup that much, since our chunking algorithm doesn't take into account how the code is divided.
This would be awesome! But it would also break the flexibility of semver; the hash of each module could instead be an IPNS hash that points to the latest version and all the versions before it. Nevertheless, let's get the use case of installing from IPFS done without having to change how the ecosystem works today. So, back to the question: if yes, we can make the baseDir a copy, and this works. If not, we have to have a way to 'mount' /ipns/QmHASH-of-fat-node/npm-registry/all-the-modules* locally (with lazy loading) so that registry-static can use it and ask IPFS to download a module when it needs it.
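The second option roughly corresponds to what `ipfs mount` already provides via FUSE: paths under /ipns resolve on access and blocks are fetched lazily. A sketch (requires FUSE and a running daemon; the hash is the placeholder from above):

```shell
# Mount /ipfs and /ipns as FUSE filesystems (mountpoints must exist).
ipfs mount
# registry-static could then be pointed at the mounted path;
# reads trigger lazy fetches over IPFS.
ls /ipns/QmHASH-of-fat-node/npm-registry/
```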
Nope.
Awesome, let's try it then :) We need:
@diasdavid hmm... I have a machine with a lot of disk space, but I've been holding off on using it for IPFS stuff... I suppose I could set you up an account on it. It only has a 60 Mbit symmetric link though, not gigabit.
It will dedup a ton when you use

I have fiber in NYC, but won't be there until 10/24ish.
It's aliveeee: https://github.com/diasdavid/registry-mirror :)
Awesome!!! :)
👏 👏
@whyrusleeping ipfs-blob-store ipfs-shipyard/ipfs-blob-store#5 ;)