Support for IPFS #203
Comments
Looks interesting! I'd accept a PR with an IPFS backend, although it looks like this request is somewhat premature as the IPFS JS library is still in "alpha" state.
Also I have no idea how unit tests would work; is it possible to run an isolated IPFS instance for that purpose?
@jvilk Now that IPFS has their circuit-relay stuff figured out, the next release should be a lot more stable and compatible across more transports. With regard to testing, you can set up an IPFS node that doesn't have access to any other peers and has mDNS disabled, so it won't do peer discovery. I'd look at how they run their own unit tests to get a better picture of the setup.
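For concreteness, here is a minimal sketch of such an isolated test node. The `create()` entry point from `ipfs-core` and the config keys used below (`Bootstrap`, `Addresses.Swarm`, `Discovery.MDNS.Enabled`, `preload`) are assumptions about the js-ipfs version in use and should be checked against its documentation:

```ts
// Minimal sketch: a js-ipfs node with no peers, no swarm listeners, and no
// mDNS discovery, suitable for running unit tests in isolation.
import { create } from 'ipfs-core';

export async function createTestNode() {
  const node = await create({
    repo: `./.ipfs-test-${Date.now()}`,       // throwaway repo per test run
    config: {
      Bootstrap: [],                          // no default bootstrap peers
      Addresses: { Swarm: [] },               // no swarm listeners
      Discovery: { MDNS: { Enabled: false } } // no local peer discovery
    },
    preload: { enabled: false }               // don't contact public preload nodes
  });
  return node;
}
```

A node created this way can be started before the test suite and stopped with `node.stop()` afterwards, which fits the CI setup discussed in the next comment.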
Something that could be started up and stopped alongside the tests running in Travis CI and AppVeyor would be ideal, as I have no resources to run a dedicated testing server.
You can start an IPFS node in either a Node.js or browser context.
I believe it is possible to run a js-ipfs instance from Node.js.
I'm interested in working on this. I've been working with Orbit-db, which is built on top of IPFS's pubsub to provide CRDT append-only logs. Since each file/directory in IPFS is referenced by its hash, any change to a file produces a new reference. Essentially, Orbit-db creates a file that describes the log: its name, type, and who can write to it, which is controlled by providing a set of public keys that must correspond to a signature supplied with each new entry in the log. The pubsub feature that Orbit-db uses provides P2P WebSockets to connect everyone interested in updates about a "topic," so when a new user arrives and is interested in an Orbit-db log, any connected user who already has the log can catch them up.

So what I propose is the following: A filesystem (top-level directory) is added to IPFS (which uses IndexedDB to store the files, sharded), producing a hash reference. A corresponding Orbit-db log is created, and this becomes the permanent "reference" to the file system. The IPFS hash is then appended to the log. Any subsequent change to the underlying file system results in it being added to IPFS, producing a new hash, and the new reference is published to the log.

Since IPFS uses a Merkle tree for its directory structure, each file within the filesystem is referenced by its hash. So when a subscriber gets a new reference to the filesystem, it's a reference to a Merkle tree; since they still have all the unchanged files, they only have to request the changed files from IPFS.

Being new to BrowserFS, I'm unsure which abstraction to use to implement this. Also, since BrowserFS works with Emscripten, you could use a file as a socket in wasm by subscribing to changes to that file.
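A rough sketch of that publish step, purely for illustration and not BrowserFS code. It assumes the `orbit-db` package and an already-started js-ipfs node; the `eventlog`/`add`/`iterator` calls are standard OrbitDB API, but option names such as `accessController` and the log name `browserfs-root` vary by release or are made up here:

```ts
import OrbitDB from 'orbit-db';

// `ipfs` is an already-started js-ipfs node; `newRootCid` is the hash IPFS
// returned after adding the updated top-level directory.
export async function trackFileSystem(ipfs: any, newRootCid: string): Promise<string> {
  const orbitdb = await OrbitDB.createInstance(ipfs);

  // The log's address becomes the permanent "reference" to the file system;
  // only holders of the listed keys may append to it.
  const log = await orbitdb.eventlog('browserfs-root', {
    accessController: { write: [orbitdb.identity.id] }
  });
  await log.load();

  // Publish the new root hash whenever the underlying file system changes.
  await log.add({ root: newRootCid, updatedAt: Date.now() });

  // Peers replicate the log over pubsub and only fetch the changed blocks,
  // since unchanged files keep the same hashes in the Merkle DAG.
  log.events.on('replicated', () => {
    const latest = log.iterator({ limit: 1 }).collect().pop();
    console.log('newest root:', latest && latest.payload.value.root);
  });

  return log.address.toString(); // share this address with other peers
}
```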
The most direct way to implement a file system is to extend BrowserFS's base file system class and implement its file system interface on top of IPFS. Alternatively, you can implement the simpler key-value store abstraction that backends like IndexedDB and localStorage build on, and let BrowserFS construct the file system on top of it.

It seems like it might be easier to directly implement the file system interface (the former option) if IPFS already has a concept of a directory structure.

Hope that helps! Also, please forgive my sluggish responses; I'm much less responsive than usual as I defend my PhD and move across the country in a month.
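For orientation, a rough skeleton of what a direct backend could look like, assuming it lives under `src/backend/` in the BrowserFS tree and follows the pattern of existing backends (static `Name`/`Options`/`Create` plus asynchronous method overrides). Method signatures here are approximations and should be copied from an existing backend; the `ipfs` option is assumed to be an already-started js-ipfs node:

```ts
import { BaseFileSystem, FileSystem, BFSCallback, FileSystemOptions } from '../core/file_system';
import { default as Stats, FileType } from '../core/node_fs_stats';
import { ApiError, ErrorCode } from '../core/api_error';

export interface IPFSFileSystemOptions {
  ipfs: any; // js-ipfs node, typed loosely to avoid a hard dependency here
}

export default class IPFSFileSystem extends BaseFileSystem implements FileSystem {
  public static readonly Name = 'IPFS';
  public static readonly Options: FileSystemOptions = {
    ipfs: { type: 'object', description: 'An initialized js-ipfs instance' }
  };

  // Factory used by BrowserFS configuration and by the test harness.
  public static Create(opts: IPFSFileSystemOptions, cb: BFSCallback<IPFSFileSystem>): void {
    cb(null, new IPFSFileSystem(opts.ipfs));
  }

  public static isAvailable(): boolean {
    return true;
  }

  private constructor(private _ipfs: any) {
    super();
  }

  public getName(): string { return IPFSFileSystem.Name; }
  public isReadOnly(): boolean { return false; }
  public supportsLinks(): boolean { return false; }
  public supportsProps(): boolean { return false; }
  public supportsSynch(): boolean { return false; }

  // Example override: map a path lookup onto an MFS stat call. Symlinks are
  // not handled, so `isLstat` is ignored in this sketch.
  public stat(p: string, isLstat: boolean, cb: BFSCallback<Stats>): void {
    this._ipfs.files.stat(p).then(
      (s: any) => cb(null, new Stats(s.type === 'directory' ? FileType.DIRECTORY : FileType.FILE, s.size)),
      () => cb(new ApiError(ErrorCode.ENOENT, `No such file or directory: ${p}`))
    );
  }

  // open, readdir, readFile, writeFile, etc. would follow the same pattern.
}
```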
Also, once you add a factory class that can create instances of your IPFS backend to the testing code, you can run BrowserFS's full suite of unit tests against your backend.
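A hypothetical factory along those lines, modelled loosely on the factories the test harness keeps under test/harness/factories/. The callback shape, the import paths, and the `__testIpfsNode` global (populated by test setup with the isolated node sketched earlier) are all assumptions for illustration:

```ts
import IPFSFileSystem from '../../../src/backend/IPFS';
import { FileSystem } from '../../../src/core/file_system';

export default function IPFSFactory(cb: (name: string, objs: FileSystem[]) => void): void {
  IPFSFileSystem.Create({ ipfs: (globalThis as any).__testIpfsNode }, (err, fs) => {
    // Report an empty list if the backend cannot be constructed, so the
    // suite skips it instead of failing outright.
    cb('IPFS', err || !fs ? [] : [fs]);
  });
}
```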
I think the new version of js-ipfs (0.30) now supports the Unix-like Mutable File System (MFS), which would be helpful.
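A short sketch of the MFS calls (`ipfs.files.*`) that a POSIX-style backend would map onto, assuming a node created as in the earlier sketch. Return shapes (`hash` vs. `cid`, buffer vs. async iterable) differ between js-ipfs releases, so check the version in use:

```ts
async function mfsDemo(node: any): Promise<void> {
  // Directory and file creation map naturally onto mkdir/write.
  await node.files.mkdir('/docs', { parents: true });
  await node.files.write(
    '/docs/hello.txt',
    new TextEncoder().encode('hello from BrowserFS'),
    { create: true, parents: true }
  );

  // Reading back: newer js-ipfs returns an async iterable of chunks.
  const chunks: Uint8Array[] = [];
  for await (const chunk of node.files.read('/docs/hello.txt')) {
    chunks.push(chunk);
  }

  // The root's hash changes whenever anything under it changes; this is the
  // reference that would be published to peers.
  const rootStat = await node.files.stat('/');
  console.log('new root:', rootStat.cid ? rootStat.cid.toString() : rootStat.hash);
}
```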
Please use zen-fs/core#11 |
IPFS is a protocol that uses content-addressable file storage in a peer-to-peer network.
I think it would be useful to support it as a storage mechanism to reduce the amount of data that needs to actually be stored locally.
Instead of storing files on the local filesystem, one could store file metadata (names / created at) that points to an IPFS hash of the data, and when updating a file, upload the new version to IPFS instead. This way, file data would be loaded only when it's needed, freeing up user storage and improving the IPFS network by having more data shared between peers.
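A sketch of that metadata-pointer idea, purely for illustration: small local records hold the name, timestamp, and IPFS hash, and the file body is fetched on demand. `FileRecord` and the `Map`-based store are hypothetical names; `ipfs.add` and `ipfs.cat` are the standard js-ipfs calls, though their return shapes vary by version:

```ts
interface FileRecord {
  name: string;
  createdAt: number;
  cid: string; // IPFS content hash of the file body
}

async function writeFile(ipfs: any, store: Map<string, FileRecord>, name: string, data: Uint8Array): Promise<void> {
  // Upload the new version to IPFS and keep only the small pointer locally.
  const { cid } = await ipfs.add(data);
  store.set(name, { name, createdAt: Date.now(), cid: cid.toString() });
}

async function readFile(ipfs: any, store: Map<string, FileRecord>, name: string): Promise<Uint8Array> {
  const record = store.get(name);
  if (!record) throw new Error(`ENOENT: ${name}`);

  // Fetch the file body from the network only when it is actually needed.
  const chunks: Uint8Array[] = [];
  for await (const chunk of ipfs.cat(record.cid)) {
    chunks.push(chunk);
  }

  // Concatenate the chunks into a single buffer.
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) { out.set(c, offset); offset += c.length; }
  return out;
}
```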