Implement datastore
#13
@diasdavid blocks is just the name we give to a serialized IPFS object: raw data. You don't necessarily need it in your code. Maybe this helps:
There's a diagram somewhere, I'll look for it.
(IPLD and multicodec are newer than blocks, so there's some more overlap than we'd have if these abstractions had been created from scratch together.)
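To make the idea concrete: a block is nothing more than a chunk of raw bytes addressed by a hash of those bytes. A minimal sketch of that, using Node's built-in crypto for brevity (real IPFS uses multihashes rather than a bare SHA-256 digest, and the names here are illustrative, not the js-ipfs API):

```js
const crypto = require('crypto')

// A "block" is just raw bytes plus the key (hash) they are addressed by.
function createBlock (data) {
  const key = crypto.createHash('sha256').update(data).digest('hex')
  return { key: key, data: data }
}

const block = createBlock(Buffer.from('hello ipfs'))
console.log(block.key)         // content address of the raw bytes
console.log(block.data.length) // the serialized object itself, untouched
```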
@diasdavid does that make sense? Let me know if not.
Trying to wrap my mind around what the lifecycle of a file would be when it is added to IPFS. What data does the datastore folder hold then (currently and in the near future)? Blocks is making sense, but having both a blocks folder and a datastore folder is not.
conceptually,
As discussed at the Sprint Meeting on Dec 14, blocks will become 'datastore' and the levelDB-backed datastore will become 'datastore-legacy'.
- blocks
- datastore
(Probably going to ask the obvious.) The objects stored in this datastore are serialised versions of the protobufs, right? Currently all the blocks have the .data extension; what is the migration plan for when we use IPLD + CBOR? Migrate and have .json objects, so we can read from both old repos and new repos at the same time? Are we dropping protobufs soon enough that we can make the js impl only understand IPLD + CBOR?
Yes
New objects are encoded with CBOR; old objects are still readable. See https://github.com/ipfs/go-ipld/tree/master/coding/pb
Turns out we will still need the protobuf implementation to read old objects. Yeah, annoying, but we'll get them on the wire from other people, and even if we get them in JSON, we'll need to be able to serialize with protobuf to hash and verify them. So we support protobuf in a "backwards compatible" kind of way, which is unfortunate, but not so bad with
No, we'll have to make it understand old objects, sorry. On the bright side, mafintosh wrote a very nice protobuf parser: https://github.com/mafintosh/protocol-buffers. I think we should use this whenever "breaking links" or "breaking old data" is a possibility.
That said, we could migrate the protobufs to
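For illustration, keeping legacy objects readable with the parser mentioned above could look roughly like this. The schema is a simplified sketch of the merkledag .proto; treat the exact field names and numbers as an assumption, not the canonical definition:

```js
const protobuf = require('protocol-buffers')

// Simplified merkledag-style schema (assumed layout, for illustration only).
const messages = protobuf(`
  message PBLink {
    optional bytes Hash = 1;
    optional string Name = 2;
    optional uint64 Tsize = 3;
  }
  message PBNode {
    optional bytes Data = 1;
    repeated PBLink Links = 2;
  }
`)

// Round-trip: old-style protobuf objects stay readable (and re-encodable,
// so they can still be hashed and verified) while new objects move to CBOR.
const encoded = messages.PBNode.encode({ Data: Buffer.from('legacy node'), Links: [] })
const node = messages.PBNode.decode(encoded)
console.log(node.Data.toString()) // 'legacy node'
```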
With #20 we have an implementation capable of reading/writing blocks into the datastore, following the same folder structure go-ipfs does.
So I started making an implementation of the go-ipfs merkledag for use in the browser as a data structure, and I am trying to get a sense of how the block store was working here. Is it implemented in this project? I couldn't get a sense of where this left off. I can implement it if it's not, because I need it to finish off the dagservice, but I need to know the assumptions. I have implemented the block, node, and link data structures and they are tested (knock on wood). I'll separate the block-related stuff into a separate project. If I am following this right, it has to implement both the datastore API and the abstract-blob-store?
@vijayee 'block store' is now known as 'datastore', as it was in the beginning, but with the change from levelDB to flatFS we got both datastore and blocks, which was confusing. To make it clear that our intention is to move everything out of levelDB we have: datastore -> current blocks in go-ipfs. As for implementing the MerkleDAG, we have currently started working on it here: https://github.com/ipfs/js-ipfs-data-importing, following the ongoing Data Importing spec here: ipfs/specs#57
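On the abstract-blob-store question above: the surface the storage backends are swapped through is small. A sketch of what a store needs to support, here shown with fs-blob-store (the directory path and key are placeholders, and this is not the exact js-ipfs-repo wiring):

```js
const blobs = require('fs-blob-store')('/tmp/ipfs-example-blocks')

// abstract-blob-store surface: createWriteStream / createReadStream /
// exists / remove, all keyed by a string.
const ws = blobs.createWriteStream({ key: 'hello.data' }, (err, metadata) => {
  if (err) throw err
  blobs.exists({ key: metadata.key }, (err, found) => {
    if (err) throw err
    console.log('stored?', found)
    blobs.createReadStream({ key: metadata.key }).pipe(process.stdout)
  })
})
ws.end(Buffer.from('raw block bytes'))
```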
It seems that
@diasdavid let's get a proper document up for all the features, names and storage locations we need in js-ipfs-repo, please.
@dignifiedquire can it be part of the readme -- https://github.com/ipfs/js-ipfs-repo#background -- ? It really is following what go-ipfs does. //cc @whyrusleeping
Sure, I just want to know what the end goal is before the work starts, rather than having to discover along the way which parts are still missing.
datastore is the thing touching the fs: get/put raw byte buffers
Correction:
- datastore was the thing touching the fs through levelDB, where blocks and DHT records were stored. Today, only DHT records and the root hash of the pinset live there.
- blockstore is the thing where blocks get stored into the fs, through flatfs, which offers get/put of raw byte buffers.
- block service is the thing on top of blockstore that offers get/put Block semantics.
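A rough sketch of that layering, with an in-memory Map standing in for flatfs on disk (names and shapes are illustrative, not the actual module APIs):

```js
// blockstore: stores raw byte buffers keyed by hash.
function createBlockstore () {
  const files = new Map()
  return {
    putRaw (key, buf, cb) { files.set(key, buf); cb(null) },
    getRaw (key, cb) { cb(null, files.get(key)) }
  }
}

// block service: sits on top of the blockstore and speaks in Blocks
// ({ key, data }) instead of bare buffers.
function createBlockService (blockstore) {
  return {
    put (block, cb) { blockstore.putRaw(block.key, block.data, cb) },
    get (key, cb) { blockstore.getRaw(key, (err, data) => cb(err, { key: key, data: data })) }
  }
}

// Usage: the service hands back Block objects, the store only sees buffers.
const service = createBlockService(createBlockstore())
service.put({ key: 'Qm...', data: Buffer.from('hello') }, () => {
  service.get('Qm...', (err, block) => console.log(block.data.toString()))
})
```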
@diasdavid not in Go. In Go (which this issue claims to want to match), it's as I described: "Datastore" is a library touching both. That "$repo/datastore" was also used as the levelDB dir and not the flatfs one is historical.
Got it, it is just a matter of having too many things with the same name :) go-datastore is the equivalent of abstract-blob-store in JS (what @jbenet mentions); these are the modules (adapters) that we use to swap the storage backends.

IPFS Repo divides itself into several 'stores', namely: keys (not used yet), config, blocks, datastore, logs (not used anymore) and locks; see more info here: https://github.com/ipfs/specs/tree/master/repo#repo-contents

Initially go-ipfs stored its blocks inside datastore, which used a levelDB adapter; then, when we stopped doing that, the migration was performed by creating a new folder called 'blocks'. So now we have:
What we're missing in JS is the interface that enables access to the datastore folder, that is, the levelDB that contains the DHT records and the pinset root hash. Currently, that 'datastore' folder is referenced in JS code as 'datastore-legacy' (see https://github.com/ipfs/js-ipfs-repo/tree/master/src/stores), and the blocks folder is referenced as 'datastore'. However, since we never phased out the 'datastore folder / levelDB', it now makes sense to match the names, that is, blockstore or blocks for the blocks folder and datastore for the levelDB one :)
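To summarize the folder-to-backend mapping being proposed here (a sketch; the descriptions are my reading of the thread, not a spec):

```js
// Assumed mapping of repo folders to their backends after the rename
// (illustrative, not the final js-ipfs-repo layout).
const repoLayout = {
  config: 'plain JSON file with the node configuration',
  blocks: 'flatfs-style blob store holding raw blocks (a.k.a. blockstore)',
  datastore: 'levelDB holding DHT records and the pinset root hash',
  keys: 'key material (not used yet)',
  locks: 'repo lock to prevent concurrent access'
}
console.log(Object.keys(repoLayout))
```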
In go-ipfs we use a namespaced datastore for blocks. This passes Gets and Sets in with keys starting with a blocks prefix; Blockstore then wraps the datastore. I don't know if you want to go with this path in JS, but this is how it is done in Go.
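Translated into a JS sketch, the Go pattern described above is a thin key-prefix wrapper around a single shared datastore (the '/blocks/' prefix and the function names here are illustrative):

```js
// Prefix every key before it reaches the shared datastore, mirroring the
// go-ipfs pattern of a blockstore wrapping a namespaced datastore.
function namespaced (datastore, prefix) {
  return {
    put (key, value, cb) { datastore.put(prefix + key, value, cb) },
    get (key, cb) { datastore.get(prefix + key, cb) }
  }
}

function createBlockstore (datastore) {
  return namespaced(datastore, '/blocks/')
}

// Trivial in-memory datastore to show the wrapping in action.
const mem = new Map()
const datastore = {
  put (k, v, cb) { mem.set(k, v); cb(null) },
  get (k, cb) { cb(null, mem.get(k)) }
}
const blockstore = createBlockstore(datastore)
blockstore.put('Qmfoo', Buffer.from('block bytes'), () => {
  datastore.get('/blocks/Qmfoo', (err, v) => console.log(v.toString()))
})
```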
One thing to note is that we ARE turning the entire state of an IPFS node
Almost missed blocks, as there is no reference to it in the Spec (https://github.com/ipfs/specs/blob/fix-repo/repo/README.md). My guess is that datastore is becoming blocks as the transition from levelDB to flatfs happens. @whyrusleeping @jbenet can I get some details on this?
To implement the fanout factor I'm going to use a blob-store that builds on top of fs-blob-store and knows how to do fanout (so that in the browser we don't have to).
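A sketch of what such a fanout wrapper might look like: shard each key into a short prefix directory before handing it to fs-blob-store. The two-character prefix, the `{ key }`-object call style, and relying on fs-blob-store to create nested directories are all assumptions here, not the final design:

```js
const fsBlobStore = require('fs-blob-store')

// Shard 'abcdef.data' into 'ab/abcdef.data' so no single directory
// grows unbounded; the two-character prefix is an arbitrary choice.
function fanout (key) {
  return key.slice(0, 2) + '/' + key
}

// Wrap fs-blob-store behind the same abstract-blob-store surface,
// rewriting keys on the way in.
function createFanoutStore (path) {
  const store = fsBlobStore(path)
  return {
    createWriteStream (opts, cb) {
      return store.createWriteStream({ key: fanout(opts.key) }, cb)
    },
    createReadStream (opts) {
      return store.createReadStream({ key: fanout(opts.key) })
    },
    exists (opts, cb) { store.exists({ key: fanout(opts.key) }, cb) },
    remove (opts, cb) { store.remove({ key: fanout(opts.key) }, cb) }
  }
}
```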