Tips for adding large datasets into ipfs #212
I like it 👍 Can we just move it to the docs/ directory in go-ipfs? I'm afraid here it'll get lost quickly.
How about making a blogpost about it for extra discoverability?
Not duplicating files when adding (experimental)
From: ipfs/kubo#3397 (comment)
Change the config to enable the experimental setting. Before adding files, create a new repository and run the daemon at the same level as the directory you want to add, so the repository and the data directory sit side by side. Then, when adding files, pass the corresponding flag.
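A sketch of how this might look with current go-ipfs, assuming the setting is Experimental.FilestoreEnabled and the flag is ipfs add --nocopy (the exact names were not preserved in the comment above; the paths are placeholders):

export IPFS_PATH=~/work/.ipfs
ipfs init

# enable the filestore experiment (assumed setting name)
ipfs config --json Experimental.FilestoreEnabled true

# repository and data side by side, e.g.:
#   ~/work/.ipfs        <- IPFS_PATH
#   ~/work/my-dataset/  <- data to add
ipfs daemon &

# add by reference instead of copying blocks into the repo (assumed flag)
ipfs add --nocopy -r ~/work/my-dataset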
Fetching large files
Much of the opening comment applies, but you'll also want to start the receiving daemon with:
Hi, I need to add about 1 billion tiny objects (about 100 bytes each) using IPFS and create a single CID to allow easy pinning for other nodes. Can you please give me advice on how to load the objects in my case?
I would make sure to use badger as the datastore. Then I would use 'ipfs
files' to create the virtual directory one item at a time.
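A rough sketch of that approach; the badgerds profile and the ipfs files (MFS) commands are real go-ipfs features, but the object names and layout here are only illustrative:

# initialize a repo backed by badger instead of flatfs
ipfs init --profile=badgerds

# build a virtual directory in MFS, one object at a time
ipfs files mkdir /objects
echo -n "<100 bytes of data>" | ipfs files write --create /objects/obj-000000001
# ...repeat (or script) for each object...

# read off the single root CID to share for pinning
ipfs files stat --hash /objects

With directories this large, you will also want the directory sharding option described further down.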
Hi, thanks for the fast reply.
Take a look at
Also, I would ask people on IRC for faster help on this. It's likely that I will miss a notification here and leave you hanging for a long time.
Some of my thoughts on adding lots of data to ipfs:
go-ipfs is currently still alpha software. It is designed to handle absurdly
huge amounts of data across vast expanses of spacetime, but our current
implementation has its fair share of inefficiencies. This guide will serve as
a collection of optimization notes and best practices for efficiently storing
large amounts of information in ipfs.
Daemon Configuration
This section discusses configurations to apply before starting the process of
ingesting data into ipfs.
Set flatfs 'NoSync'
ipfs config --json Datastore.NoSync true
Ipfs currently stores all data blocks in flat files on disk. There is quite a
way to go in optimizing this storage engine, but one quick optimization for
now is to disable some excess fsync calls made by the code. The drawback is
that if the machine ipfs is running on crashes unexpectedly (without proper
disk unmounting), some recently added data may be lost.
Disable Reproviding
ipfs config Reprovider.Interval "0"
By default, the ipfs daemon will announce all of its content to the dht once a
day. This works great for small to medium sized datasets, but for huge datasets
this becomes incredibly costly. Until we optimize the content routing system
(see: #162), it's best to disable this
feature.
Directory sharding
ipfs config --json Experimental.ShardingEnabled true
If your dataset contains huge directories (1k+ entries), sharding will enable ipfs to handle them better (without it, ipfs might hang or crash without any notice).
The Add Process
The primary way to get data into ipfs is through the ipfs add command.
There are a few optimizations here, and different things to note, that will aid
in efficiently getting data ingested.
'Local' adding
When content is added to ipfs in this way, we automatically start announcing
the content to the dht as it is added. For huge masses of data, we would prefer
not to do that given the cost. To avoid this, pass the --local flag when
invoking ipfs add. For example:
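Something along these lines, with the path standing in for your dataset:

ipfs add --local -r /path/to/huge-dataset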
Raw Leaves
All file data that goes into ipfs is broken into chunks, and built into a
merkledag. Initially, the leaf nodes of the dag had some amount of framing.
Recently (still in master at the time of writing; it should ship in 0.4.5) we added an
option to ipfs add that allows us to create leaf data nodes without that framing.
This cuts off roughly 12 bytes per 256k chunk, but the real benefit it provides
is making the blocks stored on disk evenly divisible by 4096, resulting in
fewer wasted disk blocks.
Example:
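Something like the following, using the --raw-leaves flag that exposes this option (the path is a placeholder):

ipfs add --raw-leaves -r /path/to/huge-dataset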
Breaking Up Adds
ipfs add calls are not currently interruptible; if something happens during the
add, you will have to restart from the beginning (though previously added
segments will progress much more quickly). To mitigate this risk, it is
generally advisable to add smaller amounts of data and then patch the pieces
together afterwards. This process might look something like this:
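One way this might look, stitching separately added pieces into a single root with ipfs object patch (directory names are placeholders; -Q prints only the final hash):

# add each piece separately; a failed add only loses that piece
HASH1=$(ipfs add -r -Q --local part1/)
HASH2=$(ipfs add -r -Q --local part2/)

# start from an empty unixfs directory and patch the pieces in
ROOT=$(ipfs object new unixfs-dir)
ROOT=$(ipfs object patch $ROOT add-link part1 $HASH1)
ROOT=$(ipfs object patch $ROOT add-link part2 $HASH2)

# final combined directory hash
echo $ROOT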