Scalable storage onboarding and maintenance (project Neutron) #240
Replies: 3 comments
-
I think more resources should be allocated toward onboarding/retrieval of real-world data, or toward the VM, instead of further upgrading network bandwidth/capacity.
-
@anorth does the team have a timeline and/or roadmap for specs on Project Neutron?
-
We're not planning to work on Neutron until the growth in network committed storage, projected into the future, threatens to exceed the processing capacity of the network. At current growth rates, the current implementation will be fine for years. So other projects that directly improve protocol utility are taking our focus first.
-
Some people from the Filecoin team have been working on the next iteration of scalable storage growth and capacity for the Filecoin network. The recent Hyperdrive network upgrade unlocked a big multiple of capacity, but we expect mining demand to rise over time to meet this and again be limited by blockchain throughput. In the next iteration of improvements we aim to solve this problem for the long term, enabling exponential network growth. This effort is known as project Neutron (after the density of neutron stars).
We're still fleshing out many details ahead of a full FIP, but I'm filing this issue to show where we're headed and as a reference for other efforts. We'll publish more extensive design documents once we're more confident in the approach.
@nicola @Kubuxu @nikkolasg
Background
The Filecoin network’s capacity to onboard new storage and to maintain proofs of committed storage are limited by blockchain transaction processing throughput. The recent Hyperdrive network upgrade raised onboarding capacity to about 500-1000 PiB/day, but we expect this capacity to become saturated.
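To put that capacity in per-epoch terms, here is a rough conversion (a sketch only; it assumes 32 GiB sectors and Filecoin's 30-second epochs, i.e. 2880 epochs per day):

```python
# Rough conversion of onboarding capacity into per-epoch sector counts.
# Assumes 32 GiB sectors and 30-second epochs (2880 epochs/day);
# illustrative only, not the protocol's exact accounting.
GIB_PER_PIB = 1024 * 1024
EPOCHS_PER_DAY = 2880
SECTOR_GIB = 32

def sectors_per_epoch(pib_per_day):
    return pib_per_day * GIB_PER_PIB / SECTOR_GIB / EPOCHS_PER_DAY

# At 500-1000 PiB/day this is roughly 5,700-11,400 sector
# commitments that must be processed every epoch.
```

Each of those commitments costs gas to process, which is why the capacity saturates.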
As onboarding rates increase, the share of network throughput consumed by maintaining proofs of already-committed storage will also increase, eventually becoming a significant cost for the network.
Problem detail
Validation of the Filecoin blockchain is subject to a fixed amount of computational work per epoch (including state access), enforced as the block gas limit. Many parts of the application logic for onboarding and maintaining storage incur a constant computational and/or state cost per sector. This results in blockchain validation costs that are linear both in the rate of storage growth and in the total amount of committed storage.
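The two linearities can be captured in a back-of-envelope cost model (the constants below are hypothetical, chosen only to illustrate the shape of the cost, not the protocol's actual gas numbers):

```python
# Hypothetical per-sector gas constants, for illustration only.
GAS_PER_ONBOARDED_SECTOR = 1.0    # fixed cost to process one sector commitment
GAS_PER_MAINTAINED_SECTOR = 0.01  # amortized Window PoSt cost per sector, per epoch

def epoch_validation_gas(onboarded_sectors, committed_sectors):
    """Total per-epoch gas is linear in both the onboarding rate and the
    total committed storage -- the two linearities described above."""
    return (onboarded_sectors * GAS_PER_ONBOARDED_SECTOR
            + committed_sectors * GAS_PER_MAINTAINED_SECTOR)

# Doubling committed storage doubles the maintenance term, so with a
# fixed gas limit, growth in either term eventually crowds out the other.
```

Under a fixed block gas limit, both terms compete for the same budget, which is why growth in total storage eventually squeezes onboarding capacity.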
Linearities exist in:
- verification of PoRep and PoSt proofs (one proof per sector);
- provision of the public inputs (e.g. CommR) for each sector's proof;
- per-sector on-chain state, dominated by the replica commitment (CommR);
- per-partition state mutation when accounting Window PoSt.
We wish to remove or reduce all such linear costs from the blockchain validation process in order to remove limitations on the rate of growth, both now and in the long term, when power and growth are significantly (exponentially) higher. SNARKPack goes a long way toward addressing the linear cost of PoRep and PoSt proof verification. However, there remain linear costs associated with providing the public inputs for each sector’s proof.
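The residual public-input cost can be seen with hypothetical sizes (the constants are illustrative, not SNARKPack's actual parameters): aggregation shrinks the proof to roughly logarithmic size, but one CommR-sized public input per sector must still be supplied.

```python
import math

# Hypothetical sizes, for illustration only.
PUBLIC_INPUT_BYTES = 32    # one CommR-sized public input per sector
LOG_PROOF_BASE = 192       # per-level contribution to an aggregated proof

def onchain_bytes(n_sectors):
    """An aggregated proof grows ~log(n), but the public inputs
    (one per sector) remain linear in the sector count."""
    proof = LOG_PROOF_BASE * math.ceil(math.log2(n_sectors))
    inputs = PUBLIC_INPUT_BYTES * n_sectors
    return proof, inputs

proof, inputs = onchain_bytes(8192)
# For large n the linear public-input term dominates the total.
```

This is why aggregating proofs alone does not remove the linearity; the inputs themselves must also be aggregated or restructured.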
Goals
Our goal is to enable arbitrary amounts of storage to be committed and maintained by miners within a fixed network transaction throughput.
This means redesigning storage onboarding and maintenance state and processes to remove linear per-sector costs, or to reduce the constant factors to well below practical limits. We want to do this while maintaining security, micro- and macro-economic attractiveness, discoverable and verifiable information about deals, and reasonable miner operational requirements.
This effort is seeking a solution that is within reach for implementation in the next 3-6 months (which means relying on PoRep and ZK proof technologies that already exist today), and that is good enough that we won’t have to re-solve the problem within a few years.
Of course there exist other, orthogonal approaches to the general problem of scaling, but these are generally longer and harder propositions (e.g. sharding, layer 2 state).
Out of scope
This proposal does not attempt to solve exponential growth in deals, except by making it no harder to solve that problem later. We think this sequencing is reasonable because (a) deals are in practice rare at present, and (b) off-chain aggregation into whole-sector-size deals mitigates costs in the near term. We expect exponential deal growth to be a challenge to address in 2022.
Key ideas
The premise behind this proposal is that we cannot store or access a fixed-size piece of state for each 32 or 64 GiB sector of storage, either while onboarding or maintaining storage. Specifically, we cannot store or access a replica commitment (CommR) per sector, nor mutate per-partition state when accounting Window PoSt. CommR in aggregate today accounts for over half of the state tree at a single epoch, and Window PoSt partition state manipulation dominates the cost of maintenance.
The key design idea is to maintain largely the same data and processes we have today, but applied to an arbitrary number of sectors as a unit. The proposal will redesign state, proofs and algorithms to enable a miner to commit to and maintain units of storage larger than one sector, with cost that is logarithmic or better in the amount of storage. Thus, with a fixed chain processing capacity, the unit of accounting and proof can increase in size over time to support unbounded storage growth and capacity. We will assume that miners will increase their unit of commitment if blockchain transaction throughput is near capacity.
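One generic way to get logarithmic cost per unit is to commit to many per-sector values under a single vector commitment, so that only one root lives on chain and any individual sector can be opened with a proof of logarithmic size. The sketch below uses a plain SHA-256 Merkle tree for illustration; it is not the actual Neutron design, just a demonstration of the cost shape.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to many per-sector values (e.g. CommRs) under one root."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Inclusion proof for one leaf: its length is ~log2(n) sibling hashes."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])   # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

# 1024 hypothetical 32-byte sector commitments under one 32-byte root;
# opening any one of them takes only log2(1024) = 10 hashes.
leaves = [bytes([i % 256]) * 32 for i in range(1024)]
root = merkle_root(leaves)
```

Doubling the number of sectors per commitment unit adds only one hash to each opening, which is the logarithmic scaling the proposal is after.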