fix bug in pinsets and add a stress test for the scenario #3273
Conversation
Don't merge this before I CR it -- I will do so in the next day or two.
@jbenet SGTM
@whyrusleeping can you explain more what's going on? We should have a long comment somewhere in the code explaining the algorithm, and then how this change affects it.
@jbenet will do
@jbenet added a large comment, and a third commit that cleans up the logic around the bugged area a bit to make things a little more obvious and readable.
@jbenet Can you review this? This is fairly important and it's been 'a day or two'.
The fix looks solid to me. The bug was simple: the recursive fanout was done with a fanout of 1<<32 and only later contracted to a fanout of 256, thus overwriting fanned-out keys with data whenever two hashes were equal modulo 256.
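To make the failure mode concrete, here is a toy Go sketch of the pattern being described (the names, hashing, and key counts are illustrative, not the actual go-ipfs pin.Set code): entries are first spread over the full 32-bit hash space and only contracted to 256 buckets afterwards, so keys whose hashes agree modulo 256 collapse onto the same slot.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const fanout = 256

func hash32(key string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32()
}

func main() {
	// Step 1: fan out over the full 32-bit hash space (effectively a fanout of 1<<32).
	wide := map[uint32]string{}
	for i := 0; i < 1000; i++ {
		k := fmt.Sprintf("pin-%d", i)
		wide[hash32(k)] = k
	}

	// Step 2: contract into 256 buckets. Keys whose hashes agree modulo 256
	// collapse onto the same slot here, silently dropping pins.
	var buckets [fanout]string
	for h, k := range wide {
		buckets[h%fanout] = k
	}

	occupied := 0
	for _, k := range buckets {
		if k != "" {
			occupied++
		}
	}
	fmt.Printf("%d pins in, only %d bucket slots occupied\n", len(wide), occupied)
}
```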
Force-pushed from 8a15a7a to cf4dab6
License: MIT Signed-off-by: Jeromy <[email protected]>
License: MIT Signed-off-by: Jeromy <[email protected]>
Switched from using a map to an array since the bounds are small and fixed. This should save us some significant time on accesses. License: MIT Signed-off-by: Jeromy <[email protected]>
Force-pushed from cf4dab6 to ec9ce83
To everybody reading this, please make sure to have backups of your pinsets:
License: MIT Signed-off-by: Jeromy <[email protected]>
I removed all of the old 'multiset' code that made things much more confusing. I also cleaned up a few different things and added a bunch more comments. I think it's much easier to understand what's going on now.
I have a program that is able to find 'lost' hashes if you haven't run a garbage collection yet: https://github.com/whyrusleeping/ipfs-see-all. I'll be updating the build instructions soon and providing pre-built binaries to download from dist.ipfs.io shortly.
Hey @tv42 -- could you CR this and verify it's right?
1f853c5 LGTM
Doing some more tests: the bug is triggered very reliably any time we hit more than 8192 pins.
8192 is the point at which we start hashing pins; from my analysis you will have an almost 100% failure rate at 8192+256+1.
But the estimated length might be higher than 8192 even if the pin count is lower than 8192, so we will start hashing earlier; if the estimated length is greater than or equal to 8192, you need just 256+1 pins to trigger the faulty code.
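A rough sketch of that trigger condition, with the thresholds discussed above spelled out as constants (the names and the simplified check are illustrative, not the actual go-ipfs code):

```go
package main

import "fmt"

const (
	maxItems = 8192 // estimated-length threshold at which pins stop being stored flat
	fanout   = 256  // number of buckets used once we recurse
)

// shouldBucket mirrors the condition being discussed: bucketing kicks in
// based on the *estimated* length, which can be higher than the real pin count.
func shouldBucket(estimatedLen int) bool {
	return estimatedLen >= maxItems
}

func main() {
	fmt.Println(shouldBucket(8000)) // false: pins kept as a flat list
	fmt.Println(shouldBucket(8192)) // true: pins split across 256 buckets
	// Once bucketing is active, fanout+1 pins are enough to hit the faulty path.
	fmt.Println(fanout + 1)
}
```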
The tests show a zero percent failure rate up until 8192 pins; after that, failures show up reliably.
Yes, as it will then skip this branch: https://github.com/ipfs/go-ipfs/pull/3273/files#diff-15e7154f15253315d2a8ba7e1744d9e7L116 and proceed to split the 8192 pins into buckets.
Gonna go ahead and merge this, no sense waiting any longer.
Sometime after having ~5000 items in a pinset, we start to get hash collisions when mapping the 32-bit int space onto an 8-bit integer space. The easy fix is to modulo the hash output down into our final key space before we even get to that point.
Longer term, I want to see us using the HAMT code for this purpose (though it's not yet ready).
License: MIT
Signed-off-by: Jeromy <[email protected]>
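For reference, a minimal sketch of the kind of stress test the PR title refers to (the map-backed set and names here are illustrative stand-ins, not the real pin.Set test): add well past the 8192-item threshold and verify that nothing was silently dropped.

```go
package pinset

import (
	"fmt"
	"testing"
)

// set is a trivial stand-in for the pin-set type under test; the real
// go-ipfs pin set works over IPFS objects rather than plain strings.
type set map[string]bool

func (s set) add(k string)      { s[k] = true }
func (s set) has(k string) bool { return s[k] }

// TestManyPins adds just past the point where the bug reliably triggers
// (8192 + 256 + 1 pins) and then checks that every pin is still present.
func TestManyPins(t *testing.T) {
	s := set{}
	const n = 8192 + 256 + 1
	keys := make([]string, n)
	for i := range keys {
		keys[i] = fmt.Sprintf("pin-%d", i)
		s.add(keys[i])
	}
	for _, k := range keys {
		if !s.has(k) {
			t.Fatalf("lost pin %s", k)
		}
	}
}
```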