Re-add old kv679 test and add new test for further dataloss edge case #1296
kv679_dataloss_fb.erl was part of the original kv679 test suite, but at the time of 2.1 it was moved to its own branch; see #719 for the discussion and decision. In short, Riak Test is made up of passing tests, and kv679_dataloss_fb.erl still failed. Adding this back to a branch off develop, as I now have a fix for this issue.
There is also a new test, kv679_dataloss_fb2.erl, which shows another dataloss edge case in Riak, even in 2.1 with per-key-actor-epochs enabled. The test is a little convoluted, being based on a quickcheck counterexample included in the riak_kv/test directory. In short, both this test and the existing kv679_dataloss_fb test show that even with multiple replicas acking/storing a write, a single disk error on a single replica is enough to cause acked writes to be silently and permanently lost. For a replicated database, that is bad.