Conversation

@jadeallenx
Contributor

Previously, the wait_for_reads function would execute
another rt:systest_read call and return a count of
errors, even though we had already asserted above that
the error list is empty.

If the wait_until function does not return true, times
out, or hits the retry limit, the running test ought to
fail anyway, without needing an additional read to fail.

So the extra call is redundant; we just return 0 if
rt:wait_until completes successfully.
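For reference, a minimal sketch of the changed function; the arity-5 signature and the argument names (Node, Start, End, Bucket, R) are assumptions based on the PR title, not a verbatim copy of repl_util.erl:

wait_for_reads(Node, Start, End, Bucket, R) ->
    %% Retry until systest_read reports no errors. rt:wait_until
    %% returns ok on success and {fail, Result} once its retries are
    %% exhausted, so the match below fails the test in that case.
    ok = rt:wait_until(Node,
                       fun(_) ->
                           rt:systest_read(Node, Start, End, Bucket, R) == []
                       end),
    %% Previously: length(rt:systest_read(Node, Start, End, Bucket, R)).
    %% A successful wait_until already implies an empty error list,
    %% so just return 0.
    0.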

@jadeallenx
Contributor Author

This PR addresses an issue we looked at while reviewing #931. This particular function is called all over the repl tests, so some care should be taken before merging.

@jadeallenx
Contributor Author

These modules have calls to repl_util:wait_for_reads:

repl_aae_fullsync.erl
repl_aae_fullsync_bench.erl
repl_aae_fullsync_custom_n.erl
repl_rt_heartbeat.erl
repl_rt_overload.erl
repl_rt_pending.erl
repl_util.erl
replication2.erl
replication2_dirty.erl
rt_cascading.erl
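A caller list like this can be regenerated from the repo root with grep; the tests/ path is an assumption about where the test modules live:

grep -rl "repl_util:wait_for_reads" tests/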

@jadeallenx jadeallenx changed the title Return 0 instead of doing another rt:sysread Return 0 instead of doing another rt:sys read in repl_util:wait_for_reads/5 Dec 1, 2015
@jadeallenx jadeallenx changed the title Return 0 instead of doing another rt:sys read in repl_util:wait_for_reads/5 Return 0 instead of doing another rt:sysread in repl_util:wait_for_reads/5 Dec 1, 2015
@hazen

hazen commented Dec 1, 2015

./riak_test -c rtdev -t repl_aae_fullsync \
> -t repl_aae_fullsync_bench \
> -t repl_aae_fullsync_custom_n \
> -t repl_rt_heartbeat \
> -t repl_rt_overload \
> -t repl_rt_pending \
> -t repl_util \
> -t replication2 \
> -t replication2_dirty \
> -t rt_cascading
Test Results:
replication2-bitcask              : pass
repl_rt_pending-bitcask           : pass
repl_rt_overload-bitcask          : pass
repl_rt_heartbeat-bitcask         : pass
repl_aae_fullsync_custom_n-bitcask: pass
repl_aae_fullsync_bench-bitcask   : pass
repl_aae_fullsync-bitcask         : fail
rt_cascading-bitcask              : pass
replication2_dirty-bitcask        : pass
---------------------------------------------
1 Tests Failed
8 Tests Passed
That's 88.88888888888889% for those keeping score

@hazen

hazen commented Dec 1, 2015

It looks like repl_aae_fullsync fails quite a bit for unrelated reasons.

@jadeallenx
Contributor Author

@JeetKunDoug has been wrestling with this test too; I think its failures are unrelated to this particular change.

@hazen

hazen commented Dec 1, 2015

@mrallen1 I tested this and it looks good. However, looking at the usage in the tests, they test a single node rather than an array of nodes, so the assert we expected to happen earlier in https://github.com/basho/riak_test/blob/mra/repl-util-read-fix/src/rt.erl#L675 never fires. So if there is an error, it is now reported as if there were no problems.

@jadeallenx
Contributor Author

@javajolt From my reading of rt:wait_until/2, we should still be ok, because we assert that the return value of the function is ok on line 198. rt:wait_until will eventually return {fail, Result} if Reads is not empty across the span of retry attempts.
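For context, a rough sketch of the retry behavior being relied on here; the real rt:wait_until takes its retry count and delay from the harness configuration, so the names and arity below are illustrative only:

wait_until(Fun, Retries, Delay) ->
    %% Re-run Fun until it returns true, sleeping Delay ms between
    %% attempts; once retries are exhausted, surface the last result.
    case Fun() of
        true ->
            ok;
        Result when Retries =< 1 ->
            {fail, Result};
        _ ->
            timer:sleep(Delay),
            wait_until(Fun, Retries - 1, Delay)
    end.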

@hazen

hazen commented Dec 2, 2015

Fair enough. The length will only be 0 if it succeeds anyway.
👍

hazen pushed a commit that referenced this pull request Dec 2, 2015
Return 0 instead of doing another rt:sysread in repl_util:wait_for_reads/5
@hazen hazen merged commit be37c6b into riak/2.0 Dec 2, 2015
@hazen hazen deleted the mra/repl-util-read-fix branch December 2, 2015 19:21