
Performance

brianjo edited this page Oct 15, 2014 · 30 revisions

## Speed

In this initial test, we measure the time it takes a replicated key/value pair to become available on a replica in another region. We write 1k key/value pairs to Dynomite in one region, then randomly poll the other region for 20 of those keys. The value for each key is simply the timestamp at which the write started, so the client in the other region reads back those timestamps and computes the durations. We repeat the same experiment several times and take the average, which gives a rough idea of replication speed.
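The measurement loop above can be sketched as follows. This is a minimal illustration, not the actual test harness: `DictClient` is a hypothetical in-memory stand-in for a per-region Dynomite client with `get`/`set` operations, and all names here are assumptions.

```python
import time

def measure_replication_latency(writer, reader, num_keys=20, poll_interval=0.001):
    """Write timestamped values through one client, poll the other,
    and return the average observed replication delay in seconds."""
    durations = []
    for i in range(num_keys):
        key = f"latency-test:{i}"
        # The value is the write-start timestamp, as described above.
        writer.set(key, str(time.time()))
        while True:  # poll the other region until the key shows up
            value = reader.get(key)
            if value is not None:
                durations.append(time.time() - float(value))
                break
            time.sleep(poll_interval)
    return sum(durations) / len(durations)

# In-memory stand-in for a region; a real run would use two clients
# pointed at Dynomite nodes in different regions.
class DictClient:
    def __init__(self, store):
        self.store = store

    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

shared = {}
avg = measure_replication_latency(DictClient(shared), DictClient(shared), num_keys=5)
```

Because both stub clients share one dict, replication here is instantaneous; against two real regions the same loop would report the cross-region delay.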

We expect this latency to remain more or less constant as we add code-path optimizations and enhancements to the replication strategy itself (optimizations will improve speed, while new features may add latency).

Result:
Over 5 iterations of this experiment, the average replication duration is around 85 ms. (Note that these durations are measured at the client layer, so the real numbers should be smaller.)
## Consistency

In this initial test, we measured the data consistency of writes becoming available on all range-owning replicas (of the milestone 1 codebase) by writing 1,000 key/value pairs to Dynomite in one region, then reading the same keys from the other region. From this we can derive how consistent the replication is.

To measure replication consistency, after running the ‘insert’ script described above, we independently queried the other region for the full key set and compared the results. As of 16 Jan 2014, the convergence rate was 100%; that is, 100% of the data was consistent between the two regions. However, we expect some consistency errors to appear as we stress the same system with more traffic. To handle those, we will need to implement techniques such as hinted handoff, retries, or repair.
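The convergence check can be sketched as below. This is an illustrative snippet, not the project's script: `region_a` and `region_b` are hypothetical snapshots of the key set as read from each region, stubbed here as plain dicts.

```python
def convergence_rate(expected, observed):
    """Fraction of keys whose values match between the two regions."""
    if not expected:
        return 1.0
    matched = sum(1 for k, v in expected.items() if observed.get(k) == v)
    return matched / len(expected)

# A fully replicated snapshot yields 100% convergence, as in the
# 16 Jan 2014 run described above.
region_a = {f"k{i}": f"v{i}" for i in range(1000)}
region_b = dict(region_a)
rate = convergence_rate(region_a, region_b)
```

Dropped or stale keys in `region_b` would lower the rate proportionally, which is what the repair techniques mentioned above are meant to catch.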