
Release 0.14.0 #1117

Closed
busbey opened this issue Mar 28, 2018 · 56 comments

@busbey
Collaborator

busbey commented Mar 28, 2018

follow-up from #981. There's already a fair bit of churn on master in the ~9 months since the 0.13.0-staging branch started.

Additionally, we have a good deal of testing work left over that never happened in the 0.13.0 RC process.

@busbey busbey added the release label Mar 28, 2018
@twblamer
Contributor

Just curious, what testing work is needed? Is this a matter of manually testing the bindings, the individual features/PRs, or both?

@busbey
Collaborator Author

busbey commented Apr 12, 2018

yep! we'll need to test a number of bindings and any specific fixes that are supposed to be present.

The first step is to put up a list of which bindings / features / etc. need checking. It's on my todo list, but I've been a bit swamped on a different project lately.

@busbey
Collaborator Author

busbey commented Apr 12, 2018

generally speaking, testing a binding should be straightforward. It's essentially:

  1. Download RC and build binary bits
  2. stand up an instance of whatever the backing datastore is (a cluster deploy is optimal, but single-node is fine)
  3. follow the README for the specific binding for configuration
  4. run the recommended sequence of workloads
  5. post results here, with details about versions / sizes / etc.

using either the binding-specific tarball or the omnibus is fine, but which one got used should probably be noted in the results.
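
For example, a minimal sketch of steps 1 and 4 (the binding name, artifact name, and the mongodb.url property are illustrative; check each binding's README for the real configuration knobs):

  # 1. grab the RC tarball for the binding and unpack it
  tar xzf ycsb-mongodb-binding-0.14.0-RC1.tar.gz
  cd ycsb-mongodb-binding-0.14.0-RC1
  # 4. load the data set once, then run each workload in the recommended sequence
  bin/ycsb load mongodb -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb > load.result.txt
  bin/ycsb run mongodb -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb > workloada.result.txt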

@busbey
Collaborator Author

busbey commented May 18, 2018

merged #908

@busbey
Collaborator Author

busbey commented May 18, 2018

merged #1098 and #1147 .

looks like folks are in progress on #1051

@busbey
Collaborator Author

busbey commented May 19, 2018

would like to get #1148 "Update HBase 2 version to 2.0.0 GA" in for polish on the hbase addition

@busbey
Collaborator Author

busbey commented May 19, 2018

As a reminder, this release needs to cover essentially everything since 0.12.0. So the release notes will end up copying some things from the 0.13.0 release notes (like known incompatibilities)

This list of things to check presumes both #1148 and #1051 will land before I cut the feature branch.

New Topics

Needs a review because they'll get pushed in the release notes:

Datastore bindings that changed and will need to be tested to stay in the "verified" category

Newly added datastore bindings

Issues with core

@busbey
Collaborator Author

busbey commented May 19, 2018

Another modified store I just merged:

@busbey
Collaborator Author

busbey commented May 21, 2018

found some headers missing before making the staging branch. put up #1150 to fix it.

@twblamer
Contributor

Looks like the two output fixes that were done for 0.13.0 aren't present in master: 88ffdbb and 46933b1

@busbey
Collaborator Author

busbey commented May 24, 2018

excellent catch!

@busbey
Collaborator Author

busbey commented May 24, 2018

okay, I cherry-picked #1060, #1084, and #1085 to the master branch.

@busbey
Collaborator Author

busbey commented May 24, 2018

@robertpang
Contributor

robertpang commented May 25, 2018

@busbey Just tested 0.14.0-RC1 with Apache Cassandra and @YugaByte datastores with no issues. Here are the test configs and results:

Datastore setup:

  • 3-node cluster on Google Cloud Platform
  • Each node is an n1-standard-16 in the same availability zone with:
    • 16 vCPUs (Intel® Xeon® CPU @ 2.20GHz)
    • 60GB RAM
    • 2 x 375 GB direct attached SSD
    • OS: CentOS 7.5

YCSB client setup:

  • n1-standard-4 in the same availability zone

YCSB workload parameters:

  • recordcount=1000000
  • operationcount=10000000
  • maxexecutiontime=180
  • threadcount=256
  • cassandra.readconsistencylevel=QUORUM
  • cassandra.writeconsistencylevel=QUORUM

Apache Cassandra config:

Results:
workloada-transaction.dat:[OVERALL], Throughput(ops/sec), 66747.20829801293
workloadb-transaction.dat:[OVERALL], Throughput(ops/sec), 51192.273091312025
workloadc-transaction.dat:[OVERALL], Throughput(ops/sec), 54417.28508822448
workloadd-transaction.dat:[OVERALL], Throughput(ops/sec), 66660.44502513099
workloade-transaction.dat:[OVERALL], Throughput(ops/sec), 8650.39383899774
workloadf-transaction.dat:[OVERALL], Throughput(ops/sec), 35613.0058469268

YugaByte DB config:

  • v1.0.0 release
  • Client driver:
    • groupId: com.yugabyte
    • artifactId: cassandra-driver-core
    • version: 3.2.0-yb-12

Results:
workloada-transaction.dat:[OVERALL], Throughput(ops/sec), 72913.93239420188
workloadb-transaction.dat:[OVERALL], Throughput(ops/sec), 75755.85403362045
workloadc-transaction.dat:[OVERALL], Throughput(ops/sec), 72795.03829019015
workloadd-transaction.dat:[OVERALL], Throughput(ops/sec), 70898.85569246912
workloade-transaction.dat:[OVERALL], Throughput(ops/sec), 14612.108483944552
workloadf-transaction.dat:[OVERALL], Throughput(ops/sec), 50118.351472330716

Please let me know if you need more details or the full output log.

@busbey
Collaborator Author

busbey commented May 29, 2018

Thanks @robertpang! that'll do nicely.

@busbey
Collaborator Author

busbey commented May 29, 2018

I sent pings for testing to the user@ lists for:

  • Apache HBase
  • Apache Geode
  • Apache Accumulo
  • Apache Kudu
  • Apache Solr

@busbey
Collaborator Author

busbey commented May 29, 2018

known issues for this release should include #1155

@metatype
Contributor

@upthewaterspout and I tested the Geode datastore versions 1.2.0, 1.3.0, and 1.6.0 (latest) against the Geode driver. Everything looks good!

@busbey
Collaborator Author

busbey commented May 30, 2018

thanks geode folks!

@fwang29

fwang29 commented May 30, 2018

I tested 0.14.0-RC1 with Kudu (single node) on CentOS 6.6 and tweaked some workload parameters as below:
kudu_buffer_num_ops=1000
kudu_block_size=2048
kudu_table_num_replicas=1

Nothing looks wrong!

@busbey
Collaborator Author

busbey commented May 31, 2018

thanks @fwang29! what version of Kudu did you use?

@fwang29

fwang29 commented May 31, 2018

@busbey NP! Sorry, I forgot to mention it. It was version 1.8.0.

@twblamer
Contributor

twblamer commented Jun 1, 2018

I tested the mongodb binding and had no issues.

YCSB build:

  • Binding specific tarball (ycsb-mongodb-binding-0.14.0-RC2-SNAPSHOT.tar.gz)

Host details:

  • 2 Intel Xeon CPU E5-2690 v4 @ 2.60GHz (28 threads / 14 cores each)
  • 256 GB RAM
  • 1600 GB NVMe SSD
  • OS: Fedora 25
  • Java version: OpenJDK 8 (java-1.8.0-openjdk-1.8.0.151-1.b12.fc25.x86_64)

Client details:

  • YCSB client executed on same host as DB

Database setup:

  • MongoDB Community Server 3.6.4 (mongodb-org-server-3.6.4-1.el7.x86_64)
  • WiredTiger storage engine
  • Standalone database (no replication or sharding)

YCSB workload parameters:

  • recordcount=100000000
  • operationcount=100000000
  • threadcount=64

Disclaimer: this was a "fire and forget" run with no tuning.

Results for mongodb client

./load1.result.txt:         [OVERALL], Throughput(ops/sec), 188692.0621023315
./workloada.result.txt:     [OVERALL], Throughput(ops/sec), 48469.972609618475
./workloadb.result.txt:     [OVERALL], Throughput(ops/sec), 178248.0357066465
./workloadc.result.txt:     [OVERALL], Throughput(ops/sec), 250913.95407772812
./workloadf.result.txt:     [OVERALL], Throughput(ops/sec), 45878.48349590374
./workloadd.result.txt:     [OVERALL], Throughput(ops/sec), 247723.42175408002
./load2.result.txt:         [OVERALL], Throughput(ops/sec), 119994.28827187826
./workloade.result.txt:     [OVERALL], Throughput(ops/sec), 29943.45477999346

Results for mongodb-async client

./load1.result.txt:         [OVERALL], Throughput(ops/sec), 45912.81798643827
./workloada.result.txt:     [OVERALL], Throughput(ops/sec), 24325.67407050991
./workloadb.result.txt:     [OVERALL], Throughput(ops/sec), 56964.15245885764
./workloadc.result.txt:     [OVERALL], Throughput(ops/sec), 72597.38939787725
./workloadf.result.txt:     [OVERALL], Throughput(ops/sec), 24995.731978764627
./workloadd.result.txt:     [OVERALL], Throughput(ops/sec), 70591.75657703396
./load2.result.txt:         [OVERALL], Throughput(ops/sec), 43336.09962102581
./workloade.result.txt:     [OVERALL], Throughput(ops/sec), 5611.603314190471

@busbey
Collaborator Author

busbey commented Jun 4, 2018

I'd like to wrap up this RC by the end of the week. Thanks so much to the folks who have tested things so far!

IMHO we already have enough tested datastore bindings to close things out. Since I have most of the testing for the Apache HBase-related changes done, I'll make a go of finishing that up before wrapping up.

I can test that the basics work on Windows as well, unless someone else wants to take it on.

Bindings currently slated to be listed as "untested / your mileage may vary", with a ping to folks who showed up on related PRs:

@haihuang-ml
Contributor

Google Datastore

I have this continuously running in our test environment and see no problem.

@busbey
Collaborator Author

busbey commented Jun 4, 2018

@haih-g could you have it test specifically against the 0.14.0-RC1 tag?

@isuntsov-gridgain
Contributor

@busbey where can I get the current list of untested bindings?

@busbey
Collaborator Author

busbey commented Jun 7, 2018

My comment from ~3 days ago is still fairly up to date. IIRC, only Google Datastore has since been tested.

I'm probably only going to test the HBase 1.2+ bindings, which would mean HBase 0.98 and HBase 1.0 will also go into the list.

Is that what you're looking for @isuntsov-gridgain?

@ctubbsii
Contributor

ctubbsii commented Jun 7, 2018

Although I was tagged, I won't be able to test the Accumulo bindings without detailed step-by-step instructions... I'm not a YCSB user and know next to nothing about it. However, given those instructions, I'd happily launch an Accumulo instance to test against.

@joshelser
Contributor

@ctubbsii see the README, e.g. https://github.com/brianfrankcooper/YCSB/tree/master/accumulo1.8. It's very straightforward. Just "load" and then "run" for a workload. Testing one is likely "good enough" (you don't need to do workloads A through F).
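
That load/run pair looks roughly like this (a sketch only; the connection property values are placeholders, and the accumulo1.8 README lists the ones actually required):

  PROPS="-p accumulo.zooKeepers=localhost -p accumulo.instanceName=uno -p accumulo.username=root -p accumulo.password=secret -p accumulo.columnFamily=family"
  bin/ycsb load accumulo1.8 -P workloads/workloada $PROPS   # load phase: insert the data set
  bin/ycsb run accumulo1.8 -P workloads/workloada $PROPS    # run phase: execute workload A against it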

@ctubbsii
Contributor

ctubbsii commented Jun 7, 2018

@joshelser Thanks. I didn't realize there was a README in the subdirectory with detailed instructions. Will use those. 😺

@bosher
Contributor

bosher commented Jun 7, 2018 via email

@ctubbsii
Contributor

ctubbsii commented Jun 7, 2018

Issues I found testing Accumulo 1.9.1:

  1. The build has an error: no version specified for apache-rat-plugin (see PR [build] Add missing version to rat plugin #1168)
  2. accumulo.1.7.version should be 1.7.4 (see PR [accumulo] Use latest versions of Accumulo #1167)
  3. accumulo.1.8.version should be 1.9.1 (see PR [accumulo] Use latest versions of Accumulo #1167) (yes, 1.9.1, not 1.8.1; 1.8.0 and 1.8.1 had critical data loss issues; 1.9.x replaces the 1.8 series)

I'm not sure what to look for in the testing output, but everything seemed to work fine if I set accumulo.1.8.version to 1.9.1 and ran load, followed by run, according to the accumulo1.8 README.

I tested using https://github.com/apache/fluo-uno, running workload A on a single machine with Apache Accumulo 1.9.1, Apache Hadoop 2.7.6, and Apache ZooKeeper 3.4.12.
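
For anyone reproducing this, standing up that single-machine test instance is roughly (a sketch assuming fluo-uno's documented fetch/setup commands):

  git clone https://github.com/apache/fluo-uno && cd fluo-uno
  ./bin/uno fetch accumulo   # downloads the Accumulo, Hadoop, and ZooKeeper tarballs
  ./bin/uno setup accumulo   # configures and starts a single-node instance of all three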

@busbey
Collaborator Author

busbey commented Jun 7, 2018

are the Accumulo version numbers needed for the release or would they be fine to update for next time?

@ctubbsii
Contributor

ctubbsii commented Jun 7, 2018

I think the main risk with releasing without the updated Accumulo version numbers is possible confusion. I would rate the risk low.

@isuntsov-gridgain
Contributor

isuntsov-gridgain commented Jun 8, 2018

@busbey actually I'm not familiar with any of the distributed systems on this list, so it will be a random choice.
I've started working with Aerospike.

@isuntsov-gridgain
Contributor

Guys,

I see in YCSB/pom.xml (0.14.0-staging):
<aerospike.version>3.1.2</aerospike.version>

That version was released in 2014. With 4.1+ I get compilation problems. Is that OK?

@busbey
Collaborator Author

busbey commented Jun 8, 2018

I see in YCSB/pom.xml (0.14.0-staging):
<aerospike.version>3.1.2</aerospike.version>

That version was released in 2014. With 4.1+ I get compilation problems. Is that OK?

If it works with the version advertised in YCSB then it's okay for the release. If things get tested, then we expressly call out what version(s) were tested in the release notes. ("advertised" here means either a) mentioned in the README, b) mentioned in prior release notes, or c) the version in the pom.)

For this sort of thing I'd usually suggest making sure there's a ticket about needing to update. Or, if you want to and have time to fix it now, just start with a PR; we can include an update in the next release. I'd look for feedback from whoever maintains Aerospike about whether it's worth keeping compatibility with both v3 and v4.
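
For anyone who wants to experiment with a newer client before such a PR lands, a hedged sketch (the module coordinates match the 0.14.0-staging pom; the version value is only an example and, as noted above, may not compile):

  # rebuild just the aerospike binding, overriding the pom property from the command line
  mvn -pl com.yahoo.ycsb:aerospike-binding -am clean package -Daerospike.version=4.1.2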

@rohanjayaraj
Contributor

rohanjayaraj commented Jun 8, 2018

@busbey Tested 0.14.0-RC1 with MapR-DB and MapR-JSONDB datastores and found no issues. Below are the configs and results:

Datastore setup

  • single node cluster
  • 20 CPUs, Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz
  • 256GB RAM
  • 5 x 450GB Samsung MKZ SSD
  • OS: CentOS 7.4

YCSB client setup

  • YCSB client executed on the same node

YCSB workload parameters

  • recordcount=10000000
  • operationcount=10000000
  • threadcount=32

MapR Database Setup

  • MapR Converged Community Edition 6.0.1

Results for maprdb client

load1:     [OVERALL], Throughput(ops/sec), 121410.793419535
workloada: [OVERALL], Throughput(ops/sec), 135895.41488870166
workloadb: [OVERALL], Throughput(ops/sec), 81240.70809401174
workloadc: [OVERALL], Throughput(ops/sec), 79145.23149980213
workloadf: [OVERALL], Throughput(ops/sec), 68762.54916522265
workloadd: [OVERALL], Throughput(ops/sec), 83941.91219675985
load2:     [OVERALL], Throughput(ops/sec), 133481.05236461683
workloade: [OVERALL], Throughput(ops/sec), 8598.895901766213

Results for maprjsondb client

load1:     [OVERALL], Throughput(ops/sec), 147240.70911125507
workloada: [OVERALL], Throughput(ops/sec), 87088.29010851201
workloadb: [OVERALL], Throughput(ops/sec), 70698.35838411831
workloadc: [OVERALL], Throughput(ops/sec), 76678.29620825825
workloadf: [OVERALL], Throughput(ops/sec), 37459.216278277025
workloadd: [OVERALL], Throughput(ops/sec), 68926.5380956976
load2:     [OVERALL], Throughput(ops/sec), 162593.6946165228
workloade: [OVERALL], Throughput(ops/sec), 9972.575417601596

@isuntsov-gridgain
Contributor

isuntsov-gridgain commented Jun 9, 2018

I've finished with Aerospike.
Environment:

  • Ubuntu 16.04
  • CPUs: 8
  • RAM: 32 GB
  • SSD: 400 GiB

YCSB client setup

  • YCSB client executed on the same node

Aerospike server version: 4.2.0.3

Results for Aerospike client:
load: [OVERALL], Throughput(ops/sec), 4651.162790697675
workloadb: [OVERALL], Throughput(ops/sec), 4739.336492890995
workloadc: [OVERALL], Throughput(ops/sec), 5464.48087431694
workloadf: [OVERALL], Throughput(ops/sec), 4310.3448275862065
workloadd: [OVERALL], Throughput(ops/sec), 4926.108374384236

@isuntsov-gridgain
Contributor

isuntsov-gridgain commented Jun 9, 2018

Redis - done!
Version: 4.0.9
Environment:

  • macOS
  • RAM: 16 GB
  • CPU: 8

YCSB client setup

  • YCSB client executed on the same node

load
[OVERALL], Throughput(ops/sec), 427.53313381787086

run
a [OVERALL], Throughput(ops/sec), 4901.9607843137255
b [OVERALL], Throughput(ops/sec), 4975.124378109453
c [OVERALL], Throughput(ops/sec), 5128.205128205128
d [OVERALL], Throughput(ops/sec), 4739.336492890995
f [OVERALL], Throughput(ops/sec), 3968.253968253968

@busbey
Collaborator Author

busbey commented Jun 12, 2018

in evaluating the RC I found that the big tarball doesn't have a README.md included, filed #1171. It's been broken for ~4 years so I don't think it needs to be fixed for this release. someone shout if they'd prefer otherwise.

@busbey
Collaborator Author

busbey commented Jun 12, 2018

the hbase098 binding instructions refer to a binding name that doesn't exist, rather than to hbase098. Filed #1172. I don't think this should be a blocker.

@busbey
Collaborator Author

busbey commented Jun 13, 2018

tested against HBase 1.2 (CDH 5.14.2) (via hbase098, hbase10, hbase12, hbase14, and hbase20 bindings)

found #1173 "[hbase20] no slf4j implementation; hbase client doesn't log". It doesn't seem like a blocker, since none of the information we miss from logging is critical, and it can be worked around by adding an slf4j logging binding to the classpath, as sketched below.
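
The workaround is just a classpath addition, e.g. (the slf4j-simple jar path is illustrative; table/columnfamily follow the hbase binding READMEs):

  # add an slf4j logging binding so the hbase client's log output shows up
  bin/ycsb run hbase20 -cp /path/to/slf4j-simple-1.7.25.jar -P workloads/workloada -p table=usertable -p columnfamily=family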

@busbey
Collaborator Author

busbey commented Jun 13, 2018

tested against HBase 2.0 (CDH 6.0.0-beta1) (via hbase10, hbase12, hbase14, and hbase20 bindings)

#1173 applies there too.

@busbey
Collaborator Author

busbey commented Jun 13, 2018

@petersomogyi

@busbey: the release notes are great! I have 2 minor comments:

@busbey
Collaborator Author

busbey commented Jun 13, 2018

Thanks @petersomogyi! I've corrected both of those now.

@jojochuang

@busbey thanks a lot for the release notes!
I spotted one typo: doesn't "prior" log --> provide

@busbey
Collaborator Author

busbey commented Jun 13, 2018

good catch @jojochuang! fixed now. thanks.

@busbey
Collaborator Author

busbey commented Jun 13, 2018

Unless anyone else has something they'd like to see changed, I plan to push the release in about an hour, FYI.

@busbey
Collaborator Author

busbey commented Jun 13, 2018

@busbey
Collaborator Author

busbey commented Jun 13, 2018

PR up to point folks to the new release from our landing page: #1174. If someone could take a look, please do.

@busbey
Collaborator Author

busbey commented Jun 26, 2018

Closing this release out. I don't have the time ATM to do release announcements across the various mailing lists. If anyone else wants to pick that up let me know and I'll point you at the template.

@busbey busbey closed this as completed Jun 26, 2018