Performance tests #145
First of all, the work being done here should focus exclusively on providing performance tests for the childchain (2), priority-wise. We can shift our focus to the Watcher APIs when needed (and once this is done). For now I'll focus exclusively on the two sections that test the childchain.
I will give my feedback on each section:

`LoadTest.Runner.WatcherInfoAccountApi`
Suggested behaviour
This test was created to test the performance of the watcher-info account-related APIs. But, as noted, no assertions were made at the time. Personally I am interested in assertions on API latency (server side) and error rate (probably both client and server?) for all APIs under test. The test sends transactions from the account to itself in order to increase the DB load on that same account, so we can probably see performance deviation as more and more data is bound to the same account. Since your suggested behaviour mentions deposit and transfer tests, isn't that covered by your current WIP PR?

`LoadTest.Runner.UtxosLoad`
Suggested behaviour
I am actually confused. The reason for the two parts, I think, is to fund the account so the later part can iterate (if I recall correctly).
I would like to add assertions on our childchain submit-transaction API, since this is our core function. As above, server latency and error rate metrics are definitely worth asserting.

`LoadTest.Runner.StandardExits`
Suggested behaviour
Unlike the others, we are probably not too interested in API performance for this one. This test was created to ensure our system can handle some load when concurrent exits happen (we had an incident previously: some exits happened while Infura had an issue, and the system crashed). So probably what you're planning to do is good enough? Or we can fetch some generic system-monitoring metrics to ensure nothing breaks.

`LoadTest.Runner.Smoke`
Expected behaviour
I was just about to agree with you, but then I saw that your PR implementation would be really protective during a perf run and not fail fast. The smoke test exists not to test anything, but to ensure the connection is configured correctly in the perf configs so we don't waste time running the long-running ones. But I don't mind removing this and seeing how much of an issue it is.

`LoadTest.Runner.ChildChainTransactions`
Expected behaviour
Actually I have forgotten the reasoning for having two similar ones. They are slightly different in how UTXOs are used, but what was the main goal? cc @kevsul, do you still remember?
This was introduced because there was an incident where our system failed when many exits occurred. We actually don't need too many, as the last incident was 14 exits 😅, so I recall the conclusion in the release acceptance was that something like 15 might be good enough lol
This test goes directly to the childchain, with no watcher in the middle, I believe. It chains the UTXOs.
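To illustrate what chaining means here: each transaction spends the output of the previous one, so the run exercises one long chain of UTXOs rather than many independent ones. A minimal sketch in Python (the types, fee handling, and function names are hypothetical, not the actual test code):

```python
# Hypothetical sketch of UTXO chaining in a load test: every transaction
# spends the output of the previous one. Stands in for submitting
# transactions to the childchain; names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Utxo:
    owner: str
    amount: int


def make_tx(utxo: Utxo, fee: int = 1) -> Utxo:
    """Spend `utxo` back to the same owner, paying a fee, and return
    the resulting output UTXO."""
    if utxo.amount <= fee:
        raise ValueError("UTXO too small to pay the fee")
    return Utxo(owner=utxo.owner, amount=utxo.amount - fee)


def chain_utxos(funding: Utxo, n_txs: int) -> Utxo:
    """Submit `n_txs` transactions, each spending the previous output."""
    utxo = funding
    for _ in range(n_txs):
        utxo = make_tx(utxo)
    return utxo


# 100 chained transactions at 1 unit fee each leave 900 units.
final = chain_utxos(Utxo(owner="alice", amount=1000), 100)
assert final.amount == 900
```

The point of the chain is that every transaction depends on the previous one being accepted, which stresses sequential throughput rather than parallel submission.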
Yeah, the performance tests all have different objectives:
In general the purpose of these tests is to test how the system performs under load. "The system" includes both the child chain and the watcher, but I agree that we should focus on the child chain first.
I think there are module tests and integration tests. Based on your comments I think I can implement a test based on
Also, as @boolafish suggested in #149 (comment), I'll add the required metrics to
@InoMurko @boolafish, are you ok with this?
Yeah, that's true.
Probably also check the feedback (Slack) from the infra team to see if there are other metrics to add to the tests: https://omgnetworkhq.slack.com/archives/CMX790Q5V/p1603252908142800?thread_ts=1603249243.137000&channel=CMX790Q5V&message_ts=1603252908.142800
Absolutely. Stuffing this thing into containers and running them in parallel should be one of our goals, but it is also important that once an instance finishes, it asserts that what it has been doing resulted in the correct ending balance. We need to be sure that our accounting works as designed even when our stack is under high load. I would also insist that when the tests are finished, we are able to (for the purpose of the test, for now) export a report from the childchain with all the columns needed to assert that our accounting finished with ✔️.
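The ending-balance check described above can be sketched as a reconciliation: replay the operations the load test performed against the starting balances and compare against what the childchain reports. Shown in Python with an illustrative operation format; none of these names come from the actual suite:

```python
# Hypothetical sketch of a post-run accounting check: apply every
# recorded transfer to the initial balances and assert the reported
# ending balances match exactly. Operation format is illustrative.
from collections import defaultdict


def expected_balances(initial, operations):
    """Apply (sender, receiver, amount, fee) transfers to `initial`."""
    balances = defaultdict(int, initial)
    for sender, receiver, amount, fee in operations:
        balances[sender] -= amount + fee
        balances[receiver] += amount
    return dict(balances)


def assert_accounting(initial, operations, reported):
    """Fail with the full list of mismatched accounts, if any."""
    expected = expected_balances(initial, operations)
    mismatches = {
        account: (expected.get(account, 0), reported.get(account, 0))
        for account in set(expected) | set(reported)
        if expected.get(account, 0) != reported.get(account, 0)
    }
    assert not mismatches, f"accounting mismatch: {mismatches}"


ops = [("alice", "bob", 100, 1), ("bob", "alice", 50, 1)]
start = {"alice": 1000, "bob": 500}
# alice: 1000 - 101 + 50 = 949; bob: 500 + 100 - 51 = 549
assert_accounting(start, ops, {"alice": 949, "bob": 549})
```

Exporting a per-account report from the childchain and feeding it in as `reported` would give exactly the ✔️/✖️ table suggested above.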
Related PRs in elixir-omg:
Done, thanks @ayrat555 |
Subtask of #59 and #141
`LoadTest.Runner.WatcherInfoAccountApi`
Current behaviour
It creates and funds a single account on the childchain and then measures API responses multiple times for:
No assertions are made during this test.
Suggested behaviour
Split this test into two separate tests:
- `/account.get_balance`
- `/account.get_utxos`
- `/transaction.create`
- `/transaction.create`
`LoadTest.Runner.UtxosLoad`
Current behaviour
It has two scenarios:
Suggested behaviour
It can be separated into two tests:
As you can see, in this case I'm suggesting only to add assertions.
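The kind of assertions discussed in this thread (server-side latency and error-rate thresholds over a load run) can be sketched as follows. Shown in Python for brevity even though the suite is Elixir; the result shape, percentile choice, and thresholds are all illustrative assumptions, not the suite's actual API:

```python
# Hypothetical sketch: after a load run, assert p99 latency and the
# error rate stay under configured thresholds. `results` is a list of
# (status_code, latency_ms) pairs; all values here are illustrative.
import math


def p99(latencies_ms):
    """Nearest-rank 99th percentile."""
    ordered = sorted(latencies_ms)
    index = math.ceil(0.99 * len(ordered)) - 1
    return ordered[index]


def assert_load_run(results, max_p99_ms=500, max_error_rate=0.01):
    latencies = [latency for _, latency in results]
    errors = sum(1 for status, _ in results if status >= 500)
    error_rate = errors / len(results)
    assert p99(latencies) <= max_p99_ms, f"p99 {p99(latencies)}ms too high"
    assert error_rate <= max_error_rate, f"error rate {error_rate:.2%} too high"


# 100 fast successes pass; adding two slow server errors trips both checks.
ok = [(200, 50)] * 100
assert_load_run(ok)
failed = False
try:
    assert_load_run(ok + [(500, 2000), (500, 2000)])
except AssertionError:
    failed = True
assert failed
```

The same thresholds could be checked per endpoint rather than per run, which would localize a regression to a specific API.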
`LoadTest.Runner.StandardExits`
Current behaviour
Suggested behaviour
`LoadTest.Runner.Smoke`
Current behaviour
Expected behaviour
I don't think it has to be ported (I don't think it's needed).
`LoadTest.Runner.ChildChainTransactions`
Current behaviour
Expected behaviour
It seems this test is already part of `LoadTest.Runner.UtxosLoad`. For now, I won't delete the existing tests; I will just create new ones that can be run from the command line using `mix run`. You can check an example of such a test in omgnetwork/elixir-omg#1745.