Performance app API / UI #141
@InoMurko @unnawut @boolafish does it look good?
AFAIK the perf suite has more than one test, so how do you select a test? Is there a test running already? As this is a deployed application, someone could already be using it (or it could still be running a previous test). How do you plan to check the status of a test? The progress of a test?
The name of the test should be passed. It will return a 201 status and the id of the test on success, and 422 or 500 on error.
I didn't think about that. I can check whether a test with the same parameters is already running. I was planning to periodically save the state of the test to a Postgres record.
Okay, as you can see, your initial post does not have all the information. We need a full API spec for the flows we want to support:
- GET: List all tests
- GET: Test status

Now it's your assignment to figure out which flows we need to support; that's why I said it would be useful to talk to @boolafish and @unnawut. I don't mean they should give you a full spec! But ask what the current pipeline looks like and whether they have any special requests. I think it's more or less up to you how you design this, but it's up to us to review it based on what you think we should do. Once you have these APIs and flows, it's easier to imagine how the frontend should look and what backend you need to build.
Let's have a discussion call for this. I would actually like to sync up from my side as well.
A few topics from my side:
I think I've put all the requirements in the parent issue.
Ah, okay. Add a comment there to link back.
- Test modules

Currently, I'm using the following:

```elixir
def run(session) do
  tps = config(session, [:run_config, :tps])
  period_in_seconds = config(session, [:run_config, :period_in_seconds])

  total_number_of_transactions = tps * period_in_seconds
  period_in_mseconds = period_in_seconds * 1_000

  session
  |> cc_spread(
    :create_deposit,
    total_number_of_transactions,
    period_in_mseconds
  )
  |> await_all(:create_deposit)
end
```

I took a look at the implementation. The test module will accept the following params:
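For illustration, based on the `run_config` keys read in the snippet above, the run params could be a map along these lines (the exact shape is an assumption, only the key names come from the snippet):

```elixir
# tps and period_in_seconds are the keys read via config/2 in the
# run/1 snippet above; the map form itself is hypothetical.
run_config = %{
  tps: 100,
  period_in_seconds: 3_600
}
```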
Another idea: instead of test run params, we could hardcode test environments in the application (for example, circle ci: geth url, contracts, tps; staging: ....) and only pass the environment to the test.

- Monitoring

The status of the run will be persisted in 2 places:
- UI

Currently, I have in mind five pages:
I think it may be possible to use LiveView to load new test runs as they appear in the DB, but that's just a UX issue and I don't think it has high priority.
It will show all the fields from the Postgres table. The monitoring process will broadcast an event when dumping test data to the DB, so LiveView can update this page in real time.
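For the real-time update, a Phoenix.PubSub round trip could look roughly like this (the PubSub server name, topic, and payload shape are all assumptions for illustration, not the app's actual code):

```elixir
# Monitoring process: after dumping a snapshot of the run to Postgres,
# broadcast it on a per-run topic (names assumed).
Phoenix.PubSub.broadcast(Perf.PubSub, "test_run:#{run_id}", {:run_updated, snapshot})

# LiveView for the run page: subscribe on mount, re-assign on each event.
def mount(%{"id" => run_id}, _session, socket) do
  Phoenix.PubSub.subscribe(Perf.PubSub, "test_run:#{run_id}")
  {:ok, socket}
end

def handle_info({:run_updated, snapshot}, socket) do
  {:noreply, assign(socket, :snapshot, snapshot)}
end
```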
The form will contain:
- API

If I understand correctly, the API will be used only to trigger test runs from circle ci, so I suggest adding only two endpoints:
parameters: test configuration (a map with test-specific params)
result: 201 status on success, 422 or 500 status on error
At any point in time, only one test with the same test key can be running; duplicate runs will be cancelled. On test finish, it can send a message to Slack about the test.
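The 201/422 contract described above can be sketched as a plain function, independent of any web framework (the module name, param keys, and the set of known test keys are all hypothetical):

```elixir
defmodule Perf.TestRunner do
  @moduledoc "Sketch of the create-run contract: created on valid params, error otherwise."

  # Hypothetical set of known test keys.
  @known_tests [:deposits, :transactions]

  def create_run(%{test: test, tps: tps, period_in_seconds: period})
      when test in @known_tests and is_integer(tps) and tps > 0 and
             is_integer(period) and period > 0 do
    # The real app would insert a Postgres record and start the run;
    # here we just fabricate an id.
    {:created, :erlang.unique_integer([:positive])}
  end

  def create_run(_params), do: {:error, :unprocessable_entity}
end
```

A controller would then map `{:created, id}` to a 201 response carrying the id, `{:error, :unprocessable_entity}` to 422, and any crash to 500.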
returns

- Security

I think JWT tokens can be generated for every API user of the application. As for the UI pages, I think a hardcoded login/password will suffice?
From the description, it seems the approach there and the current approach implemented for the childchain transaction test are not too different. One uses concurrent sessions and one uses concurrent tasks with idle time to achieve the target TPS. Sessions, of course, have higher overhead. But I believe our current approach does not need to spawn that many tasks/processes, since it only needs one per concurrent session and then relies on iteration/looping. Though I am not sure whether there is (too much of) a performance implication or not. At the same time, another concern I have is about hardware with multiple test triggers at the same time. If multiple tests are running on the same hardware, they will compete for each other's resources.
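The iteration-based alternative described here (one process per concurrent session, looping with idle time to hold the target TPS) can be sketched as follows; the module and function names are hypothetical:

```elixir
defmodule RateLoop do
  # Executes `fun` repeatedly for `duration_ms`, aiming for roughly
  # `tps` calls per second from this single process.
  def run(fun, tps, duration_ms) do
    interval_ms = div(1_000, tps)
    deadline = System.monotonic_time(:millisecond) + duration_ms
    loop(fun, interval_ms, deadline)
  end

  defp loop(fun, interval_ms, deadline) do
    if System.monotonic_time(:millisecond) < deadline do
      started = System.monotonic_time(:millisecond)
      fun.()
      elapsed = System.monotonic_time(:millisecond) - started
      # Idle for whatever is left of this slot before the next call.
      Process.sleep(max(interval_ms - elapsed, 0))
      loop(fun, interval_ms, deadline)
    else
      :ok
    end
  end
end
```

One such process per concurrent session keeps the process count fixed, at the cost of drifting below the target rate when `fun` is slower than the slot.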
@boolafish I created a small library that uses a reasonable number of processes: https://github.com/ayrat555/hornet. Each process periodically executes the given function, and the number of processes is increased only if the required rate cannot be achieved.
What is the purpose of hornet long term? Is it to replace chaperon? |
I think chaperon cannot be used for long-running performance tests because it spawns too many processes. I wasn't planning to replace chaperon with it. I'm only planning to use hornet for the performance app issue, which needs a reasonable number of processes because I think there will be multiple simultaneous test runs. I will leave the current tests as they are. Or should they be re-written?
It's up to you. I just thought that perhaps doing a perf framework is a bit too much (not sure how it integrates with the current tests?) and that chaperon could be adjusted. |
@ayrat555 Do you mind giving a brief introduction to what magic your lib is doing when scheduling the workers? Just curious; have we verified that:
I was experimenting with it today. If the number is too big (6,000-10,000 TPS over 24h), it just fails; in other cases, memory usage increases over time.
My library starts a fixed number of processes which execute the function periodically. Over time it adjusts the number of processes to keep up with the given rate. For example, if you want to execute a function at a rate of 5 operations per second, you can do it by setting the params: start_period: 200, tps: 5, func: func
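Using the params listed above, an invocation would presumably look something like this (the entry-point function name and exact option handling are assumptions about the library, so check hornet's README for the real API):

```elixir
# Execute `func` at a sustained 5 ops/sec; hornet starts with a fixed
# pool of processes and grows it only if the target rate cannot be met.
func = fn -> IO.puts("tick") end

Hornet.start(start_period: 200, tps: 5, func: func)
```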
After some discussion in Slack (https://omgnetworkhq.slack.com/archives/CV8DJPZ9V/p1602144266169700), my thoughts:
Part of #59
WIP PR omgnetwork/elixir-omg#1745 (adds deposit tests)
This issue is open for discussions.
The API I have in mind: generic run_params and test-specific chain_params. Also, I'm planning to add a UI which will include: