Speed Up JSON filtering on large objects (AzDO) #3702
Comments
OK, so I've written some performance tests. The durations listed are, in order: the original code, using struct JSON, and pre-stripping the jobs + struct. Three models were tested: parent template, demand style, and base style. TL;DR: pre-stripping and struct decoding combined give the best performance on large queues with templates; on small queues there is no effect (sub 50 µs). Data @ 1000-length finished job queues, for the parent, demand, and base models (results table omitted).
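For readers following along, here is a minimal sketch of what the two decoding strategies compared above might look like. The type and function names (`jobRequest`, `countPendingGeneric`, `countPendingTyped`) and the JSON field names are assumptions for illustration, not KEDA's actual code; "pre-stripping" is read here as declaring only the fields the scaler inspects so the decoder discards everything else.

```go
package azpipelines

import "encoding/json"

// jobRequest declares only the fields the scaler inspects; encoding/json
// silently skips everything else in the payload. Field names are assumed.
type jobRequest struct {
	RequestID int64    `json:"requestId"`
	Result    *string  `json:"result"` // nil while the job is still queued or running
	Demands   []string `json:"demands"`
}

type jobRequestsResponse struct {
	Count int          `json:"count"`
	Value []jobRequest `json:"value"`
}

// countPendingGeneric mimics the original style: decode into a generic map
// and walk the untyped values.
func countPendingGeneric(body []byte) (int, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(body, &doc); err != nil {
		return 0, err
	}
	n := 0
	values, _ := doc["value"].([]interface{})
	for _, v := range values {
		if job, ok := v.(map[string]interface{}); ok && job["result"] == nil {
			n++
		}
	}
	return n, nil
}

// countPendingTyped is the "struct JSON" variant: decode straight into typed
// structs and filter the slice.
func countPendingTyped(body []byte) (int, error) {
	var resp jobRequestsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return 0, err
	}
	n := 0
	for _, job := range resp.Value {
		if job.Result == nil {
			n++
		}
	}
	return n, nil
}
```

The typed path avoids building an `interface{}` tree for every field in the payload, which is presumably where most of the time goes on large responses.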
Sorry for the slow response. Anyway, if the required changes don't make the code unreadable, I think every performance improvement is worth it. My concern is that if we are changing this to perform better, we should measure the performance with the scaler code directly. |
Are you willing to contribute this? |
Sure. I extracted the scaler code (azure_pipelines_scaler) into another project, replaced the loader with a JSON file, and ran the scaler against it. I'll commit my code to my fork so you can take a look. I actually think it becomes more readable :) |
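As a rough idea of the harness described above (not the actual fork), a benchmark in the same hypothetical package could load a captured job-requests response from a fixture file and time the counting function directly; the fixture path and `countPendingTyped` are placeholders:

```go
package azpipelines

import (
	"os"
	"testing"
)

// BenchmarkCountPendingTyped measures the parsing/filtering logic in isolation,
// with the HTTP loader replaced by a JSON fixture on disk. The fixture path is
// a placeholder for whatever captured AzDO response is used.
func BenchmarkCountPendingTyped(b *testing.B) {
	body, err := os.ReadFile("testdata/job_requests_1000.json")
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := countPendingTyped(body); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running it with `go test -bench=. -benchmem` also reports allocations, which is where the struct-based variant should differ most from the generic one.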
True, it's much cleaner :) |
BTW, thanks a ton for the in-depth research 🙇 |
No worries, I'll give it a final tidy and raise a PR. |
* Improve AzDO profiling speed of queues (#3702) Signed-off-by: mortx <[email protected]>
Proposal
Improve the speed of JSON object searching by using filtering instead of loop iteration, particularly around:
https://github.com/kedacore/keda/blob/main/pkg/scalers/azure_pipelines_scaler.go#L260
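To make the proposal concrete, here is a hedged sketch of how the demand check could look once the response is decoded into typed structs (building on the hypothetical `jobRequest` type sketched earlier in this thread). The function names and the "demands satisfied" semantics are assumptions for illustration, not the scaler's actual behaviour around the linked line.

```go
package azpipelines

// demandsSatisfied reports whether a job's demands cover every required
// demand. With typed []string slices this is a simple set lookup instead of
// walking nested map[string]interface{} values.
func demandsSatisfied(jobDemands, required []string) bool {
	have := make(map[string]struct{}, len(jobDemands))
	for _, d := range jobDemands {
		have[d] = struct{}{}
	}
	for _, r := range required {
		if _, ok := have[r]; !ok {
			return false
		}
	}
	return true
}

// countMatchingDemands filters the decoded job list down to pending jobs
// whose demands satisfy the scaler's requirements.
func countMatchingDemands(jobs []jobRequest, required []string) int {
	n := 0
	for _, job := range jobs {
		if job.Result == nil && demandsSatisfied(job.Demands, required) {
			n++
		}
	}
	return n
}
```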
Use-Case
Where queue lengths can grow consistently, the KEDA scaler slows down and scaling events may fire later than desired. Speeding up the JSON filtering means that, in this use case, agents would spin up more responsively.
Anything else?
No response