
[chore][pkg/stanza/adapter] Add end-to-end benchmark #21929

Merged

Conversation

djaglowski (Member) commented on May 15, 2023:

Follows #21928

Add an end-to-end benchmark where logs pass through the entire package, rather than only through the converter.

  • Logs are emitted by a fake receiver that uses the pkg/stanza framework.
  • Logs are consumed by a consumertest.LogsSink.

This is part of an effort to evaluate whether we actually benefit from non-configurable internals such as the number of workers and the batching logic. (See #21889 and #21184.)
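For illustration, here is a minimal sketch of the shape such an end-to-end benchmark can take. Only `consumertest.LogsSink` and the `plog` APIs are taken from the collector's public packages; the producer goroutine, record counts, and log body are assumptions for this example, and the actual benchmark drives records through the pkg/stanza adapter (workers, batching, flush) rather than writing to the sink directly.

```go
package adapter

import (
	"context"
	"testing"
	"time"

	"go.opentelemetry.io/collector/consumer/consumertest"
	"go.opentelemetry.io/collector/pdata/plog"
)

// BenchmarkEndToEndSketch: a producer emits a fixed number of log records and
// each iteration waits until the sink has received all of them. In the real
// benchmark the records travel through the stanza adapter, which is why
// workers, batch size, and flush interval affect the results.
func BenchmarkEndToEndSketch(b *testing.B) {
	const totalLogs = 100_000 // assumed volume per iteration
	const perPayload = 100    // log records per plog.Logs payload

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		sink := new(consumertest.LogsSink)

		go func() {
			for sent := 0; sent < totalLogs; sent += perPayload {
				ld := plog.NewLogs()
				records := ld.ResourceLogs().AppendEmpty().ScopeLogs().AppendEmpty().LogRecords()
				for j := 0; j < perPayload; j++ {
					records.AppendEmpty().Body().SetStr("benchmark log line")
				}
				_ = sink.ConsumeLogs(context.Background(), ld)
			}
		}()

		// Measure how long it takes for every emitted record to arrive.
		for sink.LogRecordCount() < totalLogs {
			time.Sleep(time.Millisecond)
		}
	}
}
```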

djaglowski force-pushed the stanza-adapter-benchmark-e2e branch 5 times, most recently from 2063a23 to ea35894, on May 15, 2023 18:55
djaglowski marked this pull request as ready for review on May 15, 2023 19:15
djaglowski requested review from a team and Aneurysm9 on May 15, 2023 19:15
djaglowski requested a review from dmitryax on May 15, 2023 19:23
djaglowski force-pushed the stanza-adapter-benchmark-e2e branch from ea35894 to ed585cf on May 15, 2023 21:46
djaglowski (Member, Author) commented:

I've run the benchmarks with a variety of configurations. The following results seem to bound the range within which I've seen the most benefit from tuning these variables.

```
BenchmarkEndToEnd/workers=1,batchSize=1,flushInterval=10ms-10         	       2	 652891104 ns/op	492808056 B/op	 9300927 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=1,flushInterval=100ms-10        	       2	 648173062 ns/op	492808872 B/op	 9300940 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=10,flushInterval=10ms-10        	       3	 398408736 ns/op	398642690 B/op	 7370597 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=10,flushInterval=100ms-10       	       3	 400397264 ns/op	398642528 B/op	 7370564 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=100,flushInterval=10ms-10       	       3	 352169361 ns/op	369336186 B/op	 6601482 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=100,flushInterval=100ms-10      	       3	 342537514 ns/op	369339032 B/op	 6601476 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=1000,flushInterval=10ms-10      	       3	 335120320 ns/op	365945658 B/op	 6511870 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=1000,flushInterval=100ms-10     	       3	 334818833 ns/op	365930714 B/op	 6511763 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=10000,flushInterval=10ms-10     	       3	 334606986 ns/op	366733512 B/op	 6501796 allocs/op
BenchmarkEndToEnd/workers=1,batchSize=10000,flushInterval=100ms-10    	       4	 345800927 ns/op	366729344 B/op	 6501781 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=1,flushInterval=10ms-10         	       2	 636481958 ns/op	492809316 B/op	 9300868 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=1,flushInterval=100ms-10        	       2	 732523458 ns/op	492809212 B/op	 9300893 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=10,flushInterval=10ms-10        	       4	 289654781 ns/op	398645030 B/op	 7370440 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=10,flushInterval=100ms-10       	       4	 295230958 ns/op	398647074 B/op	 7370456 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=100,flushInterval=10ms-10       	       5	 241720100 ns/op	369332686 B/op	 6601412 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=100,flushInterval=100ms-10      	       5	 241196725 ns/op	369331248 B/op	 6601377 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=1000,flushInterval=10ms-10      	       5	 232351683 ns/op	365932681 B/op	 6511709 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=1000,flushInterval=100ms-10     	       5	 236435042 ns/op	365924676 B/op	 6511660 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=10000,flushInterval=10ms-10     	       5	 236552258 ns/op	366726284 B/op	 6501684 allocs/op
BenchmarkEndToEnd/workers=2,batchSize=10000,flushInterval=100ms-10    	       5	 230390875 ns/op	366726432 B/op	 6501694 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=1,flushInterval=10ms-10         	       2	 505388250 ns/op	492803388 B/op	 9300730 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=1,flushInterval=100ms-10        	       2	 511462646 ns/op	492807688 B/op	 9300770 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=10,flushInterval=10ms-10        	       5	 218255208 ns/op	398640275 B/op	 7370367 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=10,flushInterval=100ms-10       	       6	 240545611 ns/op	398639908 B/op	 7370379 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=100,flushInterval=10ms-10       	       6	 180173320 ns/op	369329928 B/op	 6601348 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=100,flushInterval=100ms-10      	       6	 181779250 ns/op	369329554 B/op	 6601328 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=1000,flushInterval=10ms-10      	       6	 181659528 ns/op	365926612 B/op	 6511637 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=1000,flushInterval=100ms-10     	       7	 190581435 ns/op	365923189 B/op	 6511616 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=10000,flushInterval=10ms-10     	       6	 186290556 ns/op	366724128 B/op	 6501650 allocs/op
BenchmarkEndToEnd/workers=4,batchSize=10000,flushInterval=100ms-10    	       6	 185553042 ns/op	366724073 B/op	 6501651 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=1,flushInterval=10ms-10         	       3	 511556875 ns/op	492777816 B/op	 9300715 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=1,flushInterval=100ms-10        	       2	 514551396 ns/op	492783148 B/op	 9300729 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=10,flushInterval=10ms-10        	       5	 225344000 ns/op	398641142 B/op	 7370392 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=10,flushInterval=100ms-10       	       5	 227987925 ns/op	398641204 B/op	 7370390 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=100,flushInterval=10ms-10       	       7	 178152702 ns/op	369332974 B/op	 6601355 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=100,flushInterval=100ms-10      	       7	 166283714 ns/op	369331330 B/op	 6601331 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=1000,flushInterval=10ms-10      	       7	 174814119 ns/op	365933741 B/op	 6511651 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=1000,flushInterval=100ms-10     	       7	 159976482 ns/op	365922590 B/op	 6511598 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=10000,flushInterval=10ms-10     	       7	 177555976 ns/op	366724802 B/op	 6501649 allocs/op
BenchmarkEndToEnd/workers=8,batchSize=10000,flushInterval=100ms-10    	       6	 184725097 ns/op	366725930 B/op	 6501653 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=1,flushInterval=10ms-10        	       2	 541108229 ns/op	492770088 B/op	 9300761 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=1,flushInterval=100ms-10       	       2	 536733104 ns/op	492769496 B/op	 9300747 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=10,flushInterval=10ms-10       	       4	 250019177 ns/op	398642148 B/op	 7370447 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=10,flushInterval=100ms-10      	       5	 234535958 ns/op	398638972 B/op	 7370406 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=100,flushInterval=10ms-10      	       7	 177564667 ns/op	369337018 B/op	 6601403 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=100,flushInterval=100ms-10     	       7	 175405833 ns/op	369331048 B/op	 6601312 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=1000,flushInterval=10ms-10     	       7	 164162625 ns/op	365931966 B/op	 6511652 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=1000,flushInterval=100ms-10    	       7	 173975524 ns/op	365924126 B/op	 6511594 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=10000,flushInterval=10ms-10    	       7	 170219363 ns/op	366726422 B/op	 6501640 allocs/op
BenchmarkEndToEnd/workers=16,batchSize=10000,flushInterval=100ms-10   	       7	 172218786 ns/op	366724099 B/op	 6501623 allocs/op
```
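For reference, here is one way such a sweep can be parameterized with nested sub-benchmarks; this is my own sketch, not the PR's code. The parameter values and sub-benchmark names mirror the result labels above, and runOneConfig is a hypothetical helper standing in for the fake-receiver-to-sink pipeline under a given configuration.

```go
package adapter

import (
	"fmt"
	"testing"
	"time"
)

// Sketch of a benchmark matrix over workers, batch size, and flush interval.
func BenchmarkEndToEndSweep(b *testing.B) {
	for _, workers := range []int{1, 2, 4, 8, 16} {
		for _, batchSize := range []int{1, 10, 100, 1000, 10000} {
			for _, flushInterval := range []time.Duration{10 * time.Millisecond, 100 * time.Millisecond} {
				name := fmt.Sprintf("workers=%d,batchSize=%d,flushInterval=%s", workers, batchSize, flushInterval)
				b.Run(name, func(b *testing.B) {
					for i := 0; i < b.N; i++ {
						// Hypothetical helper: builds the fake receiver with this
						// configuration and runs it to completion against a LogsSink.
						runOneConfig(b, workers, batchSize, flushInterval)
					}
				})
			}
		}
	}
}
```

Running with `go test -bench=BenchmarkEndToEnd -benchmem ./pkg/stanza/adapter/` produces output in the ns/op, B/op, and allocs/op format shown above.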

w/r/t #21889 and #21184, my takeaway from this benchmark is that both the batch size and the number of workers have a meaningful impact on performance. The flush interval does not, but it is a necessary consequence of batching.
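To illustrate that last point, here is a simplified sketch (not the adapter's actual code) of why a flush interval always accompanies batching: once items are accumulated into batches, something must flush a partially filled batch, which is what the timer does.

```go
package adapter

import (
	"time"

	"go.opentelemetry.io/collector/pdata/plog"
)

// batchLoop flushes either when the accumulated batch reaches batchSize
// records or when the flush interval elapses, so a partially filled batch is
// never held indefinitely.
func batchLoop(in <-chan plog.Logs, flush func(plog.Logs), batchSize int, flushInterval time.Duration) {
	batch := plog.NewLogs()
	ticker := time.NewTicker(flushInterval)
	defer ticker.Stop()

	emit := func() {
		if batch.LogRecordCount() == 0 {
			return
		}
		flush(batch)
		batch = plog.NewLogs()
	}

	for {
		select {
		case ld, ok := <-in:
			if !ok {
				emit() // flush whatever is left on shutdown
				return
			}
			ld.ResourceLogs().MoveAndAppendTo(batch.ResourceLogs())
			if batch.LogRecordCount() >= batchSize {
				emit() // size-based flush
			}
		case <-ticker.C:
			emit() // time-based flush for partially filled batches
		}
	}
}
```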

That said, before we add new parameters, I'd like to briefly pursue a couple of potential opportunities for simplifying the codebase, and then confirm the benchmark still indicates the same.

dmitryax (Member) left a comment:

LGTM

Review thread on pkg/stanza/adapter/benchmark_test.go (outdated, resolved)
djaglowski force-pushed the stanza-adapter-benchmark-e2e branch from ed585cf to ff0afcd on May 18, 2023 10:28
djaglowski merged commit 5ed7426 into open-telemetry:main on May 18, 2023
djaglowski deleted the stanza-adapter-benchmark-e2e branch on May 18, 2023 14:39
github-actions bot added this to the next release milestone on May 18, 2023