
Real world benchmarks #44

Closed
dzmitry-lahoda opened this issue Nov 8, 2018 · 4 comments

@dzmitry-lahoda (Contributor) commented Nov 8, 2018

Goals:

  1. Avoid real performance regressions.

  2. Automate testing without false positives.

  3. Allow experimenting with improvements against DryIoc itself.

  4. Provide other containers with an example of such testing, and help containers live longer against pure functional composition.

  5. Allow reasonably measuring the implementation of https://bitbucket.org/dadhi/dryioc/issues/197.

Means:

  1. Several scenarios (web site, web server, desktop, mobile, CLI, networking server, database, nano-services (actors)) [2]. Try to check the Java world for documented cases. Document each case and the reasoning for its object graph.
  2. Generate all classes via T4 templates (.tt), not manual coding.
  3. Reference several versions of the DI container plus the latest one in the csproj, e.g. major or specified versions downloaded from NuGet.
  4. Run BenchmarkDotNet (BDN) to get structured output against each chosen previous version (a minimal sketch follows this list).
  5. Apply proper statistical comparison measures to avoid false negatives caused by fluctuations (need to recall a CS article I have seen about that and act accordingly). Stats on {moment0, moment1, moment2} * {gc, mem, time, cpu} * {workload1, workload2, ..., workloadX}. Possibly prune outliers and rerun on failure (see the comparison sketch below).
  6. Set up and document each assertion and its reasoning, so it is easy to tune.
  7. Run the tests on several machines/VMs/while gaming, to ensure the stats/comparisons are done right.
  8. Use complex container features that are not available in test suites covering many containers.
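For step 4, a minimal BenchmarkDotNet sketch could look like the following. This is an illustration, not this repo's code; the Service type and its registration are hypothetical placeholders.

```csharp
// A minimal BDN sketch, assuming BenchmarkDotNet and the DryIoc v4 API.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using DryIoc;

[MemoryDiagnoser] // reports allocations and GC counts alongside timings
public class ResolveBenchmark
{
    private IContainer _container;

    [GlobalSetup]
    public void Setup()
    {
        _container = new Container();
        _container.Register<Service>(); // placeholder registration
    }

    [Benchmark]
    public Service Resolve() => _container.Resolve<Service>();
}

public class Service { }

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ResolveBenchmark>();
}
```

For the comparison in step 5, one possible measure (my assumption, not necessarily the one from the article mentioned above) is Welch's t-statistic over the two versions' samples, so a regression is flagged only when the difference exceeds run-to-run fluctuation:

```csharp
using System;
using System.Linq;

public static class Stats
{
    // Welch's t-statistic for two independent samples with unequal variances.
    // A large |t| suggests a real difference rather than noise.
    public static double WelchT(double[] baseline, double[] candidate)
    {
        double meanA = baseline.Average(), meanB = candidate.Average();
        double varA = SampleVariance(baseline, meanA);
        double varB = SampleVariance(candidate, meanB);
        return (meanB - meanA) /
               Math.Sqrt(varA / baseline.Length + varB / candidate.Length);
    }

    private static double SampleVariance(double[] xs, double mean) =>
        xs.Sum(x => (x - mean) * (x - mean)) / (xs.Length - 1);
}
```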

Will not:

  1. Store and compare historical data.
  2. Test other containers.
  3. Run real code (host HTTP or read from storage).

Links:

#27

danielpalme/IocPerformance#103

@dadhi (Owner) commented Nov 30, 2018

Linking #45

@dadhi self-assigned this Feb 10, 2019
@dadhi added this to the 4.0.0 milestone Feb 10, 2019
@dadhi (Owner) commented Feb 10, 2019

WIP... search for the word "realistic" in the code base.

dadhi added a commit that referenced this issue Feb 18, 2019
@dadhi (Owner) commented Feb 22, 2019

I have added the "realistic" benchmark for Unit-of-work with ~40 registrations of different types; some of them implement IDisposable, so the dispose time is counted as well (a minimal sketch follows the list below).

  • The SUT
  • The test for DryIoc only
  • The benchmark with MS.DI, DryIoc, DryIoc.MS.DI, Grace, Grace.MS.DI, Autofac, Autofac.MS.DI
  • The same benchmark as above, but against DryIoc v3 and DryIoc.MS.DI 2.1
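A minimal sketch of the Unit-of-work shape, assuming the DryIoc v4 API; the types below are illustrative, not the actual SUT with its ~40 registrations:

```csharp
using System;
using DryIoc;

public interface IRepository : IDisposable { }

public class Repository : IRepository
{
    public void Dispose() { /* release the underlying resource */ }
}

public class UnitOfWork : IDisposable
{
    private readonly IRepository _repo;
    public UnitOfWork(IRepository repo) => _repo = repo;
    public void Dispose() { /* commit or roll back */ }
}

public static class Demo
{
    public static void Main()
    {
        var container = new Container();
        container.Register<IRepository, Repository>(Reuse.Scoped);
        container.Register<UnitOfWork>(Reuse.Scoped);

        // The measured body: open a scope, resolve the root, then dispose
        // the scope, which also disposes the tracked IDisposable services.
        using (var scope = container.OpenScope())
        {
            var uow = scope.Resolve<UnitOfWork>();
        }
    }
}
```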

@dadhi added the "enhancement" (New feature or request) and "infrastructure" (simplifies solution development) labels Feb 22, 2019
dadhi added a commit that referenced this issue Jul 11, 2019
@dadhi closed this as completed Jul 15, 2019
@dadhi (Owner) commented Jul 15, 2019

Look at #139 for the LoadTest project added by @Havunen.
