
[Perf] High Allocator DefaultHttpContext #402

Closed
benaadams opened this issue Sep 14, 2015 · 15 comments

@benaadams
Contributor

For 2000 plaintext requests DefaultHttpContext allocates 12,078 objects:

[image: allocated object counts]

Clocking in at 927,624 bytes:

[image: allocated bytes]

Which means that at 1M rps the GC would have to flush 6,039,000 objects and about 464 MB per second; at 2M rps roughly 927 MB per second; and at 7M rps (~10GbE saturation) about 3.2 GB (42.3M objects) per second, or 192 GB per minute.
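
For clarity, those per-second figures are just a linear extrapolation from the 2000-request sample; a minimal sketch of the arithmetic (illustrative code, not from the PR):

```csharp
using System;

// Back-of-the-envelope scaling from the 2000-request sample above
// (12,078 objects / 927,624 bytes allocated by DefaultHttpContext).
static class AllocScaling
{
    const double SampleRequests = 2000;
    const double SampleObjects  = 12078;
    const double SampleBytes    = 927624;

    static double ObjectsPerSecond(double rps)   => rps * SampleObjects / SampleRequests;
    static double MegabytesPerSecond(double rps) => rps * SampleBytes / SampleRequests / 1e6;

    static void Main()
    {
        foreach (var rps in new[] { 1e6, 2e6, 7e6 })
        {
            Console.WriteLine("{0:N0} rps -> {1:N0} objects/s, {2:N0} MB/s",
                rps, ObjectsPerSecond(rps), MegabytesPerSecond(rps));
        }
    }
}
```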

@rynowak
Member

rynowak commented Sep 14, 2015

@benaadams - images aren't loading. Hmm, looks like it's GitHub's fault.

@benaadams
Contributor Author

@rynowak better? (Have a PR for this)

@rynowak
Member

rynowak commented Sep 14, 2015

Yeah, images are showing up now.

@benaadams
Contributor Author

Have a PR that reduces this to 136 objects:

[image: allocs]

And 20,434 bytes allocated in total for the lifetime of the app, rather than growing per request and needing to be GC'd (the general reuse pattern is sketched below):

[image: allocs]
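
The general idea behind getting allocations down to a fixed, per-application cost is to reuse the per-request objects rather than new them up on every request. A minimal sketch of that pooling pattern, under the assumption that the pooled object can be reset between requests (illustrative, not the actual PR code):

```csharp
using System.Collections.Concurrent;

// Illustrative object pool: per-request objects are rented and returned
// instead of being allocated and collected on every request, so steady-state
// allocations stay flat once the pool has warmed up.
public class ContextPool<T> where T : class, new()
{
    private readonly ConcurrentQueue<T> _pool = new ConcurrentQueue<T>();

    public T Rent()
    {
        T item;
        // Allocate only when the pool is empty (e.g. on the first requests).
        return _pool.TryDequeue(out item) ? item : new T();
    }

    public void Return(T item)
    {
        // Caller is responsible for clearing per-request state before returning.
        _pool.Enqueue(item);
    }
}
```

Whether the real change pools whole contexts, their feature wrappers, or both is a detail of the PR; the above is just the shape of the pattern.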

@benaadams
Contributor Author

Using the Test aspnet site: https://github.com/benaadams/IllyriadGames.Server.Web.River.Test on the server https://www.nuget.org/packages/IllyriadGames.Server.Web.River (x64 coreclr beta8)

Before this change and aspnet/DependencyInjection#289, running a 10 second test gives 311,043 rps:

[image: 10 sec run]

However, the GC load is very high, so running the test for 3 minutes causes the network to periodically flatline as the process goes to 100% CPU in GC, which means it can only do 61,468 rps with very high latencies:

[image: 3 min run]

With these two changes (three PRs), running the same server for 3 minutes gives very consistent network performance, at 1,490,975 rps with much, much lower latency:

[image: 3 min after]

For reference, the raw RIO server on this hardware+network does 1,991,008 rps:

[image: 3 min rio]

And this test is essentially that server plus the aspnet layer; it's still not parsing headers 😄

So with these changes, adding the aspnet layer takes the RIO server from 1.99M rps to a sustained 1.49M rps, which is quite good; and there is still scope to improve further.

Without them it's 1.99M rps down to a sustained 61.5k rps (max 311k rps), which isn't so good.

Kestrel should see a similar performance bump, though I haven't measured how much yet.

@benaadams
Contributor Author

@davidfowl is it ok if I put most of the findings in this issue, or do you want me to split them out with aspnet/DependencyInjection#288?

@benaadams
Contributor Author

Some background: pre-changes it starts fast-ish, but you can see it is at 100% CPU and the network performance deteriorates over time:

[image]

Eventually the network completely flatlines even though the CPU is still at 100%:

[image]

It then recovers, though it again follows the same pattern of deterioration:

[image]

And the results are heavily affected:

[image]

@benaadams
Contributor Author

Post-change there is still some sawtoothing, but it is very consistent and at a much higher throughput:

[image]

As shown in the results:

[image]

@benaadams
Contributor Author

The top numbers were for 2000 requests, not 2400; updated.

@benaadams
Contributor Author

@davidfowl wanted some more details, so the first-level breakdown of bytes is as follows (I will break them down further):

[image: hc-0]

@benaadams
Contributor Author

`contextFactory.CreateHttpContext(features)`

[image: hc1]

[image: hc2]
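
For anyone wanting to roughly reproduce this kind of per-request measurement without a profiler, a crude sketch is below. The screenshots above came from a memory profiler; `CreatePerRequestObjects` here is a hypothetical stand-in for whatever per-request work is being measured (e.g. the context factory call):

```csharp
using System;
using System.Collections.Generic;

// Crude per-request allocation estimate: keep the created objects alive,
// then compare managed heap size before and after the loop. Much coarser
// than a profiler, but enough to see the per-request trend.
static class AllocProbe
{
    static object CreatePerRequestObjects()
    {
        // Hypothetical stand-in for the per-request work being measured,
        // e.g. contextFactory.CreateHttpContext(features).
        return new object();
    }

    static void Main()
    {
        const int requests = 2000;
        var keepAlive = new List<object>(requests); // pre-sized so list growth isn't counted

        long before = GC.GetTotalMemory(forceFullCollection: true);
        for (int i = 0; i < requests; i++)
        {
            keepAlive.Add(CreatePerRequestObjects());
        }
        long after = GC.GetTotalMemory(forceFullCollection: false);

        Console.WriteLine("~{0:N0} bytes per request", (after - before) / (double)requests);
        GC.KeepAlive(keepAlive);
    }
}
```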

@benaadams
Contributor Author

Not addressed in this PR (or the DI one), but for completeness:

`httpContext.ApplicationServices = _applicationServices;`

[image: hc-3]
One path:
[image: hc-4]
[image: hc-5]
Other path (constructor at end of line):
[image: hc-6]
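
As an aside for readers following along: the reason a plain-looking property assignment can show up in an allocation profile at all is that the context's properties are backed by feature objects, and touching one for the first time on a fresh context can lazily allocate the backing feature. A minimal sketch of that pattern, with stand-in names rather than the real beta8 types:

```csharp
using System;

// Illustrative feature-backed property: the first get or set on a fresh
// context allocates the backing feature object, so even an innocuous
// assignment contributes to the per-request allocation count.
public class IllustrativeContext
{
    private ServiceProvidersStandIn _serviceProviders; // stand-in for a service-providers feature

    private ServiceProvidersStandIn ServiceProviders =>
        _serviceProviders ?? (_serviceProviders = new ServiceProvidersStandIn()); // lazy allocation

    public IServiceProvider ApplicationServices
    {
        get { return ServiceProviders.ApplicationServices; }
        set { ServiceProviders.ApplicationServices = value; }
    }
}

public class ServiceProvidersStandIn
{
    public IServiceProvider ApplicationServices { get; set; }
}
```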

@benaadams
Contributor Author

The GC pauses are pretty extreme, so I will wait for this hotfix rollout before testing further (though that should already be in use for dnx beta8+ on coreclr?): http://blogs.msdn.com/b/maoni/archive/2015/08/12/gen2-free-list-changes-in-clr-4-6-gc.aspx

@benaadams
Contributor Author

Update! 1.0.0-rc1-15996

Previously, for 2000 requests:

[image: hc-0]

New, for 2003 requests:

Drill-down in issue aspnet/KestrelHttpServer#288
Measurements post-PR aspnet/KestrelHttpServer#287

[image: hc-new]

So it is only 61% of what it was before, and down 1.5 MB per 2000 requests.

However, CreateContext is still the 4th-highest allocator in the plaintext test:

[image: most-memory]

@benaadams benaadams changed the title Reduce DefaultHttpContext Allocations [Perf] High Allocator DefaultHttpContext Oct 25, 2015
@Tratcher Tratcher added this to the 1.0.0-rc2 milestone Dec 16, 2015
@Tratcher Tratcher added the bug label Dec 16, 2015
@Tratcher Tratcher self-assigned this Dec 16, 2015
@davidfowl
Member

It would be good to do an analysis after the changes.
