
[Blazor] Use JSON source generator during WebAssembly startup #54956

Merged · 20 commits · Apr 19, 2024

Conversation

MackinnonBuck
Member

Updates Blazor WebAssembly startup to utilize the WebAssembly source generator as much as possible.

Opening as a draft for now, as there's room for further optimization.

@dotnet-issue-labeler dotnet-issue-labeler bot added the area-blazor Includes: Blazor, Razor Components label Apr 4, 2024
@MackinnonBuck
Member Author

Notes for reviewers

There were two approaches prototyped in this PR, and they're roughly equivalent in terms of performance:

  1. [Initial approach] Add new APIs to IJSRuntime and PersistentComponentState that accept an IJsonTypeInfoResolver argument. Use internal JsonSerializerContexts for serialization of internal data types.
  2. [Currently implemented] Add a new ConfigureComponentsJsonOptions extension method on IServiceCollection that lets developers add IJsonTypeInfoResolvers to a shared JsonSerializerOptions. Use the shared JsonSerializerOptions as much as possible internally.

Performance testing was done locally, and I tested the following scenarios using the Components.TestServer project in this repo (using the WebAssembly bits from Components.WasmMinimal):

  • Running a published build with profiling disabled (the scenario we most care about)
  • Running a published build with profiling enabled (adds significant overhead, but gives insight into where the time goes)
  • Running an app in debug mode (affects developer experience)
Here are some before/after measurements of main-thread blocking time in each of the above scenarios:
  • Scenario 1 (published)
    • Performance tab:
      • Before: ~375ms
      • After: ~300ms
      • ~20% improvement
    • Lighthouse:
      • Before: 1,750ms
      • After: 1,300ms
      • ~25% improvement
  • Scenario 2 (published + profiler enabled, performance tab)
    • Before: ~2,600ms
    • After: ~1,100ms
    • ~58% improvement
      • (this number was very deceiving at first!)
  • Scenario 3: (debug build, performance tab)
    • Before: ~560ms
    • After: ~400ms
    • ~29% improvement

As stated earlier, I haven't found a statistically significant difference in perf between approaches 1 and 2, but the best measurements I achieved were using approach 1. I still plan to test these changes on larger apps soon to see if there are cases where one approach stands out over the other.

Here is a less quantitative analysis of the pros and cons of each approach:

Option 1. New IJSRuntime APIs

One important detail with how System.Text.Json works is that the metadata it discovers about types is cached in JsonSerializerOptions instances. This metadata comes in the form of JsonTypeInfo, which describes information about how a type should be serialized. Different properties of JsonSerializerOptions might alter how a JsonTypeInfo for a given type gets constructed, such as the set of registered JsonConverters, the IncludeFields property, etc.

Instances of JsonTypeInfo get constructed by IJsonTypeInfoResolvers, whose inputs are the Type to construct a JsonTypeInfo from, and the JsonSerializerOptions to configure certain aspects of the construction of the JsonTypeInfo. A JsonSerializerOptions can include one or more IJsonTypeInfoResolvers as an alternative to the default, reflection-based one. This is how the STJ source generator works - an IJsonTypeInfoResolver gets created with compile-time information about a developer-specified set of serializable types.
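To make the mechanism above concrete, here's a minimal, self-contained sketch of the general STJ pattern (not code from this PR): a source-generated JsonSerializerContext acts as an IJsonTypeInfoResolver, and a JsonSerializerOptions can chain it with the reflection-based fallback. MyDto and MyJsonContext are illustrative names:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Text.Json.Serialization.Metadata;

// Chain resolvers: the first one to return a non-null JsonTypeInfo for a
// requested type wins; the reflection-based resolver acts as a fallback.
var options = new JsonSerializerOptions
{
    TypeInfoResolver = JsonTypeInfoResolver.Combine(
        MyJsonContext.Default,              // compile-time generated metadata
        new DefaultJsonTypeInfoResolver())  // reflection-based fallback
};

string json = JsonSerializer.Serialize(new MyDto("example", 1), options);

public sealed record MyDto(string Name, int Count);

// The source generator emits the JsonTypeInfo construction logic for the
// annotated types, so no reflection is needed for them at runtime.
[JsonSerializable(typeof(MyDto))]
internal sealed partial class MyJsonContext : JsonSerializerContext;
```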

The JSRuntime abstract class defines its own JsonSerializerOptions that are configured to correctly map .NET values to JS and vice versa:

  • Property names use camel casing during serialization and are case-insensitive when deserializing
  • Custom converters are registered to support framework types (DotNetObjectReference<T>, IJSObjectReference, etc.)

With that context in mind, this approach involves adding a new argument of type IJsonTypeInfoResolver to each variant of IJSRuntime.InvokeAsync<T>(...). A developer can pass in their own source-generated IJsonTypeInfoResolver to reduce the amount of work it takes to create JsonTypeInfo instances for the argument and return value types. However, this comes with a serious limitation: The set of IJsonTypeInfoResolvers specified on a JsonSerializerOptions is immutable after the JsonSerializerOptions are first used. This is because mutating the IJsonTypeInfoResolver chain dynamically could invalidate cached JsonTypeInfos. Therefore, we need to create a new JsonSerializerOptions for each unique IJsonTypeInfoResolver passed to IJSRuntime.InvokeAsync(). This involves maintaining our own internal cache of JsonSerializerOptions so that we don't reinstantiate JsonTypeInfos when an IJsonTypeInfoResolver is used more than once.
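A hypothetical sketch of what that per-resolver cache might look like (this is not the shipped API; all type and member names here are illustrative):

```csharp
using System.Collections.Concurrent;
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;
using System.Threading.Tasks;

public abstract class JSRuntimeSketch
{
    // The runtime's default options: camelCase names, framework converters, etc.
    protected abstract JsonSerializerOptions DefaultJsonSerializerOptions { get; }

    // One JsonSerializerOptions per unique resolver, so cached JsonTypeInfo
    // instances are reused when the same resolver is passed repeatedly.
    private readonly ConcurrentDictionary<IJsonTypeInfoResolver, JsonSerializerOptions> _optionsCache = new();

    public ValueTask<TValue> InvokeAsync<TValue>(
        string identifier, IJsonTypeInfoResolver resolver, params object?[]? args)
    {
        var options = _optionsCache.GetOrAdd(resolver, r =>
        {
            // Clone the defaults, then put the caller's resolver first in the chain.
            var cloned = new JsonSerializerOptions(DefaultJsonSerializerOptions);
            cloned.TypeInfoResolverChain.Insert(0, r);
            return cloned;
        });
        return InvokeAsyncCore<TValue>(identifier, options, args);
    }

    protected abstract ValueTask<TValue> InvokeAsyncCore<TValue>(
        string identifier, JsonSerializerOptions options, object?[]? args);
}
```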

We can also take a similar approach in other parts of the framework that have a public API accepting serializable types (e.g., persistent component state). Internal JSON serialization can just directly use internally-generated JsonSerializerContexts.

Here are some pros and cons of this approach:

Pros

  • Relatively straightforward to implement
  • Minimal impact on Blazor internals
  • Library authors can pass in new IJsonTypeInfoResolver instances on the fly without requiring the app developer to manually register them before the first JSInterop invocation
  • By having a separate JsonSerializerOptions for each IJsonTypeInfoResolver, we know that there aren't going to be extraneous resolvers registered by someone else (another library, etc.) that unexpectedly resolve a JsonTypeInfo for a type

Cons

  • Having a separate JsonSerializerOptions instance for each provided IJsonTypeInfoResolver means identical JsonTypeInfo instances might get created multiple times (there isn't a global cache in the form of a global JsonSerializerOptions)
    • For example: framework types like IJSObjectReference may need to have their JsonTypeInfo computed multiple times
      • We might be able to work around this by doing a trick where the last IJsonTypeInfoResolver in the chain looks up the JsonTypeInfo from a shared JsonSerializerOptions that we know is equivalent other than its TypeInfoResolverChain. That way you can effectively borrow from a shared cache. Not sure how heretical that is, though.
  • It doubles the API surface of IJSRuntime, unless we're picky about which overloads and extension methods accept a JsonSerializerOptions
    • We could get clever and create an API like IJSRuntime.WithJsonTypeInfoResolver(jsonTypeInfoResolver), that returns a wrapper IJSRuntime that you can then invoke all the existing methods on. Not sure whether that's better, though.

Option 2. A new IServiceCollection.ConfigureComponentsJsonOptions(...) extension method

This approach tries to solve the first "con" of the previous approach by having a global JsonSerializerOptions, which has the potential to be more efficient because it provides a unified cache for JsonTypeInfo instances. I was motivated to try this approach after an offline discussion with @eiriktsarpalis. This is similar to another approach already taken in another area of ASP.NET Core.

Developers can mutate the shared JsonSerializerOptions by inserting their own IJsonTypeInfoResolver:

Services.ConfigureComponentsJsonOptions(jsonOptions =>
{
    jsonOptions.SerializerOptions.TypeInfoResolverChain.Insert(0, MyJsonSerializerContext.Default);
});

Of course, developers would also be able to mutate properties of the JsonSerializerOptions other than just the TypeInfoResolverChain.

Blazor's internal implementation also uses the global JsonSerializerOptions when possible, so that the shared JsonTypeInfo cache is used to its maximum extent.

One problem I see with the approach as described is that there are lots of unrelated parts of the framework that use JSON serialization. Many of those parts have made assumptions about their JsonSerializerOptions being configured a certain way (especially JSRuntime). We can still have separate JsonSerializerOptions for those cases and just borrow the shared TypeInfoResolver chain (and that's what happens currently in the JSRuntime case), but that reduces the advantage of a globally-configurable JsonSerializerOptions.

We could work around this by not having a global "Components" JsonSerializerOptions, but instead have a way to configure the JsonSerializerOptions for different features separately:

Services.ConfigureJSInteropJsonOptions(jsonOptions => { ... });
Services.ConfigurePersistentComponentStateJsonOptions(jsonOptions => { ... });
Services.ConfigureProtectedBrowserStorageJsonOptions(jsonOptions => { ... });
// ...

The granularity of these APIs is a bit subjective though, and we can't really expand the "scope" of each of these options without it being a breaking change. Maybe we would initially restrict these APIs to scenarios that affect WebAssembly startup perf (JS interop and persistent component state), and then we stick to separate JsonSerializerOptions instances for the internal implementation.

Pros

  • Utilizes a shared JsonSerializerOptions when possible, which can be potentially more efficient than having numerous JsonSerializerOptions

Cons

  • Requires a larger refactor of Blazor's internals, and some assumptions we made previously now get broken
    • e.g., DefaultWebAssemblyJSRuntime.Instance can't be used until the WebAssemblyHost gets built, because we need to wait for the fully-configured JSON options
  • Having a single, shared TypeInfoResolverChain has some disadvantages:
    • Since existing scenarios need to keep working, we have to put a DefaultJsonTypeInfoResolver at the end of the chain, and this sometimes causes more work to happen than is necessary
      • For example, this may result in resolving a JsonTypeInfo for all the interfaces that int implements. Presumably, this happens because JSRuntime APIs box all the arguments as objects. I've implemented a hack that works around this, but there might be unexpected implications of doing that
      • Whereas, in approach 1, we could make it so that if the developer provides their own IJsonTypeInfoResolver, it's expected to handle all argument types and the return value. If it doesn't, that's an error.
    • For WebAssembly startup time, the time it takes to initially resolve a JsonTypeInfo matters. A longer TypeInfoResolverChain does appear to have a slight but measurable impact on that time.
    • Libraries wanting to utilize the STJ source generator would require some gesture from the user to register their IJsonTypeInfoResolvers eagerly, which can't really be done in a non-breaking way unless we added another feature that automatically discovers those
    • Libraries can't anticipate what other resolvers are in the chain, and this might cause their own IJsonTypeInfoResolver to not get invoked
  • There are opportunities for the customer to configure the shared JsonSerializerOptions in ways that the framework doesn't expect

Conclusion

I would propose we do one of the following:

  • Approach 1, as described
  • Approach 2, but limited so that we only expose APIs to configure the JsonSerializerOptions for JS interop and persistent component state. We also don't make an attempt to change Blazor's internals to all share a single JsonSerializerOptions
    • This could be an acceptable middle ground between approaches 1 and 2 in that it uses a shared JsonSerializerOptions on a per-feature basis rather than a per-app basis

Feedback would be much appreciated!

@SteveSandersonMS
Member

SteveSandersonMS commented Apr 16, 2024

Regarding the DI-container-level ConfigureComponentsJsonOptions: if this means what I think, then that's something we've strived to avoid since the beginning because it's ecosystem-breaking. Random 3rd-party packages may try to set their own serialization mechanisms for shared types in this common location, and will stomp on each other, so adding any new DI service might randomly break other services you're already using. And the breakages will be very subtle/internal/unfixable, like getting "cannot read property of null" errors coming back from internal JS interop calls inside 3rd-party components.

So we've said for a while that the best we could do is something like:

  • Add overloads to IJSRuntime.InvokeAsync etc that accept JsonSerializerOptions, i.e., your option 1
  • Or maybe have something like jsRuntime.CreateRuntimeWithOptions(options) that returns a new IJSRuntime that defaults to your preferred options but without affecting other IJSRuntime instances. Then people could register that as a keyed DI service if they want, without breaking others, i.e., something you also mentioned in your proposal for option 1.

If I'm understanding your options correctly, this means I'd be keener on option 1.

@eiriktsarpalis
Member

With that context in mind, this approach involves adding a new argument of type IJsonTypeInfoResolver to each variant of IJSRuntime.InvokeAsync<T>(...).

Why not just have the method accept JsonSerializerOptions as an argument? This removes any caching concerns on the implementation side since a JSO instance should encapsulate all you need to drive serialization. This is a common pattern followed in other components as well, notably System.Net.Http.Json.

If the concern is preventing the user from passing a JSO that doesn't specify camel case conversion you might want to consider either adding validation that throws if the options contains unsupported configuration or just accept as-is assuming the user knows what they are doing.
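The validation idea suggested above might look roughly like this (a sketch, assuming the requirements are the camelCase/case-insensitive configuration described earlier in this thread; ValidateJsonOptions is a hypothetical helper):

```csharp
using System;
using System.Text.Json;

internal static class JsonOptionsValidation
{
    // Throws if a user-supplied JsonSerializerOptions doesn't match the
    // naming configuration JS interop relies on.
    public static void ValidateJsonOptions(JsonSerializerOptions options)
    {
        if (options.PropertyNamingPolicy != JsonNamingPolicy.CamelCase ||
            !options.PropertyNameCaseInsensitive)
        {
            throw new ArgumentException(
                "JS interop requires camelCase property names during serialization " +
                "and case-insensitive property matching during deserialization.",
                nameof(options));
        }
    }
}
```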


[JsonSerializable(typeof(ComponentParameter[]))]
[JsonSerializable(typeof(JsonElement))]
[JsonSerializable(typeof(IList<object>))]
Member

Is this going to work when we have to deserialize the concrete parameters?

Member

You'd need to additionally register the types you expect object to be. Given that @MackinnonBuck is planning on adding the reflection-based resolver as a fallback it would work even if the list isn't exhaustive, but making sure the 80% is handled by the source generator should contribute to improved startup times.

Member Author

Since this JsonSerializerContext is only used during deserialization, we happen to know that the object in IList<object> will either be a JsonElement or null. Why we didn't initially choose to deserialize a JsonElement[] directly, I'm not sure.

Member

Why not call Deserialize<IList<JsonElement>> (or Deserialize<IList<JsonElement?>> if null is valid) and register [JsonSerializable(typeof(IList<JsonElement?>))] here?

@javiercn
Member

With that context in mind, this approach involves adding a new argument of type IJsonTypeInfoResolver to each variant of IJSRuntime.InvokeAsync<T>(...).

Why not just have the method accept JsonSerializerOptions as an argument? This removes any caching concerns on the implementation side since a JSO instance should encapsulate all you need to drive serialization. This is a common pattern followed in other components as well, notably System.Net.Http.Json.

If the concern is preventing the user from passing a JSO that doesn't specify camel case conversion you might want to consider either adding validation that throws if the options contains unsupported configuration or just accept as-is assuming the user knows what they are doing.

This likely requires us to inject our own "internal" converters into the options the user provides, doesn't it? If they pass their own options, those will only contain the converters in their SG context, and will fail to deserialize framework-specific types like DotNetObjectReference and so on.

@javiercn
Member

Regarding the DI-container-level ConfigureComponentsJsonOptions: if this means what I think, then that's something we've strived to avoid since the beginning because it's ecosystem-breaking. Random 3rd-party packages may try to set their own serialization mechanisms for shared types in this common location, and will stomp on each other, so adding any new DI service might randomly break other services you're already using. And the breakages will be very subtle/internal/unfixable, like getting "cannot read property of null" errors coming back from internal JS interop calls inside 3rd-party components.

So we've said for a while that the best we could do is something like:

  • Add overloads to IJSRuntime.InvokeAsync etc that accept JsonSerializerOptions, i.e., your option 1
  • Or maybe have something like jsRuntime.CreateRuntimeWithOptions(options) that returns a new IJSRuntime that defaults to your preferred options but without affecting other IJSRuntime instances. Then people could register that as a keyed DI service if they want, without breaking others, i.e., something you also mentioned in your proposal for option 1.

If I'm understanding your options correctly, this means I'd be keener on option 1.

Steve has beautifully raised the concerns that I was going to raise. I don't like the idea of a shared value because it makes it hard for libraries to cooperate (easy to stomp on each other) and we don't have a defined mechanism to support collaboration.

I am also equally worried about libraries/code breaking in unexpected ways that are hard to diagnose and troubleshoot.

I would say that passing options to the relevant overloads (with a way for having configured those options to include our types before anything else on the resolver chain) seems the most desirable approach, and to only add those overloads in locations where we accept user defined types.

Keep in mind that a lot of the code internal in blazor must be in sync with the relevant JS, and that will cause issues. The other thing that gives me pause is that this is going to introduce a new class of bugs.

For example, if a type is configured to serialize an object graph, there's no equivalent functionality on the JS side, and it's not going to be obvious why that isn't working (people will claim it's a bug on our side).

If our main goal is to reduce startup time (as opposed to adding a new extensibility point), how much would it cost to use the source generator internally instead of exposing it publicly? (How much gain would that get us?)

I'm concerned about enabling all the JSON serializer options because we don't have the JS side equivalent to support its features, and if we allow passing the options we can't restrict which ones customers will use.

@eiriktsarpalis
Member

eiriktsarpalis commented Apr 16, 2024

This likely requires us injecting our own "internal" converters into the options the user provides, isn't it?

If it's always the case that the serialized value encapsulates internal values, then yes. If you're just looking to serialize a user-defined TValue it's not essential. Perhaps it could be possible to configure user-provided options with your internal converters, or expose JSO factories returning the built-in converters that users can modify to their liking.

@MackinnonBuck
Member Author

Thanks for the feedback, everyone - it sounds like a variation of option 1 is what we're leaning toward, but one that accepts a JsonSerializerOptions directly rather than accepting an IJsonTypeInfoResolver and constructing a new JsonSerializerOptions under the hood.

We might even be able to directly expose a method on IJSRuntime that returns a copy of JsonSerializerOptions that developers can mutate however they want. So the pattern would be something like:

var jsonOptions = jsRuntime.CopyJsonSerializerOptions();
jsonOptions.TypeInfoResolver = MyJsonSerializerContext.Default;

// Could do this...
await jsRuntime.InvokeVoidAsync(jsonOptions, "myJSFunction", arg1, arg2, ...);

// or...
var myJSRuntime = jsRuntime.WithJsonSerializerOptions(jsonOptions);
await myJSRuntime.InvokeVoidAsync("myJSFunction", arg1, arg2, ...);

@MackinnonBuck
Member Author

MackinnonBuck commented Apr 17, 2024

Marking as ready for review, but I will add tests soon to verify these changes.

I took the following approach, based on feedback:

  • Add an IJSRuntime.CloneJsonSerializerOptions() API so that developers can create their own JsonSerializerOptions based on the JSRuntime's defaults.
  • Add overloads of IJSRuntime.InvokeAsync<T>(...) that accept a JsonSerializerOptions.
  • Don't add any new APIs to PersistentComponentState, but instead pass in already-serialized byte arrays if a JsonSerializerContext should be used
    • This approach wasn't as feasible with JS interop because it wouldn't support framework types like DotNetObjectReference<T>.
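The "already-serialized byte arrays" idea from the last bullet can be illustrated with a minimal sketch (hypothetical type names; the exact internal API isn't shown in this thread):

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// The caller performs source-generated serialization up front...
byte[] payload = JsonSerializer.SerializeToUtf8Bytes(
    new CounterState(42), StateJsonContext.Default.CounterState);

// ...so the persistence layer only needs to store an opaque byte[] and never
// invokes the reflection-based serializer for CounterState.

internal sealed record CounterState(int Count);

[JsonSerializable(typeof(CounterState))]
internal sealed partial class StateJsonContext : JsonSerializerContext;
```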

I'll send an API review email after this PR gets approved.

@MackinnonBuck MackinnonBuck marked this pull request as ready for review April 17, 2024 00:31
@@ -7,6 +7,8 @@
<GenerateDocumentationFile>true</GenerateDocumentationFile>
<RootNamespace>Microsoft.AspNetCore.Components</RootNamespace>
<Nullable>enable</Nullable>
<!-- SYSLIB0020: JsonSerializerOptions.IgnoreNullValues is obsolete -->
<NoWarn>$(NoWarn);SYSLIB0020</NoWarn>
Member

Why does this warning happen? I can't see any usage of IgnoreNullValues in this PR.

Member Author

This warning was coming from generated code, but it looks like that's not happening anymore for some reason. I've just removed this.

[JsonSerializable(typeof(int))]
[JsonSerializable(typeof(Dictionary<string, JSComponentConfigurationStore.JSComponentParameter[]>))]
[JsonSerializable(typeof(Dictionary<string, List<string>>))]
internal sealed partial class WebRendererSerializerContext : JsonSerializerContext;
Member

I'm curious about how you think we should maintain this in the future. How would a developer know when/what to add to this list of types? Am I right to think that if they start serializing something different and fail to add it here, everything would still work but would just be slower? Or would some error occur so they know to change this code?

Member

@SteveSandersonMS SteveSandersonMS Apr 18, 2024

Along the same lines, how did you even know to include these specific types? I'm hoping it's because if you don't, there's a clear error indicating they need to be included!

Member Author

How would a developer know when/what to add to this list of types?

These types match the types of the arguments passed to the JS interop call. Failing to do this results in a runtime error. However, I've just pushed a change that makes it even clearer where these types come from, so hopefully that eliminates any confusion. If we end up reverting that change, I can add a comment explaining how these types should be specified.


// Required for DefaultWebAssemblyJSRuntime
[JsonSerializable(typeof(RootComponentOperationBatch))]
internal sealed partial class WebAssemblyJsonSerializerContext : JsonSerializerContext;
Member

Same maintainability question as for WebRendererSerializerContext

Member Author

/// </param>
/// <param name="args">JSON-serializable arguments.</param>
/// <returns>An instance of <typeparamref name="TValue"/> obtained by JSON-deserializing the return value.</returns>
ValueTask<TValue> InvokeAsync<[DynamicallyAccessedMembers(JsonSerialized)] TValue>(string identifier, JsonSerializerOptions options, CancellationToken cancellationToken, object?[]? args)
Member

Are we doing anything to warn people if they try to use a JsonSerializerOptions that isn't configured with the necessary converters for Blazor's internal functionality, e.g., ElementReference, JSObjectReference, etc? It won't be obvious that you can't just create your own JsonSerializerOptions from scratch and use it here.

If I spot this elsewhere in the code I'll come back and remove this question!


namespace Microsoft.JSInterop;

internal sealed class JSInteropTask<TResult> : IJSInteropTask
Member

I guess this means we're doing a few extra allocations per JS interop call, right?

Not suggesting that's a flaw in the design, just want to check my assumptions. If this is a requirement to work properly with AOT and gives people to improve the perf further using source-generated serializers, that would outweigh a few allocations.

@SteveSandersonMS
Member

SteveSandersonMS commented Apr 18, 2024

For example, if your argument type has a DotNetObjectReference property, then code will get generated to produce a JsonTypeInfo for both DotNetObjectReference<> and SomeType

Yeah that does seem problematic. The whole point of using DotNetObjectReference<T> is often because the value you're communicating can't be serialized in any sensible way, perhaps because it's a really deep object graph, maybe with references to things like DbSet<...>. Including a serializer for T in this case seems like exactly what you don't want to do.

So my preference at the moment would be to keep the JsonSerializerOptions overload, but of course I'm open to having my mind changed :)

On the grounds that we're not trying to solve all possible problems here, and that people have been asking for a variant that accepts a JsonSerializerOptions for years (e.g., to change null/undefined handling rules), I'd be totally happy to stick with the approach you have in this PR. We would retain the option to add overloads with JsonTypeInfo<T> in the future.

@SteveSandersonMS
Member

From my point of view this looks good. I posted a couple of questions but expect they are not hard to resolve.

I would mark it as "approved" now but since there are still active lines of questioning from @eiriktsarpalis and @halter73 I'll let you complete those discussions first in case it leads to any broader design changes.

Thanks for being so thorough and careful in the design here. It's always delicate changing something so intrinsic to performance when there are complex back-compat and ecosystem concerns. I think you've made good decisions to reach this point.

@eiriktsarpalis
Member

Have we verified going from object -> byte[] -> string -> byte[] -> object is faster than...

With profiling enabled, it's about 60% faster.

I doubt that this change would contribute to any speedups, the transcoding implementation is shared between reflection and source gen. If anything, this should be contributing to slow-downs since you're now allocating an intermediate byte[] whereas the string transcoder uses pooled buffers. The 60% improvement is most likely attributable to serialization using fast-path mode.

private DefaultWebAssemblyJSRuntime()
{
ElementReferenceContext = new WebElementReferenceContext(this);
JsonSerializerOptions.Converters.Add(new ElementReferenceJsonConverter(ElementReferenceContext));
JsonSerializerOptions.MakeReadOnly(populateMissingResolver: JsonSerializer.IsReflectionEnabledByDefault);
Member

Why are we passing JsonSerializer.IsReflectionEnabledByDefault?

We should always have reflection enabled by default. That's not something the framework will work without, so it's best if it can't be configured.

Member

This property reflects the value of the JsonSerializerIsReflectionEnabledByDefault feature switch/property (which is controlled among other things by the value of PublishTrimmed).

I think JsonSerializerOptions.MakeReadOnly(populateMissingResolver: true); should suffice here.

Comment on lines 40 to 42
ValueTask<TValue> InvokeAsync<[DynamicallyAccessedMembers(JsonSerialized)] TValue>(string identifier, JsonSerializerOptions options, object?[]? args)
=> throw new InvalidOperationException($"Supplying a custom {nameof(JsonSerializerOptions)} is not supported by the current JS runtime");

Member

@javiercn javiercn Apr 18, 2024

It was my understanding that we don't want to do this at all.

Is this required to improve startup perf? How much improvement do we lose if we don't have it?

Adding support for passing custom JsonSerializer options to JSInvoke method calls broadens the scope of this feature quite a lot and opens us up to a lot of cases where things can go wrong in subtle and hard to debug ways.

For that reason, I would prefer if we stuck to making whatever functionality we need internal and maintain our current policy that this part of the system is not configurable, and that if you want to perform custom serialization you can perform the serialization yourself and pass in a byte array.

I also don't want us to live in a world where this is supported on some platforms and not on others. Every time we introduce something like that, it's a headache for customers.

Member

@SteveSandersonMS SteveSandersonMS Apr 18, 2024

My interpretation was that adding this to IJSRuntime is necessary for the layering to continue to make sense. I was assuming it's not possible to restrict this to DefaultWebAssemblyJSRuntime since the relevant calls come from other layers, but if that's incorrect and it can in fact be restricted that way it's worth considering.

I also don't want us to live in a world where this is supported on some platforms and not on others. Every time we introduce something like that, it's a headache for customers.

This wouldn't be an issue in practice. This PR adds the support to the underlying JSRuntime base class which we inherit in all the supported platforms. It would impact any 3rd-party IJSRuntime implementations, but that's realistically only in tests.

  • In unit tests, the new calls would not be exercised unless people start using the new APIs, so it's not an issue.
  • In integration tests like bUnit, I'm unsure whether the new calls would be exercised, but I think bUnit would want to add support for this anyway.

Member

@javiercn javiercn left a comment


After taking another look, here is my high-level feedback.

  • Unless it is strictly necessary to improve startup performance, we should avoid adding public APIs to JS interop that take in custom JSON options.
  • The JS interop APIs have been "mostly if not totally agnostic" of System.Text.Json and that's an invariant we shouldn't break.
  • I don't want us to have to deal with all the things that can go wrong when people use their own JS interop options. Every time we perform JS interop there is another side of the coin, the JavaScript side, which does not offer any customization.

Using the APIs ourselves internally is reasonable; exposing them to customers is a much bigger change that drastically increases its impact, and I'm concerned about all the things that can go wrong that we don't know about (and won't be able to take back).

So, I think we should consider scoping this down even further to rely on the SG context only for framework and internal calls unless strictly necessary otherwise, and in that case, I believe we need to include an explicit call-out for the set of features that will be supported and those that won't. Otherwise, it'll be confusing when people configure types to serialize graphs etc. and don't get the expected results.

A way to phrase this could be "JS interop only supports serializing and deserializing types using default serialization settings from System.Text.Json".

Finally, if we were to do this, rather than passing the settings via the calls, could we do it via DI by exposing a generic type parameter? Something like JSRuntime<TOptions> where TOptions : ..., and then a helper method on DI that registers a set of options and resolves an instance configured with that particular set of options.

That way we avoid having the options on the main method APIs and we still allow libraries and other parts of the system to isolate their usage of the options from the system.

(We would register something like services.AddScoped(typeof(IJSRuntime<>), typeof(WebAssemblyJSRuntime<>)), and then services.RegisterJSInterop<MyOptions>() would add MyOptions to DI.)
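A rough sketch of the registration shape described in this comment could look like the following. Everything here is hypothetical: IJSInteropOptions, IJSRuntime<TOptions>, WebAssemblyJSRuntime<TOptions>, and RegisterJSInterop are illustrative names from this proposal, not existing framework APIs.

```csharp
using System.Text.Json;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.JSInterop;

// Hypothetical: a marker type that carries a particular set of serializer options.
public interface IJSInteropOptions
{
    JsonSerializerOptions Value { get; }
}

// Hypothetical: an IJSRuntime flavor bound to a specific options type.
public interface IJSRuntime<TOptions> : IJSRuntime
    where TOptions : IJSInteropOptions
{
}

// Hypothetical implementation stub; a real one would forward calls to the
// platform's JS runtime using the resolved options.
public class WebAssemblyJSRuntime<TOptions> : IJSRuntime<TOptions>
    where TOptions : IJSInteropOptions
{
    private readonly JsonSerializerOptions _options;

    public WebAssemblyJSRuntime(TOptions options) => _options = options.Value;

    public ValueTask<TValue> InvokeAsync<TValue>(string identifier, object?[]? args)
        => throw new NotImplementedException();

    public ValueTask<TValue> InvokeAsync<TValue>(string identifier, CancellationToken cancellationToken, object?[]? args)
        => throw new NotImplementedException();
}

public static class JSInteropServiceCollectionExtensions
{
    // Hypothetical helper: registers the caller's options type plus the open
    // generic runtime, so resolving IJSRuntime<MyOptions> yields a runtime
    // configured with MyOptions, leaving the main IJSRuntime registration alone.
    public static IServiceCollection RegisterJSInterop<TOptions>(this IServiceCollection services)
        where TOptions : class, IJSInteropOptions
    {
        services.AddSingleton<TOptions>();
        services.AddScoped(typeof(IJSRuntime<>), typeof(WebAssemblyJSRuntime<>));
        return services;
    }
}
```

This keeps the options off the main method APIs while still letting libraries isolate their own serializer configuration, which is the design goal stated above.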

@MackinnonBuck
Copy link
Member Author

MackinnonBuck commented Apr 18, 2024

I appreciate all the thoughtful feedback that everyone's put into this!

It's clear that there's still a lot of uncertainty around what the final API should look like, so I've reduced this PR to a much more minimal one that specifically targets WebAssembly. I also removed the DefaultAntiforgeryStateProvider optimization, since the benefit was only a few milliseconds and likely not worth the added complexity.

The only new API is now an IInternalWebJSInProcessRuntime interface that lets WebRenderer bypass the JSRuntime's JSON serialization. While it's public, it's only meant for internal framework use.

We could always revisit adding new JS interop APIs in the future, and now we have the context of this thread to pull from when landing on the final design.
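For reference, the interface mentioned above would be shaped roughly like the sketch below (paraphrased from the discussion; treat the exact signature as approximate). The point is that the caller hands over arguments that are already serialized to JSON, so the WebRenderer can serialize with the source-generated context itself instead of going through the JSRuntime's reflection-based serializer.

```csharp
// Public only for layering reasons and annotated [EditorBrowsable(Never)]
// in the PR; intended strictly for internal framework use.
public interface IInternalWebJSInProcessRuntime
{
    // argsJson is pre-serialized by the caller, bypassing the
    // JSRuntime's own JSON serialization for this one call.
    string InvokeJS(string identifier, string? argsJson, JSCallResultType resultType, long targetInstanceId);
}
```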

It wasn't clear that this led to a real perf benefit in optimized builds
```csharp
// JsonTypeInfo for all the types referenced by both DotNetObjectReference<T> and its
// generic type argument.

var newJsonOptions = new JsonSerializerOptions(jsonOptions);
```
Member Author


I had another commit that eliminated this extra JsonSerializerOptions and serialized each argument individually, passing a JsonTypeInfo for the argument directly and avoiding serializing an object[]. There appeared to be a perf benefit when running in the profiler, but it seemed to slightly degrade perf on optimized builds (though it was hard to tell), so I reverted it and went with the simpler approach.
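The reverted per-argument variant presumably looked something like this sketch (a hypothetical reconstruction, not the actual commit): each argument gets serialized with its own statically known JsonTypeInfo, so no object[] is allocated and no merged JsonSerializerOptions is needed.

```csharp
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;

// Hypothetical reconstruction: build the JSON argument array by hand,
// serializing each argument with its own source-generated JsonTypeInfo.
static string SerializeArgs<T1, T2>(
    T1 arg1, JsonTypeInfo<T1> typeInfo1,
    T2 arg2, JsonTypeInfo<T2> typeInfo2)
{
    var builder = new StringBuilder("[");
    builder.Append(JsonSerializer.Serialize(arg1, typeInfo1));
    builder.Append(',');
    builder.Append(JsonSerializer.Serialize(arg2, typeInfo2));
    builder.Append(']');
    return builder.ToString();
}
```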

@MackinnonBuck MackinnonBuck requested review from wtgodbe and a team as code owners April 19, 2024 00:20
```csharp
newJsonOptions.TypeInfoResolverChain.Add(WebRendererSerializerContext.Default);
newJsonOptions.TypeInfoResolverChain.Add(JsonConverterFactoryTypeInfoResolver<DotNetObjectReference<WebRendererInteropMethods>>.Instance);
var argsJson = JsonSerializer.Serialize(args, newJsonOptions);
inProcessRuntime.InvokeJS(JSMethodIdentifier, argsJson, JSCallResultType.JSVoidResult, 0);
```
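For anyone unfamiliar with the "SG" context being debated here: a source-generated context like WebRendererSerializerContext is declared along these lines (an illustrative sketch; the real context's serializable type list may differ). The generator emits JsonTypeInfo metadata at compile time, which is what lets startup avoid reflection-based serialization.

```csharp
using System.Text.Json.Serialization;

// Illustrative sketch of a source-generated context. Each [JsonSerializable]
// attribute tells the generator which JsonTypeInfo to emit at compile time.
[JsonSerializable(typeof(object[]))]
[JsonSerializable(typeof(int))]
[JsonSerializable(typeof(string))]
internal sealed partial class WebRendererSerializerContext : JsonSerializerContext
{
}
```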
Member


If we were to avoid using the SG for this call, how big is the performance penalty?

This is a single call per app, and we are creating a bunch of interfaces to support it. Might not be worth the extra infrastructure just for this.

Alternatively, the renderer could expose a virtual AttachWebRenderer interop method that WebAssembly could override. If that were to be the case then we only expose a single method, not an extra interface. The differences would be that the method is likely less accessible and doesn't sit on the JSRuntime.

Either way it's not a big deal IMO. If there isn't a big penalty for making this call through regular means, I would do that. If we benefit significantly, I would keep it and probably switch to using the virtual method instead of the extra interface, but it's not a big deal either way.

Member


> If we were to avoid using the SG for this call, how big is the performance penalty?
>
> This is a single call per app

Assuming that the goal of this change is to improve startup costs, it shouldn't matter how many times a particular call is made.

Member


It's about the trade-off of having an extra interface and coupling vs. saving a couple of ms.

Member


Saving a couple of ms sounds like a desirable goal, assuming it's a cost incurred by default for every WASM app.

Member


That couple-of-ms happens client-side as part of a several-hundred-ms startup process. It doesn't impact server CPU usage and hence mostly matters only to the extent that humans can notice it. So realistically it wouldn't be top of the queue of things for us to optimize.

Member Author


> If we were to avoid using the SG for this call, how big is the performance penalty?

Locally, I just measured about an 80ms difference on published builds. Not sure exactly how much that difference changes on slower machines, but it seems significant. To put that number into perspective, total startup blocking time with that optimization is (locally) ~330ms.

> Alternatively, the renderer could expose a virtual AttachWebRenderer interop method that WebAssembly could override.

We could; it would just mean that if we wanted to do a similar optimization in another area of the framework, we'd have to add additional specialized APIs (I count 9 other calls to InvokeVoidAsync in Components.Web, but those just happen not to occur during startup).

Is it fine if we proceed with this for now and discuss it further in API review? I think it's an early enough preview where we could change the API later if needed, especially since it's annotated as [EditorBrowsable(Never)].

Member


Fair enough, I leave it up to you.

Member

@javiercn javiercn left a comment


Looks great, I have a minimal remaining comment but it's pretty much ready to go.

Member

@SteveSandersonMS SteveSandersonMS left a comment


Excellent - thanks for being so flexible and yet precise about all this.

@MackinnonBuck MackinnonBuck merged commit c9af79a into main Apr 19, 2024
26 checks passed
@MackinnonBuck MackinnonBuck deleted the mbuck/improve-wasm-perf branch April 19, 2024 16:34
@dotnet-policy-service dotnet-policy-service bot added this to the 9.0-preview4 milestone Apr 19, 2024