[Blazor] Use JSON source generator during WebAssembly startup #54956
Conversation
src/JSInterop/Microsoft.JSInterop/src/Infrastructure/TaskGenericsUtil.cs (outdated review thread, resolved)
Notes for reviewers

There were two approaches prototyped in this PR, and they're roughly equivalent in terms of performance:
Performance testing was done locally, and I tested the following scenarios using the
Here are some before/after measurements for main thread blocking time in each of the above scenarios.
As stated earlier, I haven't found a statistically significant difference in perf between approaches 1 and 2, but the best measurements I achieved were using approach 1. I still plan to test these changes on larger apps soon to see if there are cases where one approach stands out over the other. Here is a less quantitative analysis of the pros and cons of each approach:
Option 1. New
Regarding the DI-container-level approach: we've said for a while that the best we could do is something like:
If I'm understanding your options correctly, this means I'd be keener on option 1.
Why not just have the method accept JsonSerializerOptions? If the concern is preventing the user from passing a JSO that doesn't specify camel case conversion, you might want to consider either adding validation that throws if the options contain unsupported configuration, or just accepting them as-is, assuming the user knows what they are doing.
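For illustration, a minimal sketch of what such validation might look like (the JsInteropOptionsValidation class and its Validate method are hypothetical, not part of this PR):

using System;
using System.Text.Json;

internal static class JsInteropOptionsValidation
{
    // Rejects options whose naming policy would produce payloads the JavaScript side can't read.
    public static void Validate(JsonSerializerOptions options)
    {
        if (options.PropertyNamingPolicy != JsonNamingPolicy.CamelCase)
        {
            throw new InvalidOperationException(
                "JS interop payloads must use camelCase property names; " +
                "set PropertyNamingPolicy = JsonNamingPolicy.CamelCase.");
        }
    }
}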
src/Components/Web/src/Infrastructure/JsonOptionsServiceCollectionExtensions.cs (outdated review thread, resolved)
src/Components/WebAssembly/WebAssembly/src/Hosting/WebAssemblyHostBuilder.cs (outdated review thread, resolved)
[JsonSerializable(typeof(ComponentParameter[]))]
[JsonSerializable(typeof(JsonElement))]
[JsonSerializable(typeof(IList<object>))]
Is this going to work when we have to deserialize the concrete parameters?
You'd need to additionally register the types you expect object to be. Given that @MackinnonBuck is planning on adding the reflection-based resolver as a fallback, it would work even if the list isn't exhaustive, but making sure the 80% is handled by the source generator should contribute to improved startup times.
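As a rough sketch of that setup (ParametersSerializerContext and CreateOptions are hypothetical names, and the registered types are only examples): register the concrete types you expect, then append the reflection-based resolver as a fallback.

using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Text.Json.Serialization.Metadata;

// Register the concrete types you expect `object` to resolve to at runtime.
[JsonSerializable(typeof(IList<object>))]
[JsonSerializable(typeof(string))]
[JsonSerializable(typeof(int))]
[JsonSerializable(typeof(bool))]
internal sealed partial class ParametersSerializerContext : JsonSerializerContext;

internal static class ParametersSerialization
{
    public static JsonSerializerOptions CreateOptions()
    {
        var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

        // Source-generated metadata handles the common ("80%") cases without reflection...
        options.TypeInfoResolverChain.Add(ParametersSerializerContext.Default);

        // ...and the reflection-based resolver catches any type missing from the list above.
        options.TypeInfoResolverChain.Add(new DefaultJsonTypeInfoResolver());

        return options;
    }
}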
Since this JsonSerializerContext is only used during deserialization, we happen to know that the object in IList<object> will either be a JsonElement or null. Why we didn't initially choose to deserialize a JsonElement[] directly, I'm not sure.
Why not call Deserialize<IList<JsonElement>> (or Deserialize<IList<JsonElement?>> if null is valid) and register [JsonSerializable(typeof(IList<JsonElement?>))] here?
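For concreteness, a small sketch of that suggestion (ParameterValuesSerializerContext and Parse are hypothetical names):

using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Registering the closed generic type lets the source generator emit its metadata up front.
[JsonSerializable(typeof(IList<JsonElement?>))]
internal sealed partial class ParameterValuesSerializerContext : JsonSerializerContext;

internal static class ParameterValuesExample
{
    // Deserializing IList<JsonElement?> directly avoids going through `object` while still
    // allowing null entries.
    public static IList<JsonElement?>? Parse(string json) =>
        (IList<JsonElement?>?)JsonSerializer.Deserialize(
            json, typeof(IList<JsonElement?>), ParameterValuesSerializerContext.Default);
}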
This likely requires us to inject our own "internal" converters into the options the user provides, doesn't it? If they pass their own options, those will only contain the converters in their SG context and will fail to deserialize framework-specific types.
Steve has beautifully raised the concerns that I was going to raise. I don't like the idea of a shared value because it makes it hard for libraries to cooperate (it's easy to stomp on each other) and we don't have a defined mechanism to support collaboration. I am equally worried about libraries/code breaking in unexpected ways that are hard to diagnose and troubleshoot.

I would say that passing options to the relevant overloads (with a way of configuring those options to include our types before anything else on the resolver chain) seems the most desirable approach, and that we should only add those overloads in locations where we accept user-defined types. Keep in mind that a lot of the internal code in Blazor must be in sync with the relevant JS, and that will cause issues.

The other thing that gives me pause is that this is going to introduce a new class of bugs. For example, if a type is configured to serialize an object graph, there's no equivalent functionality on the JS side, and it's not going to be obvious why that isn't working (people will claim it's a bug on our side).

If our main goal is to reduce startup time (as opposed to adding a new extensibility point), how much would it cost to use the source generator internally instead of exposing it publicly? How much gain would that get us?

I'm concerned about enabling all the JSON serializer options because we don't have the JS-side equivalent to support their features, and if we allow passing the options we can't restrict which ones customers will use.
If it's always the case that the serialized value encapsulates internal values, then yes. If you're just looking to serialize a user-defined
Thanks for the feedback, everyone - it sounds like a variation of option 1 is what we're leaning toward, but one that accepts a JsonSerializerOptions. We might even be able to directly expose a method on IJSRuntime for this:

var jsonOptions = jsRuntime.CopyJsonSerializerOptions();
jsonOptions.TypeInfoResolver = MyJsonSerializerContext.Default;

// Could do this...
await jsRuntime.InvokeVoidAsync(jsonOptions, "myJSFunction", arg1, arg2, ...);

// or...
var myJSRuntime = jsRuntime.WithJsonSerializerOptions(jsonOptions);
myJSRuntime.InvokeVoidAsync("myJSFunction", arg1, arg2, ...);
Marking as ready for review, but I will add tests soon to verify these changes. I took the following approach, based on feedback:
I'll send an API review email after this PR gets approved.
@@ -7,6 +7,8 @@
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <RootNamespace>Microsoft.AspNetCore.Components</RootNamespace>
  <Nullable>enable</Nullable>
  <!-- SYSLIB0020: JsonSerializerOptions.IgnoreNullValues is obsolete -->
  <NoWarn>$(NoWarn);SYSLIB0020</NoWarn>
Why does this warning happen? I can't see any usage of IgnoreNullValues in this PR.
This warning was coming from generated code, but it looks like that's not happening anymore for some reason. I've just removed this.
[JsonSerializable(typeof(int))]
[JsonSerializable(typeof(Dictionary<string, JSComponentConfigurationStore.JSComponentParameter[]>))]
[JsonSerializable(typeof(Dictionary<string, List<string>>))]
internal sealed partial class WebRendererSerializerContext : JsonSerializerContext;
I'm curious about how you think we should maintain this in the future. How would a developer know when/what to add to this list of types? Am I right to think that if they start serializing something different and fail to add it here, everything would still work but would just be slower? Or would some error occur so they know to change this code?
Along the same lines, how did you even know to include these specific types? I'm hoping it's because if you don't, there's a clear error indicating they need to be included!
How would a developer know when/what to add to this list of types?
These types match the types of the arguments passed to the JS interop call. Failing to do this results in a runtime error. However, I've just pushed a change that makes it even clearer where these types come from, so hopefully that eliminates any confusion. If we end up reverting that change, I can add a comment explaining how these types should be specified.
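To make that correspondence concrete, here is a standalone sketch (ExampleInteropSerializerContext and SerializeArg are hypothetical names, and the registered types are only illustrative):

using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// One [JsonSerializable] entry per argument type passed to the JS interop call.
[JsonSerializable(typeof(int))]
[JsonSerializable(typeof(Dictionary<string, List<string>>))]
internal sealed partial class ExampleInteropSerializerContext : JsonSerializerContext;

internal static class ExampleInterop
{
    // Serializes a single interop argument using the source-generated metadata. If T were
    // missing from the context above, this would fail at runtime with an error naming the
    // unsupported type.
    public static string SerializeArg<T>(T arg) =>
        JsonSerializer.Serialize(arg, typeof(T), ExampleInteropSerializerContext.Default);
}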
// Required for DefaultWebAssemblyJSRuntime
[JsonSerializable(typeof(RootComponentOperationBatch))]
internal sealed partial class WebAssemblyJsonSerializerContext : JsonSerializerContext;
Same maintainability question as for WebRendererSerializerContext
See #54956 (comment)
/// </param>
/// <param name="args">JSON-serializable arguments.</param>
/// <returns>An instance of <typeparamref name="TValue"/> obtained by JSON-deserializing the return value.</returns>
ValueTask<TValue> InvokeAsync<[DynamicallyAccessedMembers(JsonSerialized)] TValue>(string identifier, JsonSerializerOptions options, CancellationToken cancellationToken, object?[]? args)
Are we doing anything to warn people if they try to use a JsonSerializerOptions that isn't configured with the necessary converters for Blazor's internal functionality, e.g., ElementReference, JSObjectReference, etc.? It won't be obvious that you can't just create your own JsonSerializerOptions from scratch and use it here.
If I spot this elsewhere in the code I'll come back and remove this question!
namespace Microsoft.JSInterop;

internal sealed class JSInteropTask<TResult> : IJSInteropTask
I guess this means we're doing a few extra allocations per JS interop call, right?
Not suggesting that's a flaw in the design, just want to check my assumptions. If this is a requirement to work properly with AOT and gives people a way to improve the perf further using source-generated serializers, that would outweigh a few allocations.
Yeah that does seem problematic. The whole point of using
On the grounds that we're not trying to solve all possible problems here, and that people have been asking for a variant that accepts a
From my point of view this looks good. I posted a couple of questions but expect they are not hard to resolve. I would mark it as "approved" now, but since there are still active lines of questioning from @eiriktsarpalis and @halter73, I'll let you complete those discussions first in case they lead to any broader design changes.

Thanks for being so thorough and careful in the design here. It's always delicate changing something so intrinsic to performance when there are complex back-compat and ecosystem concerns. I think you've made good decisions to reach this point.
I doubt that this change would contribute to any speedups; the transcoding implementation is shared between reflection and source gen. If anything, this should be contributing to slow-downs, since you're now allocating an intermediate
private DefaultWebAssemblyJSRuntime()
{
    ElementReferenceContext = new WebElementReferenceContext(this);
    JsonSerializerOptions.Converters.Add(new ElementReferenceJsonConverter(ElementReferenceContext));
    JsonSerializerOptions.MakeReadOnly(populateMissingResolver: JsonSerializer.IsReflectionEnabledByDefault);
Why are we passing JsonSerializer.IsReflectionEnabledByDefault here?
We should always have reflection enabled by default. That's not something the framework will work without, so it's best if it can't be configured.
This property reflects the value of the JsonSerializerIsReflectionEnabledByDefault feature switch/property (which is controlled, among other things, by the value of PublishTrimmed).
I think JsonSerializerOptions.MakeReadOnly(populateMissingResolver: true); should suffice here.
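A standalone sketch of the behavior under discussion (FallbackSerializerContext, ReadOnlyOptionsExample, and Create are hypothetical names): MakeReadOnly(populateMissingResolver: true) back-fills the reflection-based resolver when none is configured, and throws when reflection-based serialization has been disabled by the feature switch.

using System.Text.Json;
using System.Text.Json.Serialization;

[JsonSerializable(typeof(int))]
internal sealed partial class FallbackSerializerContext : JsonSerializerContext;

internal static class ReadOnlyOptionsExample
{
    public static JsonSerializerOptions Create()
    {
        var options = new JsonSerializerOptions();

        if (JsonSerializer.IsReflectionEnabledByDefault)
        {
            // No resolver configured: this populates the reflection-based default and locks the options.
            options.MakeReadOnly(populateMissingResolver: true);
        }
        else
        {
            // Reflection disabled (e.g. trimmed/AOT builds): MakeReadOnly(populateMissingResolver: true)
            // would throw here, so a source-generated resolver has to be supplied explicitly.
            options.TypeInfoResolver = FallbackSerializerContext.Default;
            options.MakeReadOnly();
        }

        return options;
    }
}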
ValueTask<TValue> InvokeAsync<[DynamicallyAccessedMembers(JsonSerialized)] TValue>(string identifier, JsonSerializerOptions options, object?[]? args)
    => throw new InvalidOperationException($"Supplying a custom {nameof(JsonSerializerOptions)} is not supported by the current JS runtime");
It was my understanding that we don't want to do this at all.
Is this required to improve startup perf? How much improvement do we lose if we don't have it?
Adding support for passing custom JsonSerializerOptions to JS invoke method calls broadens the scope of this feature quite a lot and opens us up to a lot of cases where things can go wrong in subtle and hard-to-debug ways.
For that reason, I would prefer that we stick to making whatever functionality we need internal and maintain our current policy that this part of the system is not configurable, and that if you want to perform custom serialization you can perform the serialization yourself and pass in a byte array.
I also don't want us to live in a world where this is supported on some platforms and not on others. Every time we introduce something like that, it's a headache for customers.
My interpretation was that adding this to IJSRuntime is necessary for the layering to continue to make sense. I was assuming it's not possible to restrict this to DefaultWebAssemblyJSRuntime since the relevant calls come from other layers, but if that's incorrect and it can in fact be restricted that way, it's worth considering.

I also don't want us to live in a world where this is supported on some platforms and not on others. Every time we introduce something like that, it's a headache for customers.

This wouldn't be an issue in practice. This PR adds the support to the underlying JSRuntime base class, which we inherit on all the supported platforms. It would impact any 3rd-party IJSRuntime implementations, but that's realistically only in tests.
- In unit tests, the new calls would not be exercised unless people start using the new APIs, so it's not an issue.
- In integration tests like bUnit, I'm unsure whether the new calls would be exercised, but I think bUnit would want to add support for this anyway.
After taking another look, here is my high-level feedback.
- Unless it is strictly necessary to improve startup performance, we should avoid adding public APIs to JS interop that take in custom JSON options.
- The JS interop APIs have been "mostly if not totally agnostic" of System.Text.Json, and that's an invariant we shouldn't break.
- I don't want us to have to deal with all the things that can go wrong when people use their own JS interop options. There is another side of the coin every time we perform JS interop, which is the JavaScript side, which does not offer any customization.

Using the APIs ourselves internally is reasonable; exposing them to customers is a much bigger change that drastically increases the impact of this change, and I'm concerned about all the things that can go wrong that we don't know about (and won't be able to take back).

So, I think we should consider scoping this down even further to rely on the SG context only for framework and internal calls unless strictly necessary otherwise, and in that case, I believe we need to include an explicit call-out for the set of features that will be supported and those that won't. Otherwise, it'll be confusing when people configure types to serialize graphs etc. and don't get the expected results. A way to phrase this could be: "JS interop only supports serializing and deserializing types using default serialization settings from System.Text.Json."

Finally, if we were to do this, rather than pass the settings via the calls, could we do it via DI by exposing a generic type parameter? Something like JSRuntime<TOptions> where TOptions : ..., and then have a helper method on DI to register a set of options that resolves an instance configured with a particular set of options. That way we avoid having the options on the main method APIs and we still allow libraries and other parts of the system to isolate their usage of the options from the rest of the system. (We would register something like services.AddScoped(typeof(IJSRuntime<>), typeof(WebAssemblyJSRuntime<>)), and then services.RegisterJSInterop<MyOptions>() would add MyOptions to DI.)
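A very rough sketch of that hypothetical DI shape, purely to illustrate the idea; none of these types or methods (IJSInteropOptionsProvider, IJSRuntime<TOptions>, RegisterJSInterop) exist in the framework:

using System.Text.Json;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.JSInterop;

// Hypothetical: describes a named bundle of serializer options owned by a library.
public interface IJSInteropOptionsProvider
{
    JsonSerializerOptions Options { get; }
}

// Hypothetical: a runtime whose serializer options are isolated per TOptions, so libraries
// don't stomp on each other or on the framework's defaults.
public interface IJSRuntime<TOptions> : IJSRuntime
    where TOptions : IJSInteropOptionsProvider
{
}

public static class HypotheticalJSInteropServiceCollectionExtensions
{
    // Sketch of the proposed registration helper.
    public static IServiceCollection RegisterJSInterop<TOptions>(this IServiceCollection services)
        where TOptions : class, IJSInteropOptionsProvider
    {
        services.AddScoped<TOptions>();
        // The framework would map the open generic onto its platform-specific implementation,
        // along the lines of services.AddScoped(typeof(IJSRuntime<>), typeof(WebAssemblyJSRuntime<>)).
        return services;
    }
}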
I appreciate all the thoughtful feedback that everyone's put into this! It's clear that there's still a lot of uncertainty around what the final API should look like, so I've reduced this PR down to a much more minimal one that specifically targets WebAssembly. I also removed the
The only new API is now an
We could always revisit adding new JS interop APIs in the future, and now we have the context of this thread to pull from when landing on the final design.
It wasn't clear that this led to a real perf benefit in optimized builds
// JsonTypeInfo for all the types referenced by both DotNetObjectReference<T> and its
// generic type argument.

var newJsonOptions = new JsonSerializerOptions(jsonOptions);
I had another commit that eliminated this extra JsonSerializerOptions and serialized each argument individually, passing a JsonTypeInfo for the argument directly and avoiding serializing an object[]. There appeared to be a perf benefit when running in the profiler, but it seemed to actually slightly degrade perf on optimized builds (though it was hard to tell), so I reverted it and went with the simpler approach.
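For reference, a sketch of the two shapes being compared (ArgumentSerializationSketch and its methods are hypothetical; the reverted commit's actual code may have differed):

using System.IO;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;

internal static class ArgumentSerializationSketch
{
    // Reverted variant: serialize each argument with its own JsonTypeInfo, writing the JSON
    // array by hand and skipping the intermediate object[] handling.
    public static string SerializePerArgument(JsonSerializerOptions options, params object?[] args)
    {
        using var stream = new MemoryStream();
        using (var writer = new Utf8JsonWriter(stream))
        {
            writer.WriteStartArray();
            foreach (var arg in args)
            {
                if (arg is null)
                {
                    writer.WriteNullValue();
                    continue;
                }

                // Look up metadata for the argument's runtime type and serialize it directly.
                JsonTypeInfo typeInfo = options.GetTypeInfo(arg.GetType());
                JsonSerializer.Serialize(writer, arg, typeInfo);
            }
            writer.WriteEndArray();
        }

        return Encoding.UTF8.GetString(stream.ToArray());
    }

    // Simpler variant kept in the PR: serialize the whole argument array in one call.
    public static string SerializeAsArray(JsonSerializerOptions options, params object?[] args) =>
        JsonSerializer.Serialize(args, options);
}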
newJsonOptions.TypeInfoResolverChain.Add(WebRendererSerializerContext.Default);
newJsonOptions.TypeInfoResolverChain.Add(JsonConverterFactoryTypeInfoResolver<DotNetObjectReference<WebRendererInteropMethods>>.Instance);
var argsJson = JsonSerializer.Serialize(args, newJsonOptions);
inProcessRuntime.InvokeJS(JSMethodIdentifier, argsJson, JSCallResultType.JSVoidResult, 0);
If we were to avoid using the SG for this call, how big is the performance penalty?
This is a single call per app, and we are creating a bunch of interfaces to support it. Might not be worth the extra infrastructure just for this.
Alternatively, the renderer could expose a virtual AttachWebRenderer interop method that WebAssembly could override. If that were to be the case, then we only expose a single method, not an extra interface. The difference would be that the method is likely less accessible and doesn't sit on the JSRuntime.
Either way it's not a big deal IMO. If there is not a big penalty for avoiding this call through regular means, I would do that. If we benefit significantly, I would keep it and probably switch to using the virtual method instead of the extra interface, but it's not a big deal either way.
If we were to avoid using the SG for this call, how big is the performance penalty?
This is a single call per app
Assuming that the goal of this change is to improve startup costs, it shouldn't matter how many times a particular call is made.
It's about the trade-off of having an extra interface and coupling vs saving a couple of ms.
Saving a couple of ms sounds like a desirable goal, assuming that it's a cost incurred by default for every wasm app.
That couple-of-ms happens client-side as part of a several-hundred-ms startup process. It doesn't impact server CPU usage and hence mostly matters only to the extent that humans can notice it. So realistically it wouldn't be top of the queue of things for us to optimize.
If we were to avoid using the SG for this call, how big is the performance penalty?
Locally, I just measured about an 80ms difference on published builds. Not sure exactly how much that difference changes on slower machines, but it seems significant. To put that number into perspective, total startup blocking time with that optimization is (locally) ~330ms.
Alternatively, the renderer could expose a virtual AttachWebRenderer interop method that WebAssembly could override.
We could; it would just mean that if we wanted to do a similar optimization in another area of the framework, we'd have to add additional specialized API (I count 9 other calls to InvokeVoidAsync in Components.Web, but those just happen not to occur during startup).
Is it fine if we proceed with this for now and discuss it further in API review? I think it's an early enough preview that we could change the API later if needed, especially since it's annotated as [EditorBrowsable(Never)].
Fair enough, I leave it up to you.
Looks great. I have one minor remaining comment, but it's pretty much ready to go.
Excellent - thanks for being so flexible and yet precise about all this.
Updates Blazor WebAssembly startup to utilize the JSON source generator as much as possible.
Opening as a draft for now, as there's room for further optimization.