Serializer support for easier object and collection converters #1562
Moving the status of this extensibility model feature to future per discussion with @rynowak. Instead we will implement built-in converters for those in …
This issue needs to be re-worked to add the latest thoughts, and extended to support custom deserialization of objects (in addition to enumerables). In addition, a custom converter for enumerable or object using a new extensibility model would avoid the "read-ahead" performance hit that we currently have, where we need to pre-read the JSON in order to ensure Stream-based scenarios do not run out of data\JSON during the custom converter.
It would also be nice to get something similar to old JSON.NET's …
I assume this is because different collections may need different converters for their elements. Otherwise, you'd just specify a converter for the element type, which would be used both for single objects and for collection elements.
@steveharter, sorry, I don't fully understand what you are trying to say. There is a difference between

```csharp
[JsonConverter(typeof(MyObjectsCollectionJsonConverter))]
public IEnumerable<MyObject> CollectionOfObjects { get; set; }
```

and

```csharp
[JsonProperty(ItemConverterType = typeof(MyObjectJsonConverter))]
public IEnumerable<MyObject> CollectionOfObjects { get; set; }
```

The first converter is expected to return an `IEnumerable<MyObject>`; the second converts the individual `MyObject` elements. If I'm not missing any APIs, the first one is possible today and the second one is not. What I was trying to say is that the second scenario is very convenient, as it allows you to reuse one converter for an object and for a collection of objects. You can typically do it by registering the converter for the type externally with the serializer, but if …
Today the converter for `MyObject` would be used both for a single object and for elements within a collection, since it is obtained from the global set of converters. I'm assuming this wouldn't work for your scenario?
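For reference, a minimal sketch of what "registering the converter externally" looks like, and how it applies to both a single object and collection elements (`MyObject` and `MyObjectJsonConverter` are hypothetical names carried over from the comment above; the JSON shape is my own example):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

public class MyObject
{
    public int Id { get; set; }
}

public class MyObjectJsonConverter : JsonConverter<MyObject>
{
    public override MyObject Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        // As an example of custom handling, expect a bare number instead of an object.
        return new MyObject { Id = reader.GetInt32() };
    }

    public override void Write(Utf8JsonWriter writer, MyObject value, JsonSerializerOptions options)
    {
        writer.WriteNumberValue(value.Id);
    }
}

public static class Demo
{
    public static void Main()
    {
        var options = new JsonSerializerOptions();
        options.Converters.Add(new MyObjectJsonConverter());

        // The same registered converter is picked up for a single object...
        Console.WriteLine(JsonSerializer.Serialize(new MyObject { Id = 1 }, options)); // 1

        // ...and for each element of a collection.
        var list = new List<MyObject> { new MyObject { Id = 1 }, new MyObject { Id = 2 } };
        Console.WriteLine(JsonSerializer.Serialize(list, options)); // [1,2]
    }
}
```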
Will the introduction of internal collection converters help against the boxing of array elements? I am sending

```csharp
var data = new int[1024];
for (var i = 0; i < 30; ++i)
    JsonSerializer.Serialize(data);
```

and was quite shocked to see that in 3.1, this allocates 24kiB alone by boxing all 1024 elements individually for each invocation. This easily dwarfs all other allocations, including those for the string holding the result. Luckily it is rather simple in my code base to wrap the arrays in a type with one property annotated with …
Yes, array perf was one of the main drivers for this refactoring. Currently 5.0 (master) will no longer box array elements or the array enumerator. Using your example but wrapping it in a loop of 100, 3.1 generates:

5.0 generates:

Also CPU perf is improved with large collections. Serializing a single 10,000-element Int32 array:
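The benchmark output itself did not survive here, but the allocation difference is easy to observe without a full benchmark harness. A minimal probe of my own (not from the thread) using `GC.GetAllocatedBytesForCurrentThread`:

```csharp
using System;
using System.Text.Json;

public static class AllocationProbe
{
    public static void Main()
    {
        var data = new int[1024];
        JsonSerializer.Serialize(data); // warm up so one-time costs don't skew the number

        long before = GC.GetAllocatedBytesForCurrentThread();
        JsonSerializer.Serialize(data);
        long after = GC.GetAllocatedBytesForCurrentThread();

        // On 3.1 this includes a boxed object per element; on 5.0 it should not.
        Console.WriteLine($"Allocated: {after - before} bytes per call");
    }
}
```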
I see. You would like the ability to specify the converter for a given element type in a collection. The serializer design supports this in a way, since a collection converter knows about the converter for its underlying element type, but it always obtains it from the global set of converters. Currently, however, there is no attribute or run-time mechanism to specify the converter type for an element of a particular collection type or a specific property. Without a feature to address this, there are two ways to work around this limitation:
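The list of work-arounds did not survive, but one approach consistent with the surrounding discussion is a property-level collection converter that hard-codes the element converter it wants instead of resolving one from the global options. A sketch, reusing the hypothetical `MyObject` and `MyObjectJsonConverter` from the earlier comment (this is my illustration, not necessarily the two options the author listed):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Collection converter that pins a specific element converter.
public class MyObjectListConverter : JsonConverter<List<MyObject>>
{
    private readonly MyObjectJsonConverter _elementConverter = new MyObjectJsonConverter();

    public override List<MyObject> Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        // Reader is positioned on StartArray; delegate each element to the pinned converter.
        var list = new List<MyObject>();
        while (reader.Read() && reader.TokenType != JsonTokenType.EndArray)
        {
            list.Add(_elementConverter.Read(ref reader, typeof(MyObject), options));
        }
        return list;
    }

    public override void Write(Utf8JsonWriter writer, List<MyObject> value, JsonSerializerOptions options)
    {
        writer.WriteStartArray();
        foreach (MyObject item in value)
        {
            _elementConverter.Write(writer, item, options);
        }
        writer.WriteEndArray();
    }
}

// Applied per property, in the style of Json.NET's ItemConverterType:
public class Holder
{
    [JsonConverter(typeof(MyObjectListConverter))]
    public List<MyObject> CollectionOfObjects { get; set; }
}
```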
Closing as the refactoring work was performed in #2259. As mentioned in that PR, a new issue will be created for any new APIs or features that layer on this refactoring.
Extend the existing converter model for collections and objects. The current converter model (with base class `JsonConverter`) was intended primarily to support value types, not collections and objects.

Extending the converter model to objects and collections will have the following benefits:

- Improved performance: less boxing (e.g. of `List<T>.Enumerator`) and less re-processing of logic, including determining what type of collection is being (de)serialized.
- Consistent `JsonException` properties.
- A `GetConverter` API that allows one converter to forward to another converter via `JsonSerializationOptions.GetConverter()`. Currently, the built-in support for objects and collections does not have any converters that are returned from `GetConverter()`, since that logic exists in the "main loop"; a converter that wants to forward to those must manually re-enter the main (De)Serialize methods, which is slow and has issues including losing "json path" semantics for exceptions. (See the sketch after this list.)
- Support for non-generic converters (avoiding `MakeGenericType` in order to call a converter). For example, a converter that implements `System.Runtime.Serialization.ISerializable`. It is also used internally for the root object being returned for the non-generic deserialize\serialize methods.
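For orientation, the converter-forwarding pattern already works today for value types via `JsonSerializerOptions.GetConverter(Type)`; the point above is that it does not work for objects and collections, whose logic lives in the main loop. A sketch of the pattern with a hypothetical wrapper type (my example, not from the issue):

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical type serialized as a bare int by forwarding to int's converter.
[JsonConverter(typeof(WrapperConverter))]
public class Wrapper
{
    public int Value { get; set; }
}

public class WrapperConverter : JsonConverter<Wrapper>
{
    public override Wrapper Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        // Forward to the registered converter for int rather than re-entering
        // JsonSerializer.Deserialize, which is slow and loses "json path" info.
        var inner = (JsonConverter<int>)options.GetConverter(typeof(int));
        return new Wrapper { Value = inner.Read(ref reader, typeof(int), options) };
    }

    public override void Write(Utf8JsonWriter writer, Wrapper value, JsonSerializerOptions options)
    {
        var inner = (JsonConverter<int>)options.GetConverter(typeof(int));
        inner.Write(writer, value.Value, options);
    }
}
```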
Background

Currently all internal value-type converters (e.g. String, DateTime, etc.) use the same converter model (base class and infrastructure) as custom converters. This means the runtime and extensibility are co-dependent, which allows for shared logic and essentially no limit on what can be done within a converter. The existing converter model has proven itself very flexible and performant.

However, collections and objects currently do not use the converter model. Instead they are implemented within a "main loop" consisting of static methods along with state classes.
The state classes (`ReadStack`, `ReadStackFrame`, `WriteStack`, `WriteStackFrame`) exist to support async-based streams, where the call stack may need to unwind in order to read async or flush async from the underlying stream, and then continue once that is done. This is done to keep memory requirements low and increase throughput for large streams -- the serializer does not "drain" the stream ahead of time and instead has first-class support for streams and async.

The state classes, along with the converter design, support shared code for both sync and async. This prevents having to write both an async and a sync converter, for example, and avoids the overhead of using `async` and passing the stream down to almost every method. This shared-code benefit applies to both the runtime and custom converters.

With a new converter model, the state classes will remain for unwind\continuation support, but will also work alongside the normal CLR call stack, with ~1 call frame for each level in the JSON (for each nested JSON array or object). This makes the code more performant.
A limitation of the existing converter model is that it must "read-ahead" during deserialization to fully populate the buffer up to the end of the current JSON level. This read-ahead only occurs when the async+stream `JsonSerializer` deserialize methods are called, and only when the current JSON for that converter starts with a StartArray or StartObject token. Read-ahead is required because the existing converter design has no mechanism to "unwind" (when data is exhausted) and "continue" (when more data is read), so the converter expects all data to be present and expects that reader.Read() will never return false due to running out of data in the current buffer. If the converter does not start with StartArray or StartObject, then it is assumed the converter will not call reader.Read().

Similarly, the existing converter model does not support async flush of the underlying stream from a converter's write methods. Again, this only applies to the async+stream case, and only when the converter performs multiple write operations that may hit a threshold. Note that the built-in implementation for objects and collections (which does not use converters) does support async flush (and async read) when thresholds are hit, but converters do not.
Proposed API
The ReadStack* and WriteStack* structs will likely be renamed and will have a few members, such as one for obtaining the JsonPath, a dictionary for reference handling, and state used for continuation after an async read\flush.
In addition to the above base classes, there will likely be:
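The proposed base class shapes did not survive here. Purely as an illustration of the direction described above -- converters that can unwind and continue instead of requiring read-ahead -- they might look something like this (all names and signatures are my assumption, not the actual proposal):

```csharp
using System;
using System.Text.Json;

// Hypothetical: a converter that can stop when the buffer is exhausted and
// resume later, rather than requiring the serializer to read ahead.
public abstract class ResumableJsonConverter<T>
{
    // Returns false when more data is needed; progress is captured in 'state'
    // so the serializer can await more bytes and call back in.
    public abstract bool TryRead(
        ref Utf8JsonReader reader, JsonSerializerOptions options, ref ReadState state, out T value);

    // Returns false when the writer should be flushed asynchronously before continuing.
    public abstract bool TryWrite(
        Utf8JsonWriter writer, T value, JsonSerializerOptions options, ref WriteState state);
}

// Hypothetical renamed continuation state (the issue says the ReadStack* and
// WriteStack* structs "will likely be renamed").
public struct ReadState { /* JsonPath, reference-handling dictionary, resume position, ... */ }
public struct WriteState { /* flush thresholds, enumerator position, ... */ }
```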