Lifting this out of the discussion at dotnet/designs#217:
Overview
.NET apps, at the end of the day, have to communicate with native APIs. Not everything is done in managed code. Many third-party libraries and SDKs are written in code that gets compiled to native CPU instructions, and it's not just C/C++ code; there is also a trend now to rewrite common libraries in Rust.
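For context, the usual way a .NET app reaches that native code is P/Invoke. A minimal sketch, assuming a hypothetical native library named foo that exports a foo_get_version function:

```csharp
using System.Runtime.InteropServices;

static class Foo
{
    // "foo" is mapped to foo.dll on Windows, libfoo.so on Linux and
    // libfoo.dylib on macOS; for apps consuming NuGet packages the host
    // typically probes RID-specific folders such as runtimes/<rid>/native/.
    [DllImport("foo", EntryPoint = "foo_get_version")]
    internal static extern int GetVersion();
}
```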
Some platforms have a concept of "fat binaries" or "universal binaries": a single binary file that contains CPU instructions for multiple architectures.
In Apple's environments this has been a feature since NeXTSTEP 3.1, and has been critical in the migrations from PowerPC to Intel processors, from Intel 32-bit to Intel 64-bit, and now from Intel processors to Apple Silicon (Arm). It has also been used heavily (though relatively quietly) in iPhone OS/iOS/iPadOS as Apple has moved through armv6, armv7, armv7s and arm64.
Though I'm not familiar with the details, dotnet/sdk#16896 suggests that Windows now has a similar mechanism to produce universal binaries.
As it currently stands, I believe the .NET Runtime does not take this into consideration when searching for native binaries to load, which results in extra development effort or wasted resources.
Use-Case
Let's say I have a really cool library named Foo, as all great things are named. For Windows I would generally produce an x86 foo.dll and an x64 foo.dll. For Linux I would generally produce an i386 libfoo.so and an x86-64 libfoo.so.
For macOS, there is no need to follow this pattern. I can produce a single libfoo.dylib that contains x86 code, x86_64 code, and arm64e code all in the same file. For iOS it is similar but with different platforms.
Package managers or proprietary 3rd-party SDKs may ship in this format, so this becomes input for the .NET SDK and .NET Runtime.
As it stands, I believe there are only two ways to consume such a binary, neither of which are satisfactory:
1. Duplicate the binary:
I can take the existing libfoo.dylib and copy it into multiple locations. I don't know the runtime paths off the top of my head but for a NuGet package I would need to copy it to:
runtimes/osx-x64/native/libfoo.dylib
runtimes/osx-arm64/native/libfoo.dylib
This is wasteful as it consumes:
- additional disk space when unpacked
- additional disk space when packaged, as ZIP files (.nupkg) do not apply compression across multiple entries
- additional network transfer (see the previous point about packaged size)
2. Split the binary:
I can take the existing libfoo.dylib and use lipo to split it into multiple slices. I can then place each of these slices at runtimes/osx-<arch>/native/libfoo.dylib.
This requires additional effort on the consumer's part to subvert the native operating system's capabilities and split the binary into separate files just so that .NET's loader will find them in the correct locations.
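In both workarounds, the asset that actually gets used is keyed off the architecture of the running process, which can be inspected at run time. A small sketch (the printed values are only illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

// An arm64 .NET process on an Apple Silicon Mac resolves osx-arm64 assets,
// while the same app run as x64 (e.g. under Rosetta 2) resolves osx-x64 ones.
Console.WriteLine(RuntimeInformation.ProcessArchitecture); // e.g. Arm64
Console.WriteLine(RuntimeInformation.RuntimeIdentifier);   // e.g. osx-arm64
```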
Proposal
I would like to suggest that .NET should be able to consume universal binaries as-is, without duplication or splitting. In the example above, I expect there would be a universal RID defined for each platform (e.g. osx or osx-universal, ios or ios-universal, etc.), and I could place the native library in that folder.
Then, when .NET attempts to load the universal binary, the operating system's dynamic library loader should automatically select the architecture slice that matches the CPU architecture of the .NET runtime process, just as it does for any other application, platform, or runtime.
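To illustrate the mechanism the proposal relies on: on macOS, dlopen already performs this slice selection whenever it is handed a fat file, which can be seen by pointing NativeLibrary at a universal dylib directly. A rough sketch, assuming a hypothetical universal libfoo.dylib with a foo_get_version export; the path is made up:

```csharp
using System;
using System.Runtime.InteropServices;

// NativeLibrary.Load ultimately calls dlopen on macOS, and dlopen picks the
// slice of a universal (fat) dylib matching the current process architecture,
// so the same file works for both x64 and arm64 processes.
IntPtr handle = NativeLibrary.Load("/path/to/libfoo.dylib");        // hypothetical path
IntPtr export = NativeLibrary.GetExport(handle, "foo_get_version"); // hypothetical export
var getVersion = Marshal.GetDelegateForFunctionPointer<FooGetVersion>(export);
Console.WriteLine($"foo version: {getVersion()}");
NativeLibrary.Free(handle);

delegate int FooGetVersion();
```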