Description
Background and motivation
Tensors were designed to be usable in a fixed statement. This is great for performance, but there are times when we need the data to be pinned and a fixed statement will not work, for example when the pin must outlive a single lexical scope. Currently, when you create a tensor you can have the GC pin the underlying memory, but once the tensor has been created we have no way of pinning it after the fact. We need to expose a way to pin the underlying memory for the cases where a fixed statement won't work.
API Proposal
namespace System.Numerics.Tensors;
public interface IReadOnlyTensor
{
MemoryHandle GetPinnedHandle();
}
API Usage
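A fuller sketch of how the pinned handle might flow into interop code. This is illustrative only: GetPinnedHandle is the API proposed above, and NativeSum is a hypothetical native entry point invented for this example.

```csharp
using System;
using System.Buffers;
using System.Numerics.Tensors;
using System.Runtime.InteropServices;

static class Example
{
    // Hypothetical native function that reads 'count' ints starting at 'data'.
    [DllImport("nativelib")]
    private static extern unsafe int NativeSum(int* data, nint count);

    public static unsafe int SumViaNative(Tensor<int> tensor)
    {
        // Pin after creation: the memory stays fixed until the handle is
        // disposed, a lifetime a 'fixed' statement cannot express across
        // an async or otherwise long-lived scope.
        using MemoryHandle handle = tensor.GetPinnedHandle();
        return NativeSum((int*)handle.Pointer, tensor.FlattenedLength);
    }
}
```

Because MemoryHandle is disposable, the pin's lifetime is explicit and can span awaits or be stored in a field, which is exactly the gap a fixed statement leaves.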
Tensor<int> tensor = Tensor.Create<int>([1, 2, 3, 4], [2, 2]);
MemoryHandle handle = tensor.GetPinnedHandle();
Alternative Designs
One other thought would be to put this in System.Runtime.InteropServices; just as we have CollectionsMarshal, we could add a TensorMarshal so that it is only used when fixed really will not work.
We could also return a GCHandle instead of a MemoryHandle. This may make certain things easier, but it would limit the kinds of memory we can use and interact with here, since GCHandle can only pin GC-tracked objects.
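For comparison, this is roughly how GCHandle pinning works today against a plain array. It is a sketch of the existing mechanism, not the proposed API, and it illustrates the limitation above: GCHandle can only pin objects the GC knows about.

```csharp
using System;
using System.Runtime.InteropServices;

int[] data = { 1, 2, 3, 4 };

// GCHandle can only pin GC-tracked objects (arrays, strings, etc.),
// so native or externally owned tensor memory could not be represented
// this way.
GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
try
{
    nint pointer = handle.AddrOfPinnedObject();
    // ... hand 'pointer' to native code ...
}
finally
{
    handle.Free();
}
```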
namespace System.Numerics.Tensors;
public interface IReadOnlyTensor
{
GCHandle GetPinnedHandle();
}
Currently, we have an interface, IPinnable. This is a nice idea, but it doesn't work well for the multi-dimensional Tensor case, in part because its element offset is an Int32. We could consider creating a new interface, INPinnable or something similar, that does the same thing as IPinnable but for multi-dimensional data.
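For reference, the existing IPinnable surface (implemented by MemoryManager&lt;T&gt; and reached through Memory&lt;T&gt;.Pin()) looks like this; its Int32 element offset is what makes it a poor fit for large multi-dimensional tensors:

```csharp
using System;
using System.Buffers;

// Existing shape in System.Buffers (shown for comparison, not proposed here):
// public interface IPinnable
// {
//     MemoryHandle Pin(int elementIndex); // int offset caps the addressable range
//     void Unpin();
// }

Memory<int> memory = new int[] { 1, 2, 3, 4 };
using MemoryHandle handle = memory.Pin(); // pinned until the handle is disposed
```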
namespace System.Numerics.Tensors;
public interface IReadOnlyTensor
{
// The API for INPinnable would still need to be decided on. It is not part of
// this review; it is just an alternative we could look into.
MemoryHandle Pin();
void Unpin();
}
Risks
This would be a new API on a type that is still in preview, so the risks are very minimal.