
ComputeBuffer.BeginWrite/EndWrite adaptation #136

Open
krisrok opened this issue Oct 27, 2021 · 3 comments
Labels
enhancement New feature or request

krisrok commented Oct 27, 2021

This is more of a question (and hopefully a discussion), as I know @keijiro knows his way around Unity's bleeding-edge APIs :)

I noticed the call to ComputeBuffer.SetData() inside the frame decoding brings performance down with multiple streams or higher resolutions, as it stalls the main thread.
I had a look around, and Unity offers an experimental way to write to the ComputeBuffer in an async manner: https://docs.unity3d.com/2020.1/Documentation/ScriptReference/ComputeBuffer.BeginWrite.html
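For context, the usage pattern from those docs looks roughly like this (a sketch only, not KlakNDI's actual code; `buffer`, `frameData` and `count` are placeholders, and the buffer must have been created with `ComputeBufferMode.SubUpdates`):

```csharp
using Unity.Collections;

// Per-frame upload path, replacing buffer.SetData(frameData).
// BeginWrite maps a CPU-writable range of the buffer as a NativeArray.
NativeArray<uint> mapped = buffer.BeginWrite<uint>(0, count);
mapped.CopyFrom(frameData);    // write the decoded frame directly into the mapped range
buffer.EndWrite<uint>(count);  // commit: 'count' elements were written
```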

I know 4K streams and similar are not your intended use case, but maybe that'd be a viable optimization. Happy to hear your thoughts.

@keijiro keijiro self-assigned this Oct 28, 2021
@keijiro keijiro added the question Further information is requested label Oct 28, 2021

keijiro commented Oct 28, 2021

My understanding is that ComputeBuffer.BeginWrite is not an async mechanism; it just exposes Unity's internal buffer to C#. I'd guess you can get a performance gain on a unified memory architecture (e.g. Apple M1), but not much on discrete GPUs.

Apart from that, it would reduce memory pressure, especially in multi-4K situations. I'll consider using it in future updates.

@keijiro keijiro changed the title Async GPU upload ComputeBuffer.BeginWrite/EndWrite adaptation Oct 28, 2021

krisrok commented Oct 28, 2021

I kept looking for information about it, and this thread on the forums seems to be a pretty good resource, although the Unity engineer states there's still much to be desired in this particular field.

It seems you can define a buffer as CPU-write-only and GPU-read-only. I don't know enough about the deeper tech side of things, but the Unity staff also state the following, which I think applies to the upload in the NDI decoder:

> Reading from it is slower on GPU but if you need to perform an upload every frame regardless it can make sense to use it.
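For what it's worth, I believe that CPU-write-only mode corresponds to passing `ComputeBufferMode.SubUpdates` when creating the buffer (a sketch; `count` and the stride are placeholders):

```csharp
using UnityEngine;

// A buffer the CPU can map for writing (BeginWrite/EndWrite) while the GPU reads it.
// SubUpdates typically places the data in CPU-visible memory, which is why GPU-side
// reads can be slower -- but for a buffer re-uploaded every frame it can still win.
var buffer = new ComputeBuffer(
    count,                        // element count (placeholder)
    sizeof(uint),                 // stride in bytes (placeholder)
    ComputeBufferType.Default,
    ComputeBufferMode.SubUpdates);
```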

@keijiro keijiro added enhancement New feature or request and removed question Further information is requested labels Nov 1, 2021

keijiro commented Nov 1, 2021

Thanks for the information. It makes sense, and I'd like to deep-dive into it in future updates, but it won't happen very soon, for a few reasons:

  • There is no firm information about this optimization, so I would have to spend a significant amount of time on research and development.
  • I guess the optimization is platform/device-dependent, so it would require additional time for profiling.
  • ComputeBuffer is going to be superseded by GraphicsBuffer. I should rewrite some parts of the package to use GraphicsBuffer, and I want to do that before the optimization work.
  • GraphicsBuffer.BeginWrite hasn't been implemented yet. I heard it will come in Unity 2022.1.
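For reference, the ComputeBuffer-to-GraphicsBuffer swap itself should be mostly mechanical (a sketch with placeholder names; the mapped-write API would come on top of this once it ships):

```csharp
using UnityEngine;

// Structured ComputeBuffer -> GraphicsBuffer equivalent.
var buffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, count, sizeof(uint));

buffer.SetData(frameData);                   // same upload API as ComputeBuffer for now
material.SetBuffer("_FrameBuffer", buffer);  // binds the same way on the shader side
```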

I want to keep this issue ticket open as "enhancement".
