
Add GpuArrayBuffer and BatchedUniformBuffer #8204

Merged · 27 commits · Jul 21, 2023
Conversation

@JMS55 (Contributor) commented Mar 25, 2023

Objective

  • Add a type for uploading a Rust Vec<T> to a GPU array<T>.
  • Makes progress towards GPU Instancing #89.

Solution

  • Port @superdump's BatchedUniformBuffer to bevy main, as a fallback for WebGL2, which doesn't support storage buffers.
    • Rather than getting an array<T> in a shader, you get an array<T, N>, and have to rebind every N elements via dynamic offsets.
  • Add GpuArrayBuffer to abstract over StorageBuffer<Vec<T>>/BatchedUniformBuffer (see the sketch below).
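
A rough sketch of the shape of this abstraction (placeholder types standing in for the real render resources; not the actual implementation):

```rust
// Placeholder stand-ins so the sketch is self-contained; the real bevy types
// wrap GPU buffers and book-keeping, not plain Vecs.
struct StorageBuffer<T>(T);
struct BatchedUniformBuffer<T> {
    batch: Vec<T>,
    batch_size: usize, // N elements per binding, rebound via dynamic offsets
}

// GpuArrayBuffer picks a backing representation based on device support.
enum GpuArrayBuffer<T> {
    // Preferred path: one storage buffer holding the whole Vec<T>.
    Storage(StorageBuffer<Vec<T>>),
    // WebGL2 fallback: fixed-size uniform arrays bound at dynamic offsets.
    Uniform(BatchedUniformBuffer<T>),
}

impl<T> GpuArrayBuffer<T> {
    // In practice the choice would come from the device's limits.
    fn new(supports_storage_buffers: bool, batch_size: usize) -> Self {
        if supports_storage_buffers {
            Self::Storage(StorageBuffer(Vec::new()))
        } else {
            Self::Uniform(BatchedUniformBuffer { batch: Vec::new(), batch_size })
        }
    }
}
```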

Future Work

Add a shader macro or similar mechanism to abstract over the two binding forms automatically: #8204 (review)


Changelog

  • Added GpuArrayBuffer, GpuComponentArrayBufferPlugin, GpuArrayBufferable, and GpuArrayBufferIndex types.
  • Added DynamicUniformBuffer::new_with_alignment() (usage sketched below).
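
A hedged usage sketch of the new constructor; `MyUniform` and the surrounding setup are illustrative assumptions, not code from this PR:

```rust
use bevy::render::render_resource::{DynamicUniformBuffer, ShaderType};
use bevy::render::renderer::RenderDevice;

#[derive(ShaderType)]
struct MyUniform {
    value: f32,
}

// Align dynamic offsets to the device's minimum uniform-buffer offset
// alignment instead of the default.
fn make_buffer(render_device: &RenderDevice) -> DynamicUniformBuffer<MyUniform> {
    let alignment = render_device.limits().min_uniform_buffer_offset_alignment as u64;
    DynamicUniformBuffer::new_with_alignment(alignment)
}
```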

@JMS55 JMS55 added this to the 0.11 milestone Mar 25, 2023
@JMS55 JMS55 added C-Feature A new feature, making something new possible A-Rendering Drawing game state to the screen labels Mar 25, 2023
@JMS55 JMS55 requested a review from superdump March 25, 2023 01:56
@superdump superdump requested a review from robtfm March 25, 2023 04:46
@robtfm (Contributor) left a comment

Code and comment quality is good. The code broadly looks right, but I really need an example to kick it around properly. I don't think the feature needs an example in the repo, but maybe you have something you've been using while building it that I could look at?

crates/bevy_render/src/render_resource/gpu_list.rs (outdated review thread, resolved)
@JMS55 (Contributor, Author) commented Mar 27, 2023

I have some messy code here that uses GpuList for MeshUniforms. If it's not helpful, I can write up a separate example. The next step after this PR will be to use GpuList for MeshUniforms, and then for materials.

@JMS55 (Contributor, Author) commented Mar 29, 2023

Reminder to myself that I need to add robswain to the commit authors.

@JMS55 (Contributor, Author) commented Mar 31, 2023

I kind of fixed the commit history... good enough, I guess?

@superdump (Contributor) left a comment

Just a few small changes. Otherwise LGTM. I look forward to using this. :)

One nice-to-have, though I'm not sure how to do it: it would be useful to have a nice shader abstraction for this. I suppose the shader side will be either:

var<uniform> my_list: array<T, #{T_BATCH_SIZE}>;

or

var<storage> my_list: array<T>;

but either way, the array will be indexed into, so the interface should be the same. So we need a way of setting the binding type to either uniform or storage, and if uniform then setting the array size. Currently I think one could do that like this:

```wgsl
#ifdef T_BATCH_SIZE
var<uniform> my_list: array<T, #{T_BATCH_SIZE}u>;
#else
var<storage> my_list: array<T>;
#endif
```

Comment on lines +96 to +120
```rust
// Wrapper reporting a fixed maximum capacity (the usize field) regardless of
// the wrapped array's current length, so the binding size stays constant.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
struct MaxCapacityArray<T>(T, usize);

impl<T> ShaderType for MaxCapacityArray<T>
where
    T: ShaderType<ExtraMetadata = ArrayMetadata>,
{
    type ExtraMetadata = ArrayMetadata;

    const METADATA: Metadata<Self::ExtraMetadata> = T::METADATA;

    fn size(&self) -> ::core::num::NonZeroU64 {
        // Size is the stride times the max capacity, not the current length.
        Self::METADATA.stride().mul(self.1.max(1) as u64).0
    }
}

impl<T> WriteInto for MaxCapacityArray<T>
where
    T: WriteInto + RuntimeSizedArray,
{
    fn write_into<B: BufferMut>(&self, writer: &mut Writer<B>) {
        debug_assert!(self.0.len() <= self.1);
        self.0.write_into(writer);
    }
}
```
Contributor:

This code was written by @teoxoy so we need to add credit for them to the commit that introduces it.

Contributor:

Thanks! If this is ready for production I can merge the branch in encase and do a release.
Let me know!

Contributor:

It works fine for us. :) There is that other aspect of being able to start the next dynamic-offset binding of a uniform buffer at the next dynamic-offset alignment when not all of the space is used, and of ensuring that the final binding is full-size. I don't know whether that would clash with this and basically deprecate this approach immediately. If so, maybe you'd prefer that we use a solution in bevy for what we need, and add the long-term, more flexible solution to encase when someone gets to it. What do you think?
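
For illustration, "starting at the next dynamic offset alignment" amounts to rounding the running write offset up to the alignment boundary; a generic sketch, not encase or bevy API:

```rust
// Round `offset` up to the next multiple of `alignment` (a power of two).
// E.g. align_up(300, 256) == 512, so a full-size binding can start there.
fn align_up(offset: u64, alignment: u64) -> u64 {
    debug_assert!(alignment.is_power_of_two());
    (offset + alignment - 1) & !(alignment - 1)
}
```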

Contributor:

I won't block the PR on this. We can figure it out over time. :)

Contributor:

I tried to rebase to give credit on the original commit but due to merges it was a pain. I instead added a comment and a co-authored-by so that when the squash merge is done, the credit will follow along with it.

Contributor:

Ok, we can further iterate and see what we come up with. Thanks for the credit!

Outdated review threads (resolved):
  • crates/bevy_render/src/render_resource/gpu_list.rs (5 threads)
  • crates/bevy_render/src/render_resource/storage_buffer.rs (2 threads)
  • crates/bevy_render/src/render_resource/uniform_buffer.rs (1 thread)
@JMS55 (Contributor, Author) commented Apr 24, 2023

> One nice-to-have, though I'm not sure how to do it: it would be useful to have a nice shader abstraction for this. I suppose the shader side will be either:

Sadly we don't have a macro system (nor am I sure we'd want one; those are easy to abuse...), so I'm not sure how we'd do that. Maybe some kind of custom thing in our shader parser, like this:

var<gpu_list> my_list: GpuList<T>;

that the parser would replace with the appropriate declaration before passing the source to naga (see the sketch below).
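
A hypothetical sketch of that parser rewrite, purely illustrative (`expand_gpu_list` and its text-replacement approach are assumptions, not an existing bevy API):

```rust
// Replace the custom `var<gpu_list>` declaration with a storage- or
// uniform-buffer declaration before handing the source to naga.
fn expand_gpu_list(source: &str, batch_size: Option<u32>) -> String {
    let decl = "var<gpu_list> my_list: GpuList<T>;";
    let replacement = match batch_size {
        // WebGL2 fallback: fixed-size uniform array, rebound via dynamic offsets.
        Some(n) => format!("var<uniform> my_list: array<T, {n}u>;"),
        // Storage buffers available: runtime-sized array.
        None => "var<storage> my_list: array<T>;".to_string(),
    };
    source.replace(decl, &replacement)
}
```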

@superdump (Contributor) commented

JMS55 confirmed on Discord that I can commit the two outstanding proposals and rebase this PR to get it merged as they are busy with other things at the moment.

@JMS55 JMS55 added this to the 0.12 milestone Jun 11, 2023
@IceSentry (Contributor) commented

Haven't reviewed it yet, so I might be missing context, but I do think I'd prefer GpuArrayBuffer. GpuList confused me for the longest time when I saw the PR title; I didn't really look into it because I thought it was something about listing GPUs.

@JMS55 JMS55 changed the title Add GpuList and BatchedUniformBuffer Add GpuArrayBuffer and BatchedUniformBuffer Jun 26, 2023
@JMS55 (Contributor, Author) commented Jun 26, 2023

Due to popular demand, GpuList -> GpuArrayBuffer. I think this PR is ready to go now :)

@JMS55 JMS55 added the S-Ready-For-Final-Review This PR has been approved by the community. It's ready for a maintainer to consider merging it label Jun 26, 2023
@IceSentry (Contributor) left a comment

A couple of tiny nits, but otherwise LGTM.

@superdump (Contributor) left a comment

Let's get this merged after this docs change, pending another approval.

@robtfm robtfm self-requested a review July 21, 2023 11:42
@superdump superdump mentioned this pull request Jul 21, 2023
@superdump superdump added this pull request to the merge queue Jul 21, 2023
Merged via the queue into bevyengine:main with commit ad011d0 Jul 21, 2023
20 checks passed
github-merge-queue bot pushed a commit that referenced this pull request Sep 25, 2023
# Objective
This is a minimally disruptive version of #8340. I attempted to update
it, but failed due to the scope of the changes added in #8204.

Fixes #8307. Partially addresses #4642. As seen in #8284, we're actually
copying data twice in Prepare-stage systems: once into a CPU-side
intermediate scratch buffer, and once again into a mapped buffer. This is
inefficient and effectively doubles the time spent and memory allocated to
run these systems.

## Solution
Skip the scratch buffer entirely and use
`wgpu::Queue::write_buffer_with` to directly write data into mapped
buffers.
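
As a rough sketch of the pattern (illustrative, not the code in this commit):

```rust
// Write `data` straight into wgpu's staging memory via the view returned by
// Queue::write_buffer_with, with no intermediate scratch allocation.
fn upload(queue: &wgpu::Queue, buffer: &wgpu::Buffer, data: &[u8]) {
    if let Some(size) = wgpu::BufferSize::new(data.len() as u64) {
        if let Some(mut view) = queue.write_buffer_with(buffer, 0, size) {
            // QueueWriteBufferView derefs to a mutable byte slice.
            view.copy_from_slice(data);
        }
    }
}
```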

Separately, this also directly uses
`wgpu::Limits::min_uniform_buffer_offset_alignment` to set up the alignment
when writing to the buffers, partially addressing the issue raised in #4642.

Storage buffers and the abstractions built on top of
`DynamicUniformBuffer` will need to come in followup PRs.

This may not make a noticeable performance difference in this PR, as the
only first-party systems affected are view-related and likely not
particularly heavy.

---

## Changelog
Added: `DynamicUniformBuffer::get_writer` (usage sketched below).
Added: `DynamicUniformBufferWriter`.
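
A hedged usage sketch of the new writer API; `MyUniform`, the helper function, and the exact parameter order are assumptions:

```rust
use bevy::render::render_resource::{DynamicUniformBuffer, ShaderType};
use bevy::render::renderer::{RenderDevice, RenderQueue};

#[derive(ShaderType)]
struct MyUniform {
    value: f32,
}

// Reserve room for `values.len()` elements and write each one directly into
// the mapped buffer; each write returns that element's dynamic offset.
fn upload(
    buffer: &mut DynamicUniformBuffer<MyUniform>,
    values: &[MyUniform],
    device: &RenderDevice,
    queue: &RenderQueue,
) -> Vec<u32> {
    let mut offsets = Vec::new();
    if let Some(mut writer) = buffer.get_writer(values.len(), device, queue) {
        for value in values {
            offsets.push(writer.write(value));
        }
    }
    offsets
}
```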
@cart cart mentioned this pull request Oct 13, 2023
43 tasks
rdrpenguin04 pushed a commit to rdrpenguin04/bevy that referenced this pull request Jan 9, 2024
Labels
A-Rendering (Drawing game state to the screen) · C-Feature (A new feature, making something new possible) · S-Ready-For-Final-Review (This PR has been approved by the community. It's ready for a maintainer to consider merging it)

Projects
Status: Done

8 participants