Upgrade to wgpu 28 #22265
Conversation
and use MipmapFilterMode in the right places
and the value is no longer a Vec because it applies to all shader stages.
```rust
flags: instance_descriptor.flags,
memory_budget_thresholds: instance_descriptor.memory_budget_thresholds,
backend_options: instance_descriptor.backend_options.clone(),
telemetry: None
```
telemetry seems to be very Firefox-oriented, so I've set it to None.
From: gfx-rs/wgpu#8576
```rust
name.starts_with("Radeon")
    || name.starts_with("AMD")
    || name.starts_with("Intel")

bevy_tasks::IoTaskPool::get().scope(|scope| {
```
This is due to enumerate_adapters being async now. Commenting because it's probably worth a look for reviewers amid the mass of other changes.
Okay, yeah, interesting. We want to revisit this eventually, but since it's for Vulkan, blocking here should be fine?
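For context, here's a minimal sketch of the blocking pattern under discussion, assuming enumerate_adapters now returns a future that resolves to Vec<wgpu::Adapter>; the helper name and its parameters are illustrative, not the actual Bevy code:

```rust
use bevy_tasks::IoTaskPool;

// Illustrative helper: block on the now-async enumerate_adapters by running it
// inside an IoTaskPool scope, which waits for the spawned future to complete.
fn enumerate_adapters_blocking(
    instance: &wgpu::Instance,
    backends: wgpu::Backends,
) -> Vec<wgpu::Adapter> {
    let mut results = IoTaskPool::get().scope(|scope| {
        scope.spawn(async move { instance.enumerate_adapters(backends).await });
    });
    // scope() returns one output per spawned task; exactly one was spawned here.
    results.pop().unwrap()
}
```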
Your PR increases Bevy Minimum Supported Rust Version. Please update the …
andriyDev left a comment
LGTM. AFAICT all the immediates are the correct sizes.
I didn't closely audit the feature/caps changes, but assuming you read the changelog correctly, LGTM pending full regression passes.
```rust
pub push_constant_ranges: Vec<PushConstantRange>,
/// The immediate size for this pipeline.
/// Supply 0 if the pipeline doesn't use immediates.
pub immediate_size: u32,
```
Wonder if we might just add a parenthetical for search: "(also known as push constants)" or something similar.
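Something like this, perhaps (a sketch of the suggested doc wording, nothing authoritative):

```rust
/// The immediate size (also known as push constants) for this pipeline.
/// Supply 0 if the pipeline doesn't use immediates.
pub immediate_size: u32,
```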
```rust
let time_span = diagnostics.time_span(command_encoder, "dlss_super_resolution");

dlss_context
let dlss_command_buffer = dlss_context
```
Why is this necessary? I might be missing something
Mixing the wgpu encoding API with the raw encoding API is no longer allowed, so dlss_wgpu needed some changes, and this reflects the new usage.
The features/caps changes are an entire file we copied from wgpu, so it should be resistant to small human errors in that way.
```toml
tracing = { version = "0.1", default-features = false, features = ["std"] }
dlss_wgpu = { version = "2", optional = true }
# dlss_wgpu = { version = "2", optional = true }
dlss_wgpu = { git = "https://github.com/ChristopherBiscardi/dlss_wgpu.git", branch = "wgpu-28", optional = true }
```
This needs to land upstream and have a release before we can merge this PR.
```rust
};

render_device
let scope = render_device
```
Perhaps call this variable error_scope?
```rust
min_subgroup_size: limits
    .min_subgroup_size
    .max(constrained_limits.min_subgroup_size),
max_subgroup_size: limits
    .max_subgroup_size
    .min(constrained_limits.max_subgroup_size),
```
Just noting in-line for other reviewers that these subgroup limits were moved to adapter info.
```toml
naga_oil = { version = "0.20", default-features = false, features = [
# naga_oil = { version = "0.20", default-features = false, features = [
#     "test_shader",
# ] }
naga_oil = { git = "https://github.com/ChristopherBiscardi/naga_oil.git", branch = "naga-28", default-features = false, features = [
    "test_shader",
] }

[target.'cfg(target_arch = "wasm32")'.dependencies]
naga_oil = { version = "0.20" }
# naga_oil = { version = "0.20" }
naga_oil = { git = "https://github.com/ChristopherBiscardi/naga_oil.git", branch = "naga-28" }
```
Just noting the git repo dependencies here for visibility as something that needs to be fixed before merging this PR.
superdump left a comment
I'm going to approve, but there is one missing wgpu capability (CULL_DISTANCE) and there are git dependencies (naga_oil and dlss_wgpu) that need to be resolved with updates and releases in those crates before merging this.
Upgrade to wgpu 28
Important
This can't merge until bevyengine/naga_oil#132 does, and the dependency is updated from my fork to the release.
Also requires wgpu 28 in dlss_wgpu: bevyengine/dlss_wgpu#17
Note
This does not enable mesh shaders, and neither does the naga_oil PR. I chose to do an upgrade first, then go back and see about mesh shaders.
Here's a general list of changes and what I did. Commits are grouped by feature except for the last one, which enabled solari when I ran the solari example.
MipmapFilterMode is split from FilterMode
Direction: docs #8314: Split MipmapFilterMode from FilterMode (gfx-rs/wgpu#8314).
Solution: implement From for MipmapFilterMode/ImageFilterMode since the values are the same. The split was because the spec indicates they are different types, even though they have the same values.
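A minimal sketch of that conversion, assuming the Bevy-side enum is ImageFilterMode with Nearest/Linear variants and that wgpu 28's MipmapFilterMode mirrors those values:

```rust
impl From<ImageFilterMode> for wgpu::MipmapFilterMode {
    fn from(value: ImageFilterMode) -> Self {
        // The variants line up one-to-one, which is why a plain From impl works.
        match value {
            ImageFilterMode::Nearest => wgpu::MipmapFilterMode::Nearest,
            ImageFilterMode::Linear => wgpu::MipmapFilterMode::Linear,
        }
    }
}
```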
Push Constants are now Immediates
immediate_size is a u32, so use that instead of PushConstantRange.
Capabilities name changes
Got new list from https://github.com/gfx-rs/wgpu/blob/trunk/wgpu-core/src/device/mod.rs#L449 and copied it in.
subgroup_{min,max}_size moved from Limits to AdapterInfo
Updated the limits to have the fields they now have, mirroring the logic from the other limits calls.
multiview_mask
Set to None because we don't use it currently. It's vaguely for VR.
error scope is now a guard
Retain the guard and then pop() it later.
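A rough sketch of that pattern, with the guard's exact API assumed rather than verified:

```rust
// Push an error scope; the returned guard is retained instead of relying on an
// implicit device-level push/pop stack.
let error_scope = device.push_error_scope(wgpu::ErrorFilter::Validation);

// ... create pipelines or other resources that might fail validation ...

// Pop the guard later to check for errors (whether this returns a future to
// await is an assumption here; see the actual call sites in the PR).
let maybe_error = error_scope.pop();
```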
I made one mistake during the PR, thinking set_immediates was going to take the size of the immediates and not the offset. I'd like reviewers to take a look at the immediates offset and size call sites specifically, just in case I missed something.
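For reviewers looking at those sites, a hypothetical call is sketched below; the first argument is the byte offset into the immediates block (not the data size), and the exact wgpu 28 signature is assumed to mirror the old set_push_constants minus the stage flags:

```rust
// Hypothetical call site: write `data` starting at byte offset 0 of the
// immediates block. The first argument is an offset, not the size of `data`.
render_pass.set_immediates(0, bytemuck::bytes_of(&data));
```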
Here's a bunch of examples running
Known Issues
enable wgpu_ray_query;. Putting the declaration before #define_import_path means it gets stripped out (afaict), and putting it after results in …
Previous notes as dlss_wgpu was being fixed here
The wgpu release notes don't mention which PR this was introduced in, only saying:
It was caused by gfx-rs/wgpu#8373, which claimed not to know of any use cases.
A possible path forward is using multiple command buffers: https://discord.com/channels/691052431525675048/743663924229963868/1453795633503670415
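Roughly, that would mean keeping the wgpu-encoded work and the DLSS work in separate command buffers and submitting them in order; a hedged sketch with illustrative variable names:

```rust
// Finish the wgpu-encoded work before and after the DLSS dispatch into their
// own command buffers, then submit all three in order on the same queue.
let before = encoder_before_dlss.finish();
let after = encoder_after_dlss.finish();
queue.submit([before, dlss_command_buffer, after]);
```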