
re_renderer: detect badly sized texture buffers before handing to wgpu #4005

Merged · 2 commits · Oct 26, 2023
69 changes: 54 additions & 15 deletions crates/re_renderer/src/resource_managers/texture_manager.rs
@@ -106,6 +106,18 @@ pub enum TextureCreationError {
max_texture_dimension_2d: u32,
},

#[error(
"Wrong number of bytes in the texture data buffer - expected {width}x{height}x{bytes_per_texel}={expected_bytes}, got {actual_bytes}"
)]
WrongBufferSize {
width: u32,
height: u32,
bytes_per_texel: u32,
expected_bytes: usize, // product of the above

actual_bytes: usize,
},

#[error(
"Texture with debug label {label:?} has a format {format:?} that data can't be transferred to!"
)]
@@ -346,52 +358,79 @@ impl TextureManager2D {
) -> Result<GpuTexture2D, TextureCreationError> {
re_tracing::profile_function!();

if creation_desc.width == 0 || creation_desc.height == 0 {
return Err(TextureCreationError::ZeroSize(creation_desc.label.clone()));
let Texture2DCreationDesc {
label,
data,
format,
width,
height,
} = creation_desc;
let (width, height, format) = (*width, *height, *format);

if width == 0 || height == 0 {
return Err(TextureCreationError::ZeroSize(label.clone()));
}

let max_texture_dimension_2d = device.limits().max_texture_dimension_2d;
if creation_desc.width > max_texture_dimension_2d
|| creation_desc.height > max_texture_dimension_2d
{
if width > max_texture_dimension_2d || height > max_texture_dimension_2d {
return Err(TextureCreationError::TooLarge {
width: creation_desc.width,
height: creation_desc.height,
width,
height,
max_texture_dimension_2d,
});
}

if !format.is_compressed() {
Member commented:

why don't we use our Texture2DBufferInfo class here instead and check for its buffer_size_unpadded field?

Member Author replied:

Why not? Because nothing in this function uses Texture2DBufferInfo, so it didn't occur to me. It also requires me to write more code and to understand what buffer_size_unpadded is in relation to the input data.

Counter-question: why use Texture2DBufferInfo and buffer_size_unpadded here? Would it make the code shorter? Or more correct?

Member replied:

It sure would make it more correct. As is, the code will fail again in the future when somebody comes along with a compressed format, in which case we won't do any check at all.

@Wumpf (Oct 25, 2023):

> Because nothing in this function uses Texture2DBufferInfo, so it didn't occur to me.

It probably should though, the function predates this utility :)

As for understanding buffer_size_unpadded: the code sadly already requires you to understand this, since we deal with two different kinds of widths in here, the padded and the unpadded one.
Looks like we've offloaded that to wgpu by now; the variable names are just reminiscent of when we didn't.

@Wumpf (Oct 26, 2023):

I mean it's not a big issue at all, I approved the PR, didn't I? 😄
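The padded vs. unpadded widths mentioned above come from wgpu's requirement that `bytes_per_row` in buffer-to-texture copies be a multiple of `COPY_BYTES_PER_ROW_ALIGNMENT` (256). As a rough sketch of that padding arithmetic (the helper name is hypothetical; only the alignment constant comes from wgpu):

```rust
// wgpu's alignment requirement for bytes_per_row in buffer->texture
// copies (wgpu::COPY_BYTES_PER_ROW_ALIGNMENT), inlined here so the
// sketch has no wgpu dependency.
const COPY_BYTES_PER_ROW_ALIGNMENT: u32 = 256;

/// Round an unpadded row size up to the next multiple of the copy alignment.
fn padded_bytes_per_row(unpadded: u32) -> u32 {
    let align = COPY_BYTES_PER_ROW_ALIGNMENT;
    ((unpadded + align - 1) / align) * align
}

fn main() {
    // A 100-texel-wide Rgba8 row is 400 bytes unpadded, 512 bytes padded.
    assert_eq!(padded_bytes_per_row(400), 512);
    // Already-aligned rows are unchanged.
    assert_eq!(padded_bytes_per_row(256), 256);
    println!("ok");
}
```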

if let Some(bytes_per_texel) = creation_desc
.format
.block_size(Some(wgpu::TextureAspect::All))
{
let expected_bytes = width as usize * height as usize * bytes_per_texel as usize;

if data.len() != expected_bytes {
return Err(TextureCreationError::WrongBufferSize {
width,
height,
bytes_per_texel,
expected_bytes,

actual_bytes: data.len(),
});
}
}
}

let size = wgpu::Extent3d {
width: creation_desc.width,
height: creation_desc.height,
width,
height,
depth_or_array_layers: 1,
};
let texture = texture_pool.alloc(
device,
&TextureDesc {
label: creation_desc.label.clone(),
label: label.clone(),
size,
mip_level_count: 1, // TODO(andreas)
sample_count: 1,
dimension: wgpu::TextureDimension::D2,
format: creation_desc.format,
format,
usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
},
);

let width_blocks = creation_desc.width / creation_desc.format.block_dimensions().0;
let width_blocks = width / format.block_dimensions().0;
let block_size = creation_desc
.format
.block_size(Some(wgpu::TextureAspect::All))
.ok_or_else(|| TextureCreationError::UnsupportedFormatForTransfer {
label: creation_desc.label.clone(),
format: creation_desc.format,
label: label.clone(),
format,
})?;
let bytes_per_row_unaligned = width_blocks * block_size;

// TODO(andreas): Once we have our own temp buffer for uploading, we can do the padding inplace
// I.e. the only difference will be if we do one memcopy or one memcopy per row, making row padding a nuisance!
let data: &[u8] = creation_desc.data.as_ref();
let data: &[u8] = data.as_ref();

// TODO(andreas): temp allocator for staging data?
// We don't do any further validation of the buffer here as wgpu does so extensively.
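The uncompressed-format check added in this diff boils down to a simple size equation: the data buffer must hold exactly width × height × bytes-per-texel bytes. A minimal standalone sketch of that check (hypothetical helper names, no wgpu dependency; `bytes_per_texel` plays the role of wgpu's block size for uncompressed formats, e.g. 4 for Rgba8Unorm):

```rust
/// Expected byte count for a tightly packed, uncompressed 2D texture.
fn expected_buffer_len(width: u32, height: u32, bytes_per_texel: u32) -> usize {
    width as usize * height as usize * bytes_per_texel as usize
}

/// Returns Err((expected, actual)) when the data buffer is mis-sized,
/// mirroring the WrongBufferSize error variant introduced in this PR.
fn check_buffer_len(
    width: u32,
    height: u32,
    bytes_per_texel: u32,
    data_len: usize,
) -> Result<(), (usize, usize)> {
    let expected = expected_buffer_len(width, height, bytes_per_texel);
    if data_len != expected {
        return Err((expected, data_len));
    }
    Ok(())
}

fn main() {
    // A 64x32 Rgba8 texture needs 64 * 32 * 4 = 8192 bytes.
    assert_eq!(expected_buffer_len(64, 32, 4), 8192);
    assert!(check_buffer_len(64, 32, 4, 8192).is_ok());
    // A short buffer is rejected before it ever reaches wgpu.
    assert_eq!(check_buffer_len(64, 32, 4, 8000), Err((8192, 8000)));
    println!("ok");
}
```

Catching the mismatch here turns a hard-to-diagnose wgpu validation error (or worse, an out-of-bounds read) into a descriptive error message.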
emilk marked this conversation as resolved.