Integer samplers in shaders don't seem to provide integers representing the integer values set in the texture #57841
Comments
Still happens in Godot 3.5 rc8, and also happens in Godot 4.0. I use a texture to store data and did not want to use regular float formats. Reproduced with a shader along these lines (see the sketch below).
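A minimal sketch of the kind of shader that shows the problem, assuming a canvas_item material and a data texture created from Image.FORMAT_R8 (the uniform name and values are illustrative):

shader_type canvas_item;

// Integer sampler bound to a texture whose Image was filled with raw bytes.
uniform usampler2D data_tex;

void fragment() {
	// Expected: the original byte written into the image (e.g. 200).
	// Observed: not the original integer, because the texture is bound
	// with a normalized float (UNORM) internal format.
	uint raw = texelFetch(data_tex, ivec2(0, 0), 0).r;
	COLOR = vec4(float(raw) / 255.0, 0.0, 0.0, 1.0);
}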
This bug is a pain while working with voxel data (lots of integers) in fragment shaders. You always have to do the conversion workaround, as in the sketch below.
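A minimal sketch of that workaround, assuming Godot 4's shading language and voxel IDs packed into a float-format 3D texture (names are illustrative):

shader_type spatial;

uniform sampler3D voxel_data;

uint voxel_id(ivec3 p) {
	// Fetch the float texel and reinterpret its bit pattern as an unsigned int.
	return floatBitsToUint(texelFetch(voxel_data, p, 0).r);
}

void fragment() {
	// Use the low byte of the id just to visualize something.
	ALBEDO = vec3(float(voxel_id(ivec3(0, 0, 0)) & 255u) / 255.0);
}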
After some discussion on the dev channel, we found that Godot always binds the texture with a normalized float (UNORM) format. BTW, issue #69880 contains a minimal project for reproduction.
In the current API, the texture format is determined when the texture is initialized (which is most likely a UNORM format).

With further discussion, I propose that the Texture classes should provide an option to choose the on-GPU format (UNORM, SINT, UINT, or any other).

This means the developer can create an integer-format texture from the same image data. Now technically, I think Vulkan might duplicate the texture on the GPU. However, I'd say correctness first.
I think this still happens in 4.0.beta16. Not 100% certain since shaders are new to me, though. My use case is passing a 3D array of IDs corresponding to the items in a GridMap as a texture uniform. What I'm doing that seems to work as a workaround is packing the integer bits into a float-format texture and reinterpreting them with floatBitsToUint() in the shader.
I think this still happens in 4.0.2. With further discussion, I propose that Texture classes should provide an option to choose between UNORM, SINT, UINT, or any other on-GPU format, too.
I'm wondering why the proposed solution is to set this on the texture? Usually the format is defined in the image, like when you want one red 32-bit float channel you use Image.FORMAT_RF.
I landed here looking for a way to pass uints into shaders with accuracy. I can confirm otonashixav's workaround appears to work (many thanks!). For any others landing here, this is roughly what the final code looks like (pseudo code, sketched below). GDScript:
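A minimal sketch of the GDScript side along those lines, assuming a one-row FORMAT_RF data texture and that `material` is the target ShaderMaterial (sizes and the "traces" uniform name are illustrative):

#GDScript
var values := [12345, 67890, 42]
var pba := PackedByteArray()
pba.resize(values.size() * 4)
for i in values.size():
	pba.encode_u32(i * 4, values[i])  # store the raw 32-bit pattern of each int
var img := Image.create_from_data(values.size(), 1, false, Image.FORMAT_RF, pba)
var tex := ImageTexture.create_from_image(img)
material.set_shader_parameter("traces", tex)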
Shader Code:
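And the shader side, reading the float texels and reinterpreting their bits (a sketch under the same assumptions):

shader_type canvas_item;

uniform sampler2D traces;

void fragment() {
	// texelFetch avoids filtering; floatBitsToUint recovers the encoded integer.
	uint value = floatBitsToUint(texelFetch(traces, ivec2(0, 0), 0).r);
	COLOR = vec4(float(value & 255u) / 255.0, 0.0, 0.0, 1.0);
}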
Note that there also exist encode_s32() (signed 32-bit) and an equivalent floatBitsToInt().
Building on previous comments, I found I needed additional elements. This is a complete listing of code for 4.1.2, C++ and GDScript.

To recap the issue: we want to store integers in an integer-format texture, and read integers in the shader, without any conversion. The solution is for Image and Texture to support standard OpenGL integer texture formats; the RenderingDevice already does. As a workaround, we can now store integers in a float-format texture, then directly read it in the shader as an integer texture. The conversion is done in C++ and is inconsequential: it just reinterprets memory differently or, at worst, shifts some bits around. There's no recalculation of the significand and exponent, so no precision loss.

Creation
//GDExtension C++
uint32_t value = 12345;
int width = 1024, height = 1024;
PackedByteArray pba;
pba.resize(width * height * sizeof(uint32_t));
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
pba.encode_u32((y * width + x) * sizeof(uint32_t), value);
}
}
Ref<Image> new_img = Image::create_from_data(width, height, false, Image::FORMAT_RF, pba);

#GDScript
var value: int = 12345
var width: int = 1024
var height: int = 1024
var pba: PackedByteArray = PackedByteArray()
pba.resize(width * height * 4) # 4 bytes per 32-bit uint
for x in width:
for y in height:
pba.encode_u32((y*width + x) * 4, value)
var img: Image = Image.create_from_data(width, height, false, Image.FORMAT_RF, pba)
var tex: ImageTexture = ImageTexture.create_from_image(img)

Per pixel reading/writing

After creation, if you wish to write this data on the CPU one pixel at a time, do not use the PackedByteArray. It works, but it is copy-on-write and tanked my performance as soon as I wrote to it for terrain painting, reducing to 2-3 s per click.

// don't do this
PackedByteArray pba = img->get_data();
uint32_t value = pba.decode_u32(index);
value = 54321;
pba.encode_u32(index, value); //copy on write
img->set_data(width, height, false, Image::FORMAT_RF, pba);

Instead, since we can't write to the data container directly without triggering a copy, we'll have to use set_pixel() and get_pixel().

//GDExtension C++
uint32_t i = 12345;
float r = *(float *)&i;
img->set_pixel(0, 0, Color(r, 0.f, 0.f, 1.0f));
i = 0;
r = 0.f;
r = img->get_pixel(0, 0).r;
*(float *)&i = r;
UtilityFunctions::print("i: ", i, ", r: ", r, ", pix: ", img->get_pixel(0, 0));
// Output:
i: 12345, r: 0, pix: (0, 0, 0, 1)

#GDScript
var i: int = 12345
var pba: PackedByteArray
pba.resize(4)
pba.encode_u32(0, i)
var r: float = pba.decode_float(0)
var img: Image = Image.create(1, 1, false, Image.FORMAT_RF)
img.set_pixel(0, 0, Color(r, 0., 0., 1.0));
pba.clear()
pba.resize(4)
i = 0
r = 0.0
r = img.get_pixel(0, 0).r;
pba.encode_float(0, r)
i = pba.decode_u32(0)
print("i: ", i, ", r: ", r, ", pix: ", img.get_pixel(0, 0));
# Output:
i: 12345, r: 0, pix: (0, 0, 0, 1)

Printing the value of the pixel or color.r is useless, as the number is not a valid float. It prints 0, but as shown by the decoded integer i, the stored bits are intact.

Shader

At least in Godot 4.1.2, the shader doesn't need any special conversion. Comments above show using sampler2D on the uniform to read pixels as floats, then converting with floatBitsToUint. However, we can interpret the FORMAT_RF texture as int directly.

uniform usampler2D mysampler;
uint value = texelFetch(mysampler, ivec2(0, 0), 0).r;
uniform usampler2DArray mysamplerarr;
uint value = texelFetch(mysamplerarr, ivec3(0, 0, 0), 0).r;
Do we have examples of passing various integer types via texture in the shader documentation?
Using this instead of manipulating PackedByteArrays makes the hack above slightly easier:
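For example, a minimal sketch using PackedInt32Array.to_byte_array() (an assumption about what was meant; the original snippet may have differed):

#GDScript
var ids := PackedInt32Array([12345, 67890, 42])
# to_byte_array() yields the raw 32-bit patterns, so no per-element encode_u32 loop is needed.
var pba := ids.to_byte_array()
var img := Image.create_from_data(ids.size(), 1, false, Image.FORMAT_RF, pba)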
I have been taking a stab at trying to fix this, but the way image formatting is currently implemented on the rendering device is making this tricky. Data formats are implemented in one giant enum in RenderingDevice.

What would make the most sense would be to allow separate specification of data layout (channel count and bit depth) and interpretation (UNORM, UINT, SINT, float, and so on). As a stopgap, we could add an option on the Texture classes to choose the on-GPU format, as proposed above.

I would encourage the Godot devs to not let this issue drop. It is still a major usability problem as of 4.3, and while the workarounds above do work in many cases, they add overhead and in some cases may result in a loss of precision. The current method of selecting the internal format carries a large amount of tech debt that is only going to get harder to fix and maintain as time passes and more capabilities are added to the engine. The Godot dev team will save themselves a lot of headaches in the long run if they take the time to properly refactor this now. I am happy to contribute to this, but it is more than I can do single-handed.
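For context, the RenderingDevice path can already create a true integer-format texture today; a minimal sketch, assuming Godot 4.2+ for Texture2DRD and illustrative sizes:

#GDScript
var rd := RenderingServer.get_rendering_device()
var fmt := RDTextureFormat.new()
fmt.format = RenderingDevice.DATA_FORMAT_R32_UINT  # real unsigned-integer format
fmt.width = 256
fmt.height = 256
fmt.usage_bits = RenderingDevice.TEXTURE_USAGE_SAMPLING_BIT | RenderingDevice.TEXTURE_USAGE_CAN_UPDATE_BIT
var data := PackedByteArray()
data.resize(256 * 256 * 4)  # one uint32 per texel
var rid := rd.texture_create(fmt, RDTextureView.new(), [data])
var tex := Texture2DRD.new()  # exposes the RD texture as a regular Texture2D resource
tex.texture_rd_rid = rid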
My solution has no overhead or precision loss in Terrain3D (C++). We have exposed the conversion functions that eliminate the overhead in GDScript as well. We're reading uints directly in the shader without issue. The only issue I'd like to see resolved is that Image and Texture don't support uint formats, but since the shader already reinterprets the memory, it's more of a convenience factor at this point.
@TokisanGames Your solution works if your inputs are 32-bit integers. It wouldn't work for 8- or 16-bit integers. There is still a concern of precision loss in converting an 8-bit int to an 8-bit UNORM, as well as the overhead in float conversion, division, and multiplication.
(BTW @TokisanGames, I believe you can save the extra overhead of copying the PackedByteArray.)
Thanks for the suggestion, but we don't use PackedByteArray. That was an example for completeness. We use the example where we directly write ints into the Image's float memory with set_pixel rather than construct a buffer. We then let the shader read that texture as int memory. We also expose these functions to GDScript. You're right about 32-bit. FORMAT_RF through FORMAT_RGBAF are the only formats that bypass any conversion. Not only do Image and Texture need int formats, they need more unfiltered input functions to support writing the appropriate type.
Godot version
v3.4.2.stable.arch_linux
System information
Linux, GLES3
Issue description
This link provides context and a workaround.
Values are expected to be passed as 32-bit integers that directly map to the RGBA values of a texture, but it seems that floating point values are passed and encoded into an integer.
Steps to reproduce
Create an image from data and apply its texture to a shader like so:
img.create_from_data(num_samples, 1, false, Image.FORMAT_RGBA8, rgba)
var texture = ImageTexture.new()
texture.create_from_image(img, 0)
material.set_shader_param("traces", texture)
In the shader, have:
shader_type canvas_item;
uniform usampler2D traces;
Minimal reproduction project
No response