Add notes on bit depth for ints and floats #10028
Merged
Conversation
Added a small note about the bit depth of integers and floats in Godot's shading language, as it is not explicitly stated anywhere. The bit depths of integers and floats in GDScript and Godot's shading language differ, which can cause precision loss in calculations when values are set from GDScript: floats and ints in GDScript are 64-bit, while the shading language uses 32-bit types (the standard in GLSL ES 3.0). While most users are unlikely to run into problems due to this difference in bit depth, it can cause mathematical errors in edge cases.

As stated by previous contributors, no error is thrown if types do not match when setting a shader uniform. This includes 64-bit GDScript floats being silently narrowed to 32-bit shader floats (which may not be intuitive).

Example of a problem this may cause: when setting two uniforms with Time.get_unix_time_from_system() in GDScript (which returns a 64-bit float) a few seconds apart, the two values will compare equal in the shader, and subtracting one from the other will yield 0.0, because the shader's 32-bit floats cannot represent the difference. This is not intuitive to debug without documentation: reading the values back in GDScript still subtracts them correctly (the second float is greater than the first), even though the subtraction fails inside the shader.
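The precision loss described above can be reproduced outside the engine. A minimal Python sketch, using the stdlib struct module to round-trip a 64-bit float through single precision (the timestamp values are hypothetical, standing in for two Time.get_unix_time_from_system() calls a few seconds apart):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python (64-bit) float through IEEE 754 single
    precision, approximating what happens when a GDScript float is
    stored in a 32-bit shader uniform."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Hypothetical Unix-style timestamps, three seconds apart.
t1 = 1_700_000_000.0
t2 = 1_700_000_003.0

# In 64-bit GDScript the subtraction works as expected.
assert t2 - t1 == 3.0

# At this magnitude a float32 can only represent multiples of 128,
# so both timestamps round to the same 32-bit value.
assert to_float32(t1) == to_float32(t2)
assert to_float32(t2) - to_float32(t1) == 0.0
```

This mirrors the shader-side behavior: the values only "collapse" once narrowed to 32 bits, which is why the bug is invisible when inspecting the uniforms from GDScript.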
Additionally, some built-in functions mention 32-bit floating-point numbers (see packHalf2x16, for example), but I could not find anything stating that the default bit depth of ints/floats is 32 bits rather than 64 bits as in GDScript.
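For context on the packHalf2x16 mention above, here is a Python sketch of that function's semantics (an illustrative re-implementation under the GLSL definition, not Godot's actual code), using the struct module's IEEE half-precision format:

```python
import struct

def pack_half_2x16(x: float, y: float) -> int:
    """Mirror GLSL's packHalf2x16: convert two floats to IEEE 754
    half precision and pack them into one 32-bit unsigned integer
    (x in the low 16 bits, y in the high 16 bits)."""
    lo = struct.unpack("<H", struct.pack("<e", x))[0]
    hi = struct.unpack("<H", struct.pack("<e", y))[0]
    return lo | (hi << 16)

def unpack_half_2x16(packed: int) -> tuple:
    """Mirror GLSL's unpackHalf2x16: the inverse of the above."""
    x = struct.unpack("<e", struct.pack("<H", packed & 0xFFFF))[0]
    y = struct.unpack("<e", struct.pack("<H", (packed >> 16) & 0xFFFF))[0]
    return (x, y)

# Values exactly representable in half precision survive the round trip.
assert unpack_half_2x16(pack_half_2x16(1.0, -2.0)) == (1.0, -2.0)
```

The point for the documentation note: functions like this already imply that shader floats are 32-bit, but nothing stated it explicitly before this PR.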
Fixed broken table
AThousandShips changed the title from "Added notes on bit depth for ints and floats" to "Add notes on bit depth for ints and floats" on Oct 2, 2024.
skyace65 added the enhancement, area:manual (Issues and PRs related to the Manual/Tutorials section of the documentation), topic:shaders, and cherrypick:4.3 labels on Oct 3, 2024.
mhilbrunner approved these changes on Oct 4, 2024.
Merged. Thanks and congrats on your first merged contribution!
mhilbrunner pushed a commit that referenced this pull request on Oct 4, 2024:
* Added notes on bit depth for ints and floats
Cherry-picked to 4.3 in #10038.
LESSGOOOO FIRST COMMIT
jonathansekela pushed a commit to jonathansekela/godot-docs that referenced this pull request on Nov 19, 2024:
* Added notes on bit depth for ints and floats