[SDL3] Several rendering tests failing on big-endian architectures #8315
Comments
This is a bit weird. The output .bmp files all look right, except that one has a non-trivial alpha channel, but the comparison says they are different. So I think we have some compensating errors somewhere, with the results of the following tests:
render_testPrimitives
render_testBlit
render_testBlitColor
surface_testBlit
surface_testBlitBlendNone
surface_testBlitColorMod
On a big-endian system like this, we expect SDL_PIXELFORMAT_RGBA8888 to have 4 bytes per pixel, encoding R, G, B, A in that order, equivalent to SDL_PIXELFORMAT_RGBA32 (red in the most significant byte of each 4-byte word, and alpha in the least). The surface tests seem like they match that. Meanwhile, the render tests are in SDL_PIXELFORMAT_ARGB8888, which is 4 bytes per pixel, encoding A, R, G, B in that order, equivalent to SDL_PIXELFORMAT_ARGB32 (alpha in the most significant byte and blue in the least). Both of those seem like they match up with the masks shown.
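For reference, the *32 names are byte-order aliases for the packed *8888 formats, so the mapping described above can be checked directly. Below is a minimal sketch (assuming an SDL3 build with the usual <SDL3/SDL.h> include layout; the program is not part of this issue) that reports which packed format each alias resolves to on the current host:

```c
#include <stdio.h>
#include <SDL3/SDL.h>

int main(void)
{
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    /* Big-endian host: the first byte in memory is the most significant byte
     * of the packed Uint32, so the byte-order aliases line up with the
     * same-named packed formats. */
    printf("RGBA32 == RGBA8888? %d\n",
           SDL_PIXELFORMAT_RGBA32 == SDL_PIXELFORMAT_RGBA8888);
    printf("ARGB32 == ARGB8888? %d\n",
           SDL_PIXELFORMAT_ARGB32 == SDL_PIXELFORMAT_ARGB8888);
#else
    /* Little-endian host: the first byte in memory is the least significant
     * byte, so the same aliases map to the "reversed" packed formats. */
    printf("RGBA32 == ABGR8888? %d\n",
           SDL_PIXELFORMAT_RGBA32 == SDL_PIXELFORMAT_ABGR8888);
    printf("ARGB32 == BGRA8888? %d\n",
           SDL_PIXELFORMAT_ARGB32 == SDL_PIXELFORMAT_BGRA8888);
#endif
    return 0;
}
```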
I think the problem might be that
No, that's wrong; I think I'm being confused by SDL's terminology for image formats again.
It looks like the real problem is that
That's correct.
Previously, if acting on a surface with less than 32 bits per pixel, this code was placing the pixel value from the surface in the first few bytes of the Uint32 to be decoded, and unrelated data from a subsequent pixel in the remaining bytes.

Because SDL_GetRGBA takes the bits to be decoded from the least-significant bits of the given value, ignoring the higher-order bits if any, this happened to be correct on little-endian platforms, where the first few bytes store the least-significant bits of an integer. However, it was incorrect on big-endian, where the first few bytes are the most-significant bits of an integer.

The previous implementation also assumed that unaligned access to a 32-bit quantity is possible, which is not the case on all CPUs (but happens to be true on x86).

These issues were not discovered until now because SDLTest_CompareSurfaces() is only used in testautomation, which until recently was not being run routinely at build-time, because it contained other assumptions that can fail in an autobuilder or CI environment.

Resolves: #8315

Signed-off-by: Simon McVittie <[email protected]>
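To illustrate the failure mode described in that commit message, here is a sketch only (hypothetical helper names, not the actual SDL_test code) contrasting a straight 32-bit read of a narrower pixel with a byte-by-byte read that always leaves the pixel value in the least-significant bits, ready for SDL_GetRGBA():

```c
#include <SDL3/SDL.h>

/* Problematic pattern: always reads a full 32-bit word, so for a surface with
 * fewer than 4 bytes per pixel the extra bytes come from the next pixel. On
 * little-endian hosts the wanted bytes land in the least-significant bits
 * (which is what SDL_GetRGBA() decodes), so it happens to work; on big-endian
 * hosts they land in the most-significant bits, so the wrong value is decoded.
 * It also performs a potentially unaligned 32-bit access. */
Uint32 read_pixel_broken(const Uint8 *p, int bytes_per_pixel)
{
    (void)bytes_per_pixel;          /* ignored: always reads 4 bytes */
    return *(const Uint32 *)p;
}

/* Portable pattern: assemble the value one byte at a time, so the pixel
 * always ends up in the least-significant bits regardless of host byte
 * order, and no unaligned 32-bit access is needed. */
Uint32 read_pixel_portable(const Uint8 *p, int bytes_per_pixel)
{
    Uint32 value = 0;
    int i;

    for (i = 0; i < bytes_per_pixel; i++) {
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
        value = (value << 8) | p[i];          /* first byte is most significant */
#else
        value |= (Uint32)p[i] << (8 * i);     /* first byte is least significant */
#endif
    }
    return value;                             /* then decode with SDL_GetRGBA() */
}
```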
I opened #8318 to try to clarify this for future contributors.
A snapshot of 441a5b7 (plus #8312 applied as a patch) is failing build-time tests on big-endian architectures like s390x and powerpc in Debian experimental, with errors like this:
render_testBlit, render_testBlitColor, surface_testBlit, surface_testBlitBlendNone, surface_testBlitColorMod and surface_testBlitAlphaMod fail similarly.

Unfortunately I don't have access to the output bitmap files until/unless I can reproduce this via remote access to a big-endian machine (all I get from the official autobuilders is a text log). This is almost certainly confusion between byte orders (red in least significant byte vs. red in first byte, or similar).
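To make the suspected byte-order confusion concrete, here is a minimal, SDL-free sketch (not from this report) showing that the same packed 32-bit pixel value is laid out as different byte sequences in memory on little- and big-endian hosts:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* One packed ARGB8888-style value: A=0x11, R=0x22, G=0x33, B=0x44. */
    uint32_t pixel = 0x11223344u;
    unsigned char bytes[4];

    memcpy(bytes, &pixel, sizeof(bytes));

    /* Big-endian host:    11 22 33 44  (alpha is the first byte in memory)
     * Little-endian host: 44 33 22 11  (blue is the first byte in memory) */
    printf("in-memory bytes: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```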