
SDLTest_CompareSurfaces: Decode pixels correctly on big-endian platforms #8317

Merged
1 commit merged into libsdl-org:main on Sep 29, 2023

Conversation

smcv (Contributor) commented on Sep 29, 2023

Previously, if acting on a surface with less than 32 bits per pixel, this code was placing the pixel value from the surface in the first few bytes of the Uint32 to be decoded, and unrelated data from a subsequent pixel in the remaining bytes.

Because SDL_GetRGBA takes the bits to be decoded from the least-significant bits of the given value, ignoring the higher-order bits if any, this happened to be correct on little-endian platforms, where the first few bytes store the least-significant bits of an integer.

However, it was incorrect on big-endian, where the first few bytes are the most-significant bits of an integer.

The previous implementation also assumed that unaligned access to a 32-bit quantity is possible, which is not the case on all CPUs (but happens to be true on x86).

These issues were not discovered until now because SDLTest_CompareSurfaces() is only used in testautomation, which until recently was not run routinely at build time because it contained other assumptions that can fail in an autobuilder or CI environment.

Resolves: #8315


I think this would also be worthwhile to cherry-pick to SDL 2 and sdl2-compat, if it's applicable there.

It might even be a good idea to add a slow and simple but fully correct SDL_ReadSurfacePixel(SDL_Surface *, int x, int y, Uint8 *r, Uint8 *g, Uint8 *b, Uint8 *a) to the public API. However, I'd prefer to fix the test with this private implementation before promoting it to public API, so that API design considerations don't delay the fix.

Resolves: libsdl-org#8315
Signed-off-by: Simon McVittie <[email protected]>
smcv (Contributor, Author) commented on Sep 29, 2023

SDL_CalculateShapeBitmap() and RecursivelyCalculateShapeTree() open-code a slightly different version of this, in a way that I'm not 100% sure is right for 3-bytes-per-pixel on big-endian or access-requires-alignment platforms either.

It seems like a trap that SDL_GetRGBA() takes an integer first argument that requires this processing to have already been done. While SDL3 is breaking ABI anyway, it might be better to make it take a const void * first argument that points to the first byte of an encoded pixel? But that could cause trouble for the way it's used in the software renderer implementation of SDL_DrawLine() (I don't fully understand how that works).

slouken merged commit d95d2d7 into libsdl-org:main on Sep 29, 2023
slouken (Collaborator) commented on Sep 29, 2023

> SDL_CalculateShapeBitmap() and RecursivelyCalculateShapeTree() open-code a slightly different version of this, in a way that I'm not 100% sure is right for 3-bytes-per-pixel on big-endian or access-requires-alignment platforms either.
>
> It seems like a trap that SDL_GetRGBA() takes an integer first argument that requires this processing to have already been done. While SDL3 is breaking ABI anyway, it might be better to make it take a const void * first argument that points to the first byte of an encoded pixel? But that could cause trouble for the way it's used in the software renderer implementation of SDL_DrawLine() (I don't fully understand how that works).

This API change might make sense for SDL3. Can you create a separate issue for this?

smcv (Contributor, Author) commented on Sep 29, 2023

> This API change might make sense for SDL3. Can you create a separate issue for this?

#8319, #8320.

Successfully merging this pull request may close these issues.

[SDL3] Several rendering tests failing on big-endian architectures