[rlgl] rlGetShaderBufferSize seems to always return 0 #4154

Merged: 1 commit into raysan5:master on Jul 11, 2024

Conversation

@kai-kj (Contributor) commented on Jul 11, 2024

Description

The rlGetShaderBufferSize function seems to always return 0, even after successfully uploading, processing, and downloading data with an SSBO. Looking at the code, the function is implemented using glGetInteger64v:

// Get SSBO buffer size
unsigned int rlGetShaderBufferSize(unsigned int id) {
    long long size = 0;
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, id);
    glGetInteger64v(GL_SHADER_STORAGE_BUFFER_SIZE, &size);
    return (size > 0)? (unsigned int)size : 0;
}

After looking around a bit, it looks like glGetBufferParameteri64v is probably the correct function to call to get the size of the SSBO, see below:

// Get SSBO buffer size
unsigned int rlGetShaderBufferSize(unsigned int id) {
    long long size = 0;
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, id);
    glGetBufferParameteri64v(GL_SHADER_STORAGE_BUFFER, GL_BUFFER_SIZE, &size);
    return (size > 0)? (unsigned int)size : 0;
}

With this implementation, the actual size of the buffer is correctly returned. From what I can tell, glGetInteger64v queries global context state rather than parameters of the currently bound buffer object (GL_SHADER_STORAGE_BUFFER_SIZE appears to be meant for the indexed binding-point query, glGetInteger64i_v), though I'm not entirely sure. Is the current implementation the intended behaviour, or should it actually return the current size of the buffer?
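
(As an aside, on OpenGL 4.5+ contexts the same GL_BUFFER_SIZE parameter can also be queried without binding the buffer, via the direct state access variant. This is just a sketch for comparison, not part of the proposed change:)

// Sketch only (requires OpenGL 4.5 / ARB_direct_state_access):
// queries GL_BUFFER_SIZE from the buffer object directly, no bind needed
long long size = 0;
glGetNamedBufferParameteri64v(id, GL_BUFFER_SIZE, &size);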

Environment

Windows 11, desktop:

INFO: GL: OpenGL device information:
INFO:     > Vendor:   NVIDIA Corporation
INFO:     > Renderer: NVIDIA GeForce RTX 3060 Laptop GPU/PCIe/SSE2
INFO:     > Version:  4.3.0 NVIDIA 546.30
INFO:     > GLSL:     4.30 NVIDIA via Cg compiler

Code Example

#include <stdio.h>
#include <stdlib.h>

#include "raylib.h"
#include "rlgl.h"

int main(void) {
    InitWindow(800, 600, "pathtracer");  // needed to create an OpenGL context

    // Allocate a 100-byte SSBO, then query its size back
    unsigned int target_size = 100;
    unsigned int buffer = rlLoadShaderBuffer(target_size, NULL, RL_DYNAMIC_COPY);

    unsigned int actual_size = rlGetShaderBufferSize(buffer);
    printf("target size: %u, actual size: %u\n", target_size, actual_size);

    rlUnloadShaderBuffer(buffer);
    CloseWindow();

    return 0;
}

With the glGetInteger64v implementation, the code always outputs target size: 100, actual size: 0, but with the glGetBufferParameteri64v implementation, it outputs target size: 100, actual size: 100.

raysan5 merged commit 8d5374a into raysan5:master on Jul 11, 2024
@raysan5 (Owner) commented on Jul 11, 2024

@kal39 Good catch, thanks for the improvement! Actually, it seems the previous implementation should have used glGetInteger64i_v() instead of glGetInteger64v(), as per the specs.
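
For reference, a minimal sketch of the indexed query mentioned above, assuming the buffer is bound to indexed binding point 0 with an explicit range (the binding index and 100-byte range are illustrative, not part of rlgl). Note that this reports the size of the range bound at that binding point rather than a property of the buffer object itself, which is why the merged fix queries GL_BUFFER_SIZE via glGetBufferParameteri64v instead:

// Sketch only: glGetInteger64i_v() queries per-index binding state,
// here the size of the range bound at SSBO binding point 0
long long rangeSize = 0;
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, id, 0, 100);      // illustrative 100-byte range
glGetInteger64i_v(GL_SHADER_STORAGE_BUFFER_SIZE, 0, &rangeSize); // rangeSize is now 100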
