
Conversation

@c-r33d (Contributor) commented Oct 27, 2025

Proposed Changes

1.) Correct all instances where Total is not being calculated properly within our list responses.

Checklist

  • I have added or updated unit tests
  • I have added or updated integration tests (if appropriate)
  • I have added or updated documentation

Testing Instructions

@c-r33d c-r33d requested review from a team as code owners October 27, 2025 15:53
@github-actions github-actions bot added comp:db DB component, comp:policy Policy Configuration (attributes, subject mappings, resource mappings, kas registry), size/m labels Oct 27, 2025
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @c-r33d, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue where the pagination Total field in list responses for various policy-related entities was inaccurately reported. The solution involves optimizing the underlying SQL queries to use a window function for precise total row counting, which is then verified by a suite of new, detailed integration tests covering different listing scenarios and filters.
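
To make the query change concrete, here is a minimal before/after sketch of the pattern described above. It is an illustration only, assuming a hypothetical `attribute_definitions` table and an `active` filter; the actual policy queries and schema differ.

```sql
-- Old shape (illustrative): a separate CTE computes the count and is
-- CROSS JOINed onto every row. If the CTE does not repeat the main
-- query's filters, the reported total ignores them.
WITH counted AS (
    SELECT COUNT(id) AS total
    FROM attribute_definitions           -- no filter here: counts everything
)
SELECT ad.id, ad.name, counted.total
FROM attribute_definitions ad
CROSS JOIN counted
WHERE ad.active = TRUE                   -- filter applied only to the page rows
LIMIT 10 OFFSET 0;

-- New shape (illustrative): COUNT(*) OVER () is evaluated over the filtered
-- row set before LIMIT/OFFSET trims the page, so the total matches the
-- applied filters in a single pass.
SELECT ad.id, ad.name, COUNT(*) OVER () AS total
FROM attribute_definitions ad
WHERE ad.active = TRUE
LIMIT 10 OFFSET 0;
```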

Highlights

  • Pagination Total Count Fix: Refactored SQL queries for all list operations across policy entities (attributes, attribute values, namespaces, resource mapping groups, and resource mappings) to correctly calculate the total count for pagination responses.
  • SQL Query Optimization: Replaced the less efficient WITH counted AS (...) CROSS JOIN counted pattern with the COUNT(*) OVER() window function directly within the main SELECT statement, ensuring accurate total counts in a single pass.
  • Comprehensive Integration Tests: Added new integration tests for ListAttributeValues, ListAttributes, ListResourceMappingGroups, and ListResourceMappings to thoroughly validate the correctness of the pagination Total field under various filtering and state conditions.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Count was not quite right,
Window function sets it straight,
Tests now sing with joy.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Contributor) left a comment

Code Review

This pull request correctly fixes an issue where the Total count in paginated list responses was not being calculated properly when filters were applied. The change from using a CTE with CROSS JOIN to the COUNT(*) OVER() window function is a significant improvement, ensuring the total reflects the filtered dataset. The addition of new integration tests is also a great way to verify this fix and prevent future regressions. I've found one minor issue in a new test case.

@github-actions (Contributor) commented:

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 184.393749ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 99.718015ms |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric | Value |
| --- | --- |
| Total Decrypts | 100 |
| Successful Decrypts | 100 |
| Failed Decrypts | 0 |
| Total Time | 370.31378ms |
| Throughput | 270.04 requests/second |

TDF3 Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 39.466460382s |
| Average Latency | 393.328053ms |
| Throughput | 126.69 requests/second |

NANOTDF Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 27.771354504s |
| Average Latency | 276.650195ms |
| Throughput | 180.04 requests/second |

@github-actions (Contributor) commented:

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 178.406555ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 101.528935ms |

Standard Benchmark Metrics Skipped or Failed

Bulk Benchmark Results

| Metric | Value |
| --- | --- |
| Total Decrypts | 100 |
| Successful Decrypts | 100 |
| Failed Decrypts | 0 |
| Total Time | 360.303014ms |
| Throughput | 277.54 requests/second |

TDF3 Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 39.099305333s |
| Average Latency | 389.671149ms |
| Throughput | 127.88 requests/second |

NANOTDF Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 28.382373482s |
| Average Latency | 282.835134ms |
| Throughput | 176.17 requests/second |

@jakedoublev (Contributor) left a comment

🔥

@c-r33d c-r33d added this pull request to the merge queue Oct 28, 2025
Merged via the queue into main with commit 5c1ec9c Oct 28, 2025
62 of 64 checks passed
@c-r33d c-r33d deleted the fix/DSPX-1788-list-rpcs branch October 28, 2025 13:51
@c-r33d (Contributor, Author) commented Oct 29, 2025

/backport

@c-r33d (Contributor, Author) commented Oct 29, 2025

/backport

opentdf-automation bot pushed a commit that referenced this pull request Oct 29, 2025
### Proposed Changes

1.) Correct all instances where `Total` is not being calculated properly
within our list responses.

### Checklist

- [ ] I have added or updated unit tests
- [ ] I have added or updated integration tests (if appropriate)
- [ ] I have added or updated documentation

### Testing Instructions

(cherry picked from commit 5c1ec9c)
@opentdf-automation (Contributor) commented:

Successfully created backport PR for release/service/v0.11:

opentdf-automation bot added a commit that referenced this pull request Oct 29, 2025
### Proposed Changes

1.) Correct all instances where `Total` is not being calculated properly
within our list responses.

### Checklist

- [ ] I have added or updated unit tests
- [ ] I have added or updated integration tests (if appropriate)
- [ ] I have added or updated documentation

### Testing Instructions

(cherry picked from commit 5c1ec9c)
jakedoublev pushed a commit that referenced this pull request Oct 29, 2025
…t to release/service/v0.11] (#2843)

# Description
Backport of #2836 to `release/service/v0.11`.

Co-authored-by: opentdf-automation[bot] <149537512+opentdf-automation[bot]@users.noreply.github.com>

Labels

backport release/service/v0.11, comp:db DB component, comp:policy Policy Configuration (attributes, subject mappings, resource mappings, kas registry), size/m
