
Implement fuzzing and e2e PoC #3384

Merged
arkid15r merged 7 commits into test-main-rebase-merge from feature/e2e-backend
Jan 16, 2026

Conversation

@arkid15r
Collaborator

Proposed change

Merge e2e/fuzzing feature branch

Checklist

  • Required: I followed the contributing workflow
  • Required: I verified that my code works as intended and resolves the issue as described
  • Required: I ran make check-test locally: all warnings addressed, tests passed
  • I used AI for code, documentation, tests, or communication related to this PR

ahmedxgouda and others added 7 commits January 16, 2026 11:33
* Add dockerfile and compose and update makefile

* Dump db-data

* Reorder makefile

* Dump db-data

* Remove e2e dockerfile

* Add CI/CD

* Apply rabbit's suggestions

* Add postgres instead of pgvector

* Remove needs

* Update envs

* Fix migrations step

* Add envs to docker

* Remove migrations step

* Remove --without test from dockerfile

* Copy tests in dockerfile and add needs to gh workflow

* Update dockerfile

* Apply suggestion

* Use the e2e instance in the frontend e2e tests

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD and dockerfile

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Restore needs for the job

* Update Makefiles

* Update docs

* Apply suggestions

* Update load-data

* Skip sonar error

* update port numbers

* Update docs

* Update code

* Add required shell property back

* Bump config.webServer timeout

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
* Add DB environment variables to e2e.yaml and add csrf_decorate function

* Skip sonar suggestion

* Fix rest api internal error

* Add timeout

* Update docs

* Update code

* Revert csrf update

* Add command to dump local data

* Update dump and load data

* Update rest api config and docs

* Apply check-spelling

* Use .env.e2e.example for frontend e2e tests in gh actions

* Apply rabbit's suggestions

* Migrate dump_data to django command and dump owasp, github, and slack tables only

* Apply rabbit's suggestions

* Update code

* Refactor dump_data

* Use local cache for e2e

* Remove old load_data command

* Add tests

* Skip sonar

* Apply rabbit suggestions

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>

Add backend/data/nest.sql.gz
* Add hypothesis to poetry and fuzz test the index endpoint.

* Refactor import statements in algolia_test.py for consistency

* Add the client ip address to the request META.

* Add fuzz testing for Slack event handlers and refactor algolia fuzz testing.

* Refactor fuzz tests for Slack event handlers to improve readability and structure

* Fix the poetry lock file.

* Remove fuzz testing from algolia_search unit tests

* Create a docker file for fuzz-testing, add the run commands to the MakeFile and install model_bakery for creating randomized data for all models.

* Refactor to improve quality

* Update fuzz testing setup: modify Makefile and Dockerfile, add entrypoint script for environment configuration

* Update poetry.lock to reflect dependency changes and version updates

* Create a fuzz configuration, update docker file, makefile, and the tests.

* Refactor fuzz configuration by reorganizing imports and cleaning up whitespace

* Update Dockerfile and entrypoint script to use Alpine base image and shell

* Run the server on port 8000 after the tests.

* Create a docker compose file for fuzz testing.

* Add 'graphqler' to custom dictionary

* Load data from nest.json and add graphqler to cspell dict.

* Remove model-bakery dependency from pyproject.toml and update poetry.lock

* Update graphqler command in docker compose and the healthcheck

* Update graphql command to use backend service URL in docker-compose

* Refactor docker-compose to build graphqler service from Dockerfile and add entrypoint script for fuzzing tests

* Enhance fuzz testing setup: update Dockerfile and entrypoint scripts, improve CSRF handling, and refine healthcheck command in docker-compose

* Update fuzz-test-backend command to abort on container exit

* Add fuzz testing workflow and update image build steps

* Add .env file creation step for fuzz tests in CI/CD workflow

* Add Docker Hub login step for fuzz tests in CI/CD workflow

* Refactor for the checks

* Refactor fuzz testing workflow: replace Docker Hub login with buildx setup and update docker-compose handling

* Fix fuzz tests workflow: rename docker-compose file

* Refactor fuzz-tests job.

* Add environment variables for fuzz tests configuration

* Update fuzz tests environment variables

* Fix poetry lock file

* Sort the custom-dict.

* Update content hash in poetry.lock

* Add docker cache mounts to the backend image

* Add Redis configuration

* refactor yaml

* Add docker cache mounts to graphql file

* Remove unnecessary chmod command for cache directories in Dockerfile

* Fix poetry lock file

* Add cache mounts to backend tests

* Update cache mounts in graphql image

* Update mkdir in graphql image

* Remove duplicates

* Update tests

* Rename docker compose

* Update poetry lock

* Apply sonar

* Migrate to OWASP repo

* Update docker

* Use graphqler maintainer docker image

* Add disable permissions, update docker compose, and update entrypoint

* Establish an e2e backend instance locally and in CI/CD (#2429)


* Fix running e2e backend (#2710)


* Update code for e2e

* Add runs-on

* Skip sonar and fix ci/cd

* Apply rabbit suggestion and override entrypoint in ci/cd

* Use env with csrf

* Add timeout

* Remove hypothesis and old test files

* Apply rabbit's suggestions

* Update ci/cd and makefile

* Use digest pinning with graphqler image

* Update dockerfile and fix the typeerror issue

* Apply sonar suggestion

* Apply sonar and rabbit suggestions

* Remove cache from ci/cd

* Use curl instead of wget

* Separate e2e from fuzz

* Update fuzz ci/cd

* Update CI/CD

* Run precommit

* Update code

* Update code

* Update docs, ci/cd, and apply suggestions

* Use digest pinning and parameters in workflow

* Apply sonar suggestions

* Apply rabbit suggestions

* Run migrations for fuzz testing in ci/cd

* Apply rabbit suggestions

* Fix exceptions

* Establish an e2e backend instance locally and in CI/CD (#2429)


* Fix running e2e backend (#2710)


* Add backend/data/nest.sql.gz

* Update code

* Automate data loading

* Update dump_data to avoid exceptions

* Update dump_data and automated data_loading

* Update CI/CD

* Update tests

* Add timeout for fuzz tests

* Update timeout for fuzz

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Apply rabbit's suggestions

* Update backend/Makefile

* Update make targets and docker compose

* Add volume for graphql fuzzing results and add upload artifacts in ci/cd

* Update ci/cd

* Update ci/cd

* Update ci/cd

* Update ci/cd

* Update docker compose and makefile

* Apply rabbit's suggestions

* Update dump to match the last nest.json.gz

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
* Add redis cache locally and in ci/cd

* Update env and makefile

* Update make target

* Add cache to fuzz and apply rabbit suggestions

* Update makefile target

* Add cache as dependency to the backend in docker compose

* Update file naming to match the remaining compose projects

* Update volumes

* Update compose folders

* Update docker compose, dump file and ci/cd

* Update ci/cd
* Migrate to schemathesis and add rest fuzz tests

* Apply cspell

* Update Fuzz Dockerfile context

* Update rest auth for fuzz

* Optimize Project Health Stats query and split rest and graphql tests

* Split rest and graphql tests workflows

* Update ci/cd

* Apply rabbit suggestions

* Update ci/cd

* Apply rabbit's suggestions

* Increase number of examples

* Apply rabbit's suggestions

* Apply pre-commit checks

* Update CI/CD

* Update makefile

* Update CI/CD

* Update CI/CD

* Update ci/cd

* Update ci/cd

* Update CI/CD

* Update settings.base.py and ci/cd

* Update configuration and ci/cd

* Update alphabetical order in env files

* Fix negative indexing

* Add EscapeNullCharactersMiddleware

* Update middleware, schemathesis config and add HTTP BadRequest status code to the rest api docs

* Update rest api schema

* Update tests

* Optimize recentIssues

* Add optimizations and fixes

* Update tests, ci/cd and apply rabbit suggestions

* Optimize N+1 queries

* Update tests

* Update rest schema and add a name for Fuzz Tests job in CI/CD

* Fix negative indexing

* Update project health metrics filters and pagination

* Update mentorship app, ci/cd, and entrypoint

* Add trigram index to project

* Update nest.dump

* Update entrypoint

* Apply checks

* Add QueryDepthLimiter

* Add optimizations

* Update tests

* Update CI/CD

* Add fixes

* Apply rabbit's suggestion

* Refactor docker files

* Apply cspell

* Refactor limits

* Update milestone enum and rest api endpoints

* Apply middleware suggestions

* Migrate to strawberry_django.field

* Update REDIS_AUTH_ENABLED default value

* Update queries to use strawberry_django.field

* Apply rabbit suggestions

* Update tests and apply rabbit suggestion

* Fix pagination.limit

* Add optimizations and fixes

* Update code

* Add optimizations

* Add optimizations

* Add optimizations

* Add fixes

* Add milestone index migration

* Update nest.dump

* Add optimizations and fixes

* Update snapshot query

* Update backend tests

* Update project model

* Apply rabbit suggestion

* Apply rabbit suggestions

* Apply rabbit suggestion and update tests

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
@github-actions bot added the docs, backend, frontend, docker, backend-tests, makefile, and ci labels on Jan 16, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 16, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced API error handling with standardized validation error responses across all endpoints.
    • Improved GraphQL and REST API performance through optimized query execution and database indexing.
  • Bug Fixes

    • Added null character filtering for improved security.
    • Fixed timestamp handling and date serialization in API responses.
  • Improvements

    • Optimized database queries with new indexes for faster data retrieval.
    • Enhanced GraphQL field resolvers for better Django ORM integration.


Walkthrough

Large-scale infrastructure expansion introducing end-to-end and fuzz testing capabilities: new GitHub Actions workflows orchestrate test environments, Django settings enable environment-specific configurations, GraphQL/REST resolvers migrate to strawberry_django with performance optimizations, database migrations add indexing, and docker-compose configurations define containerized test stacks alongside expanded Makefile targets and test utilities.

Changes

Cohort / File(s) Summary
CI/CD Workflows & GitHub Actions
.github/workflows/run-ci-cd.yaml, .github/workflows/run-fuzz-tests.yaml, .github/workflows/setup-backend-environment/action.yaml, .github/workflows/update-nest-test-images.yaml
Added new workflow jobs for GraphQL and REST fuzz tests; introduced a reusable backend setup action; added fuzz-backend Docker image build step with cache optimization.
Backend Settings & Configuration
backend/settings/base.py, backend/settings/e2e.py, backend/settings/fuzz.py, backend/settings/graphql.py
Introduced E2E and Fuzz configuration classes with environment flags; added BlockNullCharactersMiddleware; enabled GraphQL query depth limiting and Django ORM optimizer extension; configured Redis authentication support.
Backend Environment & Middleware
backend/apps/common/middlewares/__init__.py, backend/apps/common/middlewares/block_null_characters.py, backend/apps/common/utils.py
Added middleware to block null characters in requests; extended IP detection to handle E2E environments.
REST API Layer Enhancement
backend/apps/api/rest/v0/__init__.py, backend/apps/api/rest/v0/*.py (chapter, committee, common, event, issue, member, milestone, organization, project, release, repository, snapshot, sponsor)
Introduced global ValidationError exception handler returning 400 responses; added ValidationErrorSchema to common; extended response mappings in all endpoint modules to include BAD_REQUEST with validation error schema.
GraphQL Nodes (Strawberry-Django Migration)
backend/apps/github/api/internal/nodes/issue.py, .../milestone.py, .../organization.py, .../pull_request.py, .../release.py, .../repository.py, .../user.py, backend/apps/owasp/api/internal/nodes/board_of_directors.py, .../chapter.py, .../committee.py, .../common.py, .../member_snapshot.py, .../project.py, .../project_health_metrics.py, .../project_health_stats.py, .../repository_contributor.py, .../snapshot.py
Migrated 20+ GraphQL node classes to use @strawberry_django.field with explicit root parameters; added select_related and prefetch_related optimizations; converted resolver methods to root-based field access for improved query efficiency.
GraphQL Queries (Strawberry-Django Migration & Performance)
backend/apps/github/api/internal/queries/issue.py, .../milestone.py, .../pull_request.py, .../release.py, .../repository.py, .../user.py, backend/apps/owasp/api/internal/queries/board_of_directors.py, .../chapter.py, .../committee.py, .../event.py, .../member_snapshot.py, .../post.py, .../project.py, .../project_health_metrics.py, .../snapshot.py, .../sponsor.py
Introduced MAX_LIMIT constants for pagination safety; replaced @strawberry.field with @strawberry_django.field; replaced Subquery/OuterRef with Window/Rank functions for distinct queries; added limit clamping and validation.
Backend Data Management
backend/apps/common/management/commands/dump_data.py, backend/apps/common/management/commands/load_data.py
Added new dump_data management command with email masking and temp DB handling via pg_dump; removed deprecated load_data command.
Backend ORM Models & Filters
backend/apps/github/models/milestone.py, backend/apps/github/models/organization.py, backend/apps/github/models/repository.py, backend/apps/github/models/repository_contributor.py, backend/apps/owasp/models/chapter.py, .../committee.py, .../common.py, .../mixins/committee.py, .../project.py, .../project_health_metrics.py, .../snapshot.py, backend/apps/owasp/api/internal/filters/project_health_metrics.py
Added database indices on created_at/updated_at fields; optimized query patterns with select_related/prefetch_related; converted entity_leaders to cached_property returning list; added null-safety guards; enhanced project health metrics stats calculation.
Database Migrations
backend/apps/github/migrations/0041_milestone_github_milestone_created_at_and_more.py, backend/apps/owasp/migrations/0070_snapshot_owasp_snapshot_created_at_idx_and_more.py, .../0071_trigram_extension.py, .../0072_project_project_name_gin_idx_and_more.py
Added indices on milestone created_at/updated_at; added snapshot created_at/key-status composite indices; added trigram extension; added GinIndex on project name and leaders_raw fields for full-text search.
Docker Infrastructure
docker/backend/Dockerfile, docker/backend/Dockerfile.fuzz, docker/backend/Dockerfile.local, docker/backend/Dockerfile.test, docker-compose/e2e/compose.yaml, docker-compose/fuzz/compose.yaml, docker-compose/local/compose.yaml, docker-compose/production/compose.yaml, docker-compose/staging/compose.yaml
Removed CMD from base Dockerfile; added Dockerfile.fuzz for fuzzing environment; updated Dockerfile.local PostgreSQL client version; added comprehensive e2e and fuzz docker-compose configurations with services, health checks, and volumes; renamed local compose volumes to -e2e suffix; added entrypoint directives to production/staging.
Backend Entrypoints & Makefiles
backend/entrypoint.fuzz.sh, backend/Makefile, frontend/Makefile
Added fuzz entrypoint script with CSRF/Schemathesis setup; extended backend Makefile with 16+ new targets for e2e/fuzz environments; updated frontend Makefile to use docker-compose for e2e tests.
Backend Test Suite
backend/tests/apps/common/graphql_node_base_test.py, backend/tests/apps/common/management/commands/dump_data_test.py, backend/tests/apps/common/management/commands/load_data_test.py, backend/tests/apps/github/api/internal/nodes/issue_test.py, .../milestone_test.py, .../organization_test.py, .../pull_request_test.py, .../release_test.py, .../repository_test.py, .../user_test.py, backend/tests/apps/github/api/internal/queries/issue_test.py, .../milestone_test.py, .../pull_request_test.py, .../release_test.py, .../repository_test.py, backend/tests/apps/github/models/repository_contributor_test.py, .../repository_test.py, backend/tests/apps/owasp/api/internal/nodes/board_of_directors_test.py, .../chapter_test.py, .../committee_test.py, .../common_test.py, .../member_snapshot_test.py, .../project_health_metrics_test.py, .../project_health_stats_test.py, .../project_test.py, backend/tests/apps/owasp/api/internal/queries/member_snapshot_test.py, backend/tests/apps/owasp/models/chapter_test.py, .../committee_test.py, backend/tests/fuzz/graphql_test.py, .../rest_test.py
Added GraphQLNodeBaseTest base class for field introspection; updated 40+ GraphQL node/query tests to use field-based resolver invocation; added fuzz test files using Schemathesis; replaced Subquery mocking with annotate/Window-based approaches; added dump_data test; removed load_data test.
Backend Event API
backend/apps/api/rest/v0/event.py
Changed event date fields from datetime to str type; added ISO format resolver methods; added BAD_REQUEST validation error response mapping.
Mentorship API
backend/apps/mentorship/api/internal/queries/mentorship.py, .../module.py, .../program.py
Changed error handling from raising ObjectDoesNotExist to returning None with logger warnings for missing resources.
OWASP API Permissions
backend/apps/owasp/api/internal/permissions/project_health_metrics.py, backend/apps/owasp/api/internal/views/permissions.py
Added environment-based bypass for dashboard permission checks in E2E/Fuzz environments.
Environment Configuration Files
backend/.env.e2e.example, backend/.env.fuzz.example, backend/.env.example, frontend/.env.e2e.example
Added E2E and Fuzz environment templates with Django/database/cache/API settings; added DJANGO_REDIS_AUTH_ENABLED to base example; added frontend E2E template.
Project Configuration & Documentation
.gitignore, CONTRIBUTING.md, backend/pyproject.toml, cspell/custom-dict.txt
Updated .gitignore to include fuzzing results directory and re-export env examples; expanded CONTRIBUTING with E2E and Fuzz testing sections; added Ruff ignore for dump_data.py security rules; expanded spell check dictionary.
Frontend Application
frontend/src/server/queries/homeQueries.ts
Removed state: "all" argument from recentMilestones GraphQL query.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • PR #3122 — Introduces fuzz testing infrastructure (overlapping workflows, fuzz-backend Dockerfile, entrypoint, and test files).
  • PR #3041 — E2E/fuzz test orchestration changes (overlapping CI workflows, docker-compose configurations, and environment-specific settings).
  • PR #1509 — Strawberry/Strawberry-Django GraphQL migration (related to the widespread converter pattern of GraphQL resolvers from @strawberry.field to @strawberry_django.field across 20+ node classes and queries).

Suggested labels

backend, ci, docker, testing, backend-tests, frontend-tests, graphql, infrastructure

Suggested reviewers

  • kasya
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 inconclusive

❌ Failed checks (1 inconclusive)
  • Description check: ❓ Inconclusive. The description is very generic and vague, using only 'Merge e2e/fuzzing feature branch' without conveying meaningful information about what the changeset accomplishes. Resolution: provide a more detailed description explaining the key features being added, such as e2e testing infrastructure, fuzzing capabilities, Docker setup, and CI/CD integrations.

✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'Implement fuzzing and e2e PoC' clearly describes the main changes in the changeset, which focuses on adding end-to-end testing and fuzzing infrastructure.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 94.44%, which is sufficient; the required threshold is 80.00%.



@arkid15r changed the base branch from main to test-main-rebase-merge January 16, 2026 20:28
@arkid15r marked this pull request as ready for review January 16, 2026 20:28
@arkid15r requested a review from kasya as a code owner January 16, 2026 20:28
@arkid15r merged commit e10ddd3 into test-main-rebase-merge Jan 16, 2026
54 of 58 checks passed
@arkid15r deleted the feature/e2e-backend branch January 16, 2026 20:29
@arkid15r restored the feature/e2e-backend branch January 16, 2026 20:29

@coderabbitai bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
backend/apps/owasp/models/chapter.py (1)

96-105: Add maxsize limit to @lru_cache decorator or implement cache invalidation.

The @lru_cache decorator on active_chapters_count() is unbounded (no maxsize parameter), which can cause memory accumulation. Additionally, changes to chapter data (new chapters, status changes, coordinates) won't be reflected until process restart since there's no cache invalidation logic.

Similar unbounded @lru_cache patterns exist in active_projects_count(), active_committees_count(), open_issues_count(), and functions in slack/utils.py. These should be addressed consistently across the codebase.

Choose one approach (option 3 is sketched after this list):

  1. Add maxsize parameter: @lru_cache(maxsize=128) (or similar limit)
  2. Implement cache invalidation using Django signals when chapters are modified
  3. Use Django's cache framework with TTL instead of lru_cache
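
A minimal sketch of option 3, assuming the filter arguments shown in the chapter tests in this PR; the cache key, 15-minute TTL, and standalone-function form are illustrative (the real code would keep this as a class method):

from django.core.cache import cache

from apps.owasp.models.chapter import Chapter  # import path assumed


def active_chapters_count() -> int:
    """Count active chapters via Django's cache framework instead of lru_cache."""
    count = cache.get("owasp:active_chapters_count")
    if count is None:
        count = Chapter.objects.filter(
            is_active=True,
            latitude__isnull=False,
            longitude__isnull=False,
            owasp_repository__is_empty=False,
        ).count()
        # A TTL bounds both memory use and staleness, addressing both concerns above.
        cache.set("owasp:active_chapters_count", count, timeout=60 * 15)
    return count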
backend/apps/owasp/api/internal/queries/project.py (1)

93-104: Critical bug: Empty string matches all projects.

If github_user.name is None or empty, github_user.name or "" evaluates to "". The condition Q(leaders_raw__icontains="") will match every project with a non-null leaders_raw field, since an empty string is contained in any string.

This causes the method to incorrectly return True for users who are not project leaders.

🐛 Proposed fix
     def is_project_leader(self, info: strawberry.Info, login: str) -> bool:
         """Check if a GitHub login or name is listed as a project leader."""
         try:
             github_user = GithubUser.objects.get(login=login)
         except GithubUser.DoesNotExist:
             return False

-        return Project.objects.filter(
-            Q(leaders_raw__icontains=github_user.login)
-            | Q(leaders_raw__icontains=(github_user.name or ""))
-        ).exists()
+        q_filter = Q(leaders_raw__icontains=github_user.login)
+        if github_user.name:
+            q_filter |= Q(leaders_raw__icontains=github_user.name)
+        return Project.objects.filter(q_filter).exists()

Additionally, the info parameter appears unused—consider removing it if not needed.

backend/apps/api/rest/v0/committee.py (1)

81-85: Function name and docstring mismatch.

The function is named get_chapter with docstring """Get chapter.""" but it handles committee retrieval. This appears to be a copy-paste error from the chapter endpoint.

Proposed fix
 @decorate_view(cache_response())
-def get_chapter(
+def get_committee(
     request: HttpRequest,
     committee_id: str = Path(example="project"),
 ) -> CommitteeDetail | CommitteeError:
-    """Get chapter."""
+    """Get committee."""
     if committee := CommitteeModel.active_committees.filter(
docker-compose/local/compose.yaml (1)

26-26: Bug: Environment variable copy-paste error causes incorrect Redis password.

DJANGO_REDIS_PASSWORD defaults to ${DJANGO_REDIS_HOST:-nest-cache} instead of the password variable. This will cause Redis authentication failures.

🐛 Proposed fix
-      DJANGO_REDIS_PASSWORD: ${DJANGO_REDIS_HOST:-nest-cache-password}
+      DJANGO_REDIS_PASSWORD: ${DJANGO_REDIS_PASSWORD:-nest-cache-password}
backend/apps/owasp/models/project_health_metrics.py (1)

191-245: Month-only grouping merges data across years, distorting the trend.

ExtractMonth + values().distinct() collapses same-month values across different years. When the 365-day window spans the same month in different years (e.g., Jan 2025 and Jan 2026), both years' data aggregates together under a single month entry, blending the average scores.

Group by both year and month instead. Add ExtractYear to imports and group by ("year", "month"):

Suggested fix
-        monthly_overall_metrics = (
-            ProjectHealthMetrics.objects.annotate(month=ExtractMonth("nest_created_at"))
-            .filter(
-                nest_created_at__gte=timezone.now() - timezone.timedelta(days=365)
-            )  # Last year data
-            .order_by("month")
-            .values("month")
-            .distinct()
-            .annotate(
-                score=models.Avg("score"),
-            )
-        )
+        monthly_overall_metrics = (
+            ProjectHealthMetrics.objects
+            .filter(nest_created_at__gte=timezone.now() - timezone.timedelta(days=365))
+            .annotate(
+                year=ExtractYear("nest_created_at"),
+                month=ExtractMonth("nest_created_at"),
+            )
+            .values("year", "month")
+            .order_by("year", "month")
+            .annotate(score=Coalesce(models.Avg("score"), 0.0))
+        )
@@
-            months.append(entry["month"])
+            months.append(entry["year"] * 100 + entry["month"])

Note: The proposed change formats months as YYYYMM (e.g., 202501 for January 2025). If the client expects only month numbers (1–12), add a parallel years list or format as year-month strings instead.

🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 50-52: Remove the duplicate .gitignore entry for
".cursor/rules/snyk_rules.mdc" so only a single line containing that path
remains in the file; locate both occurrences of the string
".cursor/rules/snyk_rules.mdc" in .gitignore and delete the redundant one,
leaving the original entry intact.

In `@backend/.env.example`:
- Around line 16-18: The example .env currently sets
DJANGO_REDIS_AUTH_ENABLED=True while leaving DJANGO_REDIS_PASSWORD=None which
will break connections for users copying it; change the example to either set
DJANGO_REDIS_AUTH_ENABLED=False by default or provide a clear placeholder for
DJANGO_REDIS_PASSWORD (e.g., an explicit placeholder string like
"your_redis_password_here") and update the comment/README to instruct users to
set a real password when enabling auth so the variables
DJANGO_REDIS_AUTH_ENABLED and DJANGO_REDIS_PASSWORD remain consistent.

In `@backend/.env.fuzz.example`:
- Around line 1-24: Reorder the environment variables in this example to satisfy
dotenv-linter UnorderedKey warnings (e.g., sort keys alphabetically such that
DJANGO_ALGOLIA_APPLICATION_ID... through GITHUB_TOKEN are in proper order) and
remove the unnecessary double quotes around DJANGO_PUBLIC_IP_ADDRESS so it reads
DJANGO_PUBLIC_IP_ADDRESS=127.0.0.1; ensure all other entries use plain KEY=VALUE
syntax (keep None for unset values and preserve existing values like
DJANGO_DB_HOST=db, DJANGO_REDIS_PASSWORD=nest-fuzz-cache-password, etc.).

In `@backend/apps/common/management/commands/dump_data.py`:
- Around line 102-106: The return type of _remove_emails is incorrect: it builds
and returns psycopg2.sql.Composed objects, not strings; update the type hint on
def _remove_emails(self, tables: list[str]) -> list[str] to reflect the actual
type (e.g., -> list[sql.Composed] or Sequence[sql.Composed]) and ensure the
psycopg2.sql module is imported/aliased as sql in the file so the annotation
resolves correctly.
- Around line 87-93: The cleanup finally block currently catches
CalledProcessError which `_execute_sql` will never raise (it uses
psycopg2.connect); change the exception handler to catch psycopg2 exceptions
(e.g., psycopg2.Error or psycopg2.OperationalError) when attempting to drop
`temp_db`, and ensure `psycopg2` (or the specific Error class) is imported so
the except clause properly handles DB errors instead of `CalledProcessError`.
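
A combined sketch of both dump_data fixes above (corrected return type plus psycopg2-aware cleanup); the email column name, SQL text, and _drop_temp_db helper are illustrative:

import psycopg2
from django.core.management.base import BaseCommand
from psycopg2 import sql


class Command(BaseCommand):
    def _remove_emails(self, tables: list[str]) -> list[sql.Composed]:
        """Build masking statements; the return type is Composed SQL, not str."""
        return [
            sql.SQL("UPDATE {} SET email = '' WHERE email IS NOT NULL;").format(
                sql.Identifier(table)
            )
            for table in tables
        ]

    def _drop_temp_db(self, temp_db: str) -> None:
        """_execute_sql speaks psycopg2, so catch psycopg2.Error, not CalledProcessError."""
        try:
            self._execute_sql(
                "postgres",
                [sql.SQL("DROP DATABASE IF EXISTS {};").format(sql.Identifier(temp_db))],
            )
        except psycopg2.Error as exc:
            self.stderr.write(f"Failed to drop {temp_db}: {exc}")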

In `@backend/apps/common/middlewares/block_null_characters.py`:
- Around line 20-40: In the __call__ middleware method the body null-byte check
is gated by content_length (variable content_length from request.META), which
lets chunked requests bypass it; remove the CONTENT_LENGTH conditional and
always inspect request.body directly (check for b"\x00" and b"\\u0000") and, if
found, log the same message and return the JsonResponse/HTTPStatus.BAD_REQUEST
response as done for URL/parameter nulls so chunked bodies are properly blocked.
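
A sketch of the adjusted middleware under that guidance; the URL and query-string checks from the original are elided, and the response message text is assumed:

import logging
from http import HTTPStatus

from django.http import JsonResponse

logger = logging.getLogger(__name__)


class BlockNullCharactersMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Inspect the body unconditionally: chunked requests carry no CONTENT_LENGTH
        # header, so gating on it would let them bypass the check.
        if b"\x00" in request.body or b"\\u0000" in request.body:
            logger.warning("Blocked request containing null characters")
            return JsonResponse(
                {"error": "Null characters are not allowed."},  # message text assumed
                status=HTTPStatus.BAD_REQUEST,
            )
        return self.get_response(request)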

In `@backend/apps/github/api/internal/nodes/release.py`:
- Around line 34-43: Remove the prefetch_related hint from the
`@strawberry_django.field` decorator on project_name (leave
select_related=["repository"]) because project_name returns a computed string,
not a relationship; if you need query optimization, perform any prefetching
inside the resolver (e.g., fetch the Release with repository__project when
constructing root before calling project_name) or rely on
DjangoOptimizerExtension to optimize automatically, and keep the resolver
signature project_name(self, root: Release) -> str | None and its use of
OWASP_ORGANIZATION_NAME unchanged.

In `@backend/apps/github/api/internal/queries/issue.py`:
- Around line 18-38: The select_related list in the strawberry_django.field
decorator includes reverse relations through user_badges which cannot be used
with select_related; move the two entries "author__user_badges__badge" and
"milestone__author__user_badges__badge" out of the select_related argument and
add them to the prefetch_related list instead so that the decorator's
select_related only contains forward FK/OneToOne paths and prefetch_related
handles the reverse user_badges relations.
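
A sketch of the resulting decorator split; only the two relocated paths come from this comment, while the remaining paths, resolver body, and imports are assumptions:

import strawberry
import strawberry_django

from apps.github.api.internal.nodes.issue import IssueNode  # path assumed
from apps.github.models.issue import Issue  # path assumed


@strawberry.type
class IssueQuery:
    @strawberry_django.field(
        select_related=[
            "author__owasp_profile",  # forward FK/OneToOne paths may be joined
            "milestone__author__owasp_profile",
            "repository__organization",
        ],
        prefetch_related=[
            "author__user_badges__badge",  # reverse relations must be prefetched
            "milestone__author__user_badges__badge",
        ],
    )
    def recent_issues(self, limit: int = 15) -> list[IssueNode]:
        """Resolve recent issues."""
        return list(Issue.objects.order_by("-created_at")[:limit])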
- Around line 72-83: The current pattern annotating with Window/Rank and then
filtering (queryset.annotate(... rank=Window(...)).filter(rank=1)) is
unreliable; change it to a subquery pattern: build an inner_qs by annotating the
same queryset with rank using Window(expression=Rank(),
partition_by=[F("author_id")], order_by=F("created_at").desc()), .filter(rank=1)
and .values('pk'), then set the outer queryset to
filter(pk__in=Subquery(inner_qs)).order_by("-created_at") so the rank filter
happens on the subquery (use Subquery, F, Window, Rank and pk__in to locate
symbols).
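
A sketch of that subquery pattern; the model import is assumed, and filtering rank=1 on the inner queryset relies on Django 4.2+ support for filtering against window annotations:

from django.db.models import F, Subquery, Window
from django.db.models.functions import Rank

from apps.github.models.issue import Issue  # path assumed


def distinct_recent_issues(limit: int = 15):
    """Keep each author's most recent issue by ranking inside a subquery."""
    inner_qs = (
        Issue.objects.annotate(
            rank=Window(
                expression=Rank(),
                partition_by=[F("author_id")],
                order_by=F("created_at").desc(),
            )
        )
        .filter(rank=1)
        .values("pk")
    )
    return Issue.objects.filter(pk__in=Subquery(inner_qs)).order_by("-created_at")[:limit]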

In `@backend/apps/github/api/internal/queries/release.py`:
- Around line 61-74: The window partition currently uses only F("author_id") and
Rank(), which groups releases across repositories and can return ties; update
the annotate call in this queryset to partition by both F("author_id") and
F("repository_id") and make the ordering deterministic by using a secondary
tie-breaker (e.g. order_by=(F("published_at").desc(), F("id").desc()));
optionally replace Rank() with RowNumber() to guarantee a single row per
partition and then filter row_number=1 instead of rank=1 so unique releases are
selected per author+repository deterministically.
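
A sketch of the deterministic variant with RowNumber; the model import and limit handling are illustrative, and filtering on the window annotation again assumes Django 4.2+:

from django.db.models import F, Window
from django.db.models.functions import RowNumber

from apps.github.models.release import Release  # path assumed


def distinct_recent_releases(limit: int = 15):
    """Select one release per author and repository with a stable tie-breaker."""
    return (
        Release.objects.annotate(
            row_number=Window(
                expression=RowNumber(),  # exactly one row per partition, unlike Rank()
                partition_by=[F("author_id"), F("repository_id")],
                order_by=(F("published_at").desc(), F("id").desc()),
            )
        )
        .filter(row_number=1)
        .order_by("-published_at")[:limit]
    )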

In `@backend/apps/github/api/internal/queries/repository.py`:
- Around line 40-42: The decorator on the Strawberry field is using
select_related with a reverse relation path ("owner__user_badges__badge") which
will raise a Django FieldError; update the field optimization by removing
"owner__user_badges__badge" from the select_related list and instead use
prefetch_related for that reverse relation (or perform the prefetch inside the
resolver) so that owner and its forward OneToOne/ForeignKey targets remain in
select_related while owner__user_badges__badge is fetched via prefetch_related;
locate the strawberry_django.field decorator and adjust the query hints
accordingly.

In `@backend/apps/github/api/internal/queries/user.py`:
- Line 40: The decorator on the field currently uses
select_related=["owasp_profile", "user_badges__badge"], but user_badges__badge
traverses a reverse relation and must use prefetch_related (or be left to
DjangoOptimizerExtension); remove or refactor the decorator hints: either delete
the select_related argument on the `@strawberry_django.field` decorator (letting
DjangoOptimizerExtension handle optimizations) or replace the decorator
optimization with explicit query optimization inside the resolver (use
select_related("owasp_profile") and prefetch_related("user_badges__badge")
within the resolver/query method), referencing the decorated field where
`@strawberry_django.field`(...) appears to locate the change.

In `@backend/apps/owasp/api/internal/queries/member_snapshot.py`:
- Around line 48-51: The decorator on the Strawberry field currently lists
"github_user__user_badges__badge" under select_related but it is a reverse FK
(User.user_badges) and must be moved to prefetch_related; update the
`@strawberry_django.field` call so select_related only contains
"github_user__owasp_profile" and add "github_user__user_badges__badge" to the
prefetch_related list for the same field (the method in member_snapshot.py that
has this decorator).

In `@backend/apps/owasp/api/internal/queries/project.py`:
- Around line 37-49: The select_related list in the `@strawberry_django.field`
decorator incorrectly includes the reverse FK path
"owasp_repository__owner__user_badges__badge"; move that entry out of
select_related and add it to the prefetch_related list instead (so the
decorator's select_related only contains forward FK/OneToOne paths like
"owasp_repository__organization" and prefetch_related includes
"owasp_repository__owner__user_badges__badge" alongside existing entries such as
"entity_leaders__member"), ensuring Django uses prefetch for the reverse
user_badges relationship.

In `@backend/apps/owasp/models/project_health_metrics.py`:
- Around line 149-152: The cached_property project_requirements currently calls
ProjectHealthRequirements.objects.get(level=self.project.level) which will raise
DoesNotExist instead of returning None; change it to catch the model's
DoesNotExist (ProjectHealthRequirements.DoesNotExist) around the .get call and
return None when missing so the method's Optional[ProjectHealthRequirements]
contract is honored and downstream guards work as intended.
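
A sketch of the guarded property; the class scaffolding and import path are illustrative:

from functools import cached_property

from apps.owasp.models.project_health_requirements import (  # path assumed
    ProjectHealthRequirements,
)


class ProjectHealthMetrics:  # the real class is a Django model
    @cached_property
    def project_requirements(self) -> ProjectHealthRequirements | None:
        """Return the level requirements, or None when no matching row exists."""
        try:
            return ProjectHealthRequirements.objects.get(level=self.project.level)
        except ProjectHealthRequirements.DoesNotExist:
            return None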

In `@backend/tests/apps/github/api/internal/queries/issue_test.py`:
- Around line 64-76: The test test_recent_issues_limit should also assert that
the queryset slice was applied; after calling
IssueQuery().recent_issues(limit=1) add an assertion that
mock_queryset.__getitem__ was called with the expected slice (e.g.,
mock_queryset.__getitem__.assert_called_once_with(slice(None, 1, None))) so the
limit parameter is actually verified on the queryset returned by
Issue.objects.order_by.

In `@backend/tests/fuzz/graphql_test.py`:
- Around line 10-25: The module currently raises ValueError at import when
BASE_URL or CSRF_TOKEN are missing; replace that with a module-level pytest.skip
to avoid breaking test collection: import pytest, check BASE_URL and CSRF_TOKEN
(the same checks using BASE_URL := os.getenv("BASE_URL") and CSRF_TOKEN :=
os.getenv("CSRF_TOKEN")), and if either is missing call pytest.skip("BASE_URL
and CSRF_TOKEN must be set in the environment.", allow_module_level=True); then
proceed to define HEADERS and schema (schemathesis.graphql.from_url) as before
so tests are only collected when env vars exist.
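
A sketch of the collection-safe module prologue; the GraphQL path and CSRF header name are assumptions:

import os

import pytest
import schemathesis

BASE_URL = os.getenv("BASE_URL")
CSRF_TOKEN = os.getenv("CSRF_TOKEN")

if not BASE_URL or not CSRF_TOKEN:
    # Skip the whole module instead of raising, so test collection never fails.
    pytest.skip(
        "BASE_URL and CSRF_TOKEN must be set in the environment.",
        allow_module_level=True,
    )

HEADERS = {"X-CSRFToken": CSRF_TOKEN}  # header name assumed
schema = schemathesis.graphql.from_url(f"{BASE_URL}/graphql/", headers=HEADERS)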

In `@docker-compose/e2e/compose.yaml`:
- Around line 22-30: Change the healthcheck to call a dedicated readiness
endpoint rather than the admin URL: add a lightweight Django view (e.g.,
health_view or health_check) that returns HTTP 200 OK and map it to a simple URL
path like /health/ in your URLconf (e.g., include path('health/', health_view)
in urls.py), then update the docker-compose healthcheck test to hit
http://backend:9000/health/ (using the same wget/curl pattern used today) and
keep the existing interval/retries/timeouts.
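
A sketch of such a readiness view, after which the compose healthcheck would target http://backend:9000/health/ with the existing wget/curl pattern; names are illustrative:

# views.py
from django.http import HttpResponse


def health_view(request):
    """Lightweight readiness probe that skips admin middleware and templates."""
    return HttpResponse("OK")


# urls.py
from django.urls import path

urlpatterns = [
    path("health/", health_view),
    # ...existing routes
]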

In `@docker/backend/Dockerfile.test`:
- Around line 37-39: The Dockerfile currently copies test files piecemeal (COPY
tests/apps tests/apps and COPY tests/__init__.py and COPY tests/conftest.py)
which omits tests/fuzz/; replace those granular COPY lines with a single
recursive copy that copies the entire tests directory (e.g., using COPY tests
tests) so graphql_test.py, rest_test.py and their __init__.py under tests/fuzz
are included; update/remove the three existing COPY lines and verify subsequent
steps reference the same target path.
🧹 Nitpick comments (48)
backend/apps/owasp/api/internal/filters/project_health_metrics.py (1)

20-22: Redundant enum conversion.

Since value is now typed as ProjectLevel, Strawberry will handle the enum conversion automatically. The ProjectLevel(value) call on line 22 is redundant—you can use value directly.

Suggested simplification
     def level(self, value: ProjectLevel, prefix: str):
         """Filter by project level."""
-        return Q(project__level=ProjectLevel(value)) if value else Q()
+        return Q(project__level=value) if value else Q()
backend/apps/mentorship/api/internal/queries/program.py (1)

13-13: Unused constant PAGE_SIZE.

PAGE_SIZE = 25 is defined but not used in this file. The my_programs method uses limit: int = 24 as its default instead. Consider removing this constant or using it consistently.

backend/tests/apps/owasp/models/chapter_test.py (1)

115-124: Test may fail or give false positives due to @lru_cache.

The active_chapters_count method uses @lru_cache, which means the cached result from a previous call persists across test runs in the same process. This test might:

  • Return a stale cached value instead of hitting the mock
  • Cause flaky behavior depending on test execution order

Clear the cache before testing:

🧪 Proposed fix to clear lru_cache
     @pytest.mark.parametrize(
         ("value"),
         [
             42,
         ],
     )
     def test_active_chapters_count(self, value):
+        Chapter.active_chapters_count.cache_clear()
         with patch.object(Chapter.objects, "filter") as mock_filter:
             mock_filter.return_value.count.return_value = value
             assert Chapter.active_chapters_count() == value
             mock_filter.assert_called_once_with(
                 is_active=True,
                 latitude__isnull=False,
                 longitude__isnull=False,
                 owasp_repository__is_empty=False,
             )
backend/pyproject.toml (1)

105-108: Prefer inline # noqa: S603,S607 with safety rationale instead of per-file ignores.

The subprocess usage in dump_data.py is safe: the run() call uses a list form (implicit shell=False at line 82), and pg_dump is a standard utility. However, per-file ignores can silently mask future unsafe subprocess calls. Add inline comments like # noqa: S603,S607 — pg_dump is a standard utility in PATH with controlled args on line 82 for clarity and to protect against future changes.
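
For example (the database name and pg_dump arguments are illustrative):

import subprocess

temp_db = "temp_nest"  # illustrative; the command derives this from settings

# List form implies shell=False (S603), and pg_dump is a standard utility (S607)
# whose arguments are fully controlled by this command.
subprocess.run(  # noqa: S603, S607
    ["pg_dump", "--format=custom", f"--dbname={temp_db}", "--file=/tmp/nest.dump"],
    check=True,
)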

backend/settings/graphql.py (1)

46-50: Consider making max query depth configurable via settings.

Hard‑coding max_depth=5 makes tuning across environments harder (e.g., e2e/fuzz). A settings fallback keeps defaults while allowing overrides.

♻️ Suggested change (configurable max depth)
+from django.conf import settings
 ...
 schema = strawberry.Schema(
     mutation=Mutation,
     query=Query,
-    extensions=[CacheExtension, QueryDepthLimiter(max_depth=5), DjangoOptimizerExtension()],
+    extensions=[
+        CacheExtension,
+        QueryDepthLimiter(max_depth=getattr(settings, "GRAPHQL_MAX_DEPTH", 5)),
+        DjangoOptimizerExtension(),
+    ],
 )
backend/apps/owasp/api/internal/permissions/project_health_metrics.py (1)

12-21: Consider reusing the dashboard permission helper to avoid drift.

This bypass logic now exists in two places. If layering allows, centralize it (e.g., reuse the helper from backend/apps/owasp/api/internal/views/permissions.py) so any future hardening stays consistent.

frontend/.env.e2e.example (1)

1-16: Address dotenv-linter ordering warnings (optional).
Reordering NEXTAUTH_* above NEXT_PUBLIC_* will silence the linter warnings.

♻️ Proposed reorder
+NEXTAUTH_SECRET=<your-nextauth-secret>
+NEXTAUTH_URL=http://localhost:3000/
 NEXT_PUBLIC_API_URL=http://localhost:9000/
 NEXT_PUBLIC_CSRF_URL=http://localhost:9000/csrf/
 NEXT_PUBLIC_ENVIRONMENT=local
 NEXT_PUBLIC_GRAPHQL_URL=http://localhost:9000/graphql/
 NEXT_PUBLIC_GTM_ID=
 NEXT_PUBLIC_IDX_URL=http://localhost:9000/idx/
 NEXT_PUBLIC_IS_PROJECT_HEALTH_ENABLED=true
 NEXT_PUBLIC_RELEASE_VERSION=
 NEXT_PUBLIC_SENTRY_DSN=
 NEXT_SERVER_CSRF_URL=http://localhost:9000/csrf/
 NEXT_SERVER_DISABLE_SSR=false
 NEXT_SERVER_GITHUB_CLIENT_ID=your-github-client-id
 NEXT_SERVER_GITHUB_CLIENT_SECRET=your-github-client-secret
 NEXT_SERVER_GRAPHQL_URL=http://localhost:9000/graphql/
-NEXTAUTH_SECRET=<your-nextauth-secret>
-NEXTAUTH_URL=http://localhost:3000/
backend/tests/fuzz/rest_test.py (1)

12-23: Module-level schema loading may cause import-time failures.

Loading the OpenAPI schema at module level means any import of this file (e.g., during test collection) will block on the HTTP request. If the REST API is unavailable, this will fail during collection rather than at test execution time.

Consider whether this is intentional for your CI setup, or if lazy loading would be preferable.
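
If lazy loading is preferable, Schemathesis 3.x can defer the schema fetch to test run time via a pytest fixture; the URL is illustrative:

import pytest
import schemathesis


@pytest.fixture
def api_schema():
    # Fetched when the first test runs, so collection succeeds even if the API is down.
    return schemathesis.from_uri("http://localhost:9000/api/v0/openapi.json")


schema = schemathesis.from_pytest_fixture("api_schema")


@schema.parametrize()
def test_api(case):
    case.call_and_validate()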

backend/apps/owasp/api/internal/queries/post.py (1)

16-19: Docstring is slightly outdated.

The docstring says "Return the 5 most recent posts" but the method accepts a configurable limit parameter with a default of 5 and now caps at MAX_LIMIT.

📝 Suggested docstring update
     @strawberry_django.field
     def recent_posts(self, limit: int = 5) -> list[PostNode]:
-        """Return the 5 most recent posts."""
+        """Return the most recent posts, up to the specified limit."""
         return Post.recent_posts()[:limit] if (limit := min(limit, MAX_LIMIT)) > 0 else []
backend/apps/github/api/internal/queries/release.py (1)

18-24: Verify author__user_badges__badge is valid for select_related.

If user_badges is M2M or a reverse FK, select_related will raise a FieldError and won’t prefetch. Consider moving it to prefetch_related if that’s the case.

🔧 Suggested adjustment if the relation is M2M/reverse FK
 @strawberry_django.field(
     select_related=[
         "author__owasp_profile",
-        "author__user_badges__badge",
         "repository__organization",
-    ]
+    ],
+    prefetch_related=["author__user_badges__badge"],
 )
backend/apps/github/api/internal/queries/issue.py (1)

85-85: Consider splitting the walrus operator for clarity.

The expression queryset[:limit] if (limit := min(limit, MAX_LIMIT)) > 0 else [] combines limit clamping and conditional return in one line. While functional, splitting improves readability:

limit = min(limit, MAX_LIMIT)
return list(queryset[:limit]) if limit > 0 else []

Note: Also consider wrapping in list() to materialize the queryset, ensuring the optimizer hints are applied before returning.

backend/tests/apps/github/api/internal/queries/issue_test.py (1)

11-179: Consider adding edge case test for MAX_LIMIT enforcement.

The implementation caps limit at MAX_LIMIT = 1000. Consider adding a test to verify this boundary behavior:

def test_recent_issues_limit_capped_at_max(self, mock_objects, mock_issue):
    """Test that limit is capped at MAX_LIMIT."""
    mock_queryset = MagicMock()
    mock_queryset.order_by.return_value = mock_queryset
    mock_queryset.filter.return_value = mock_queryset
    mock_queryset.__getitem__.return_value = [mock_issue]
    mock_objects.order_by.return_value = mock_queryset

    result = IssueQuery().recent_issues(limit=2000)

    mock_queryset.__getitem__.assert_called_with(slice(None, 1000))
backend/apps/github/api/internal/queries/milestone.py (1)

87-91: Consider extracting the limit calculation for clarity.

The walrus operator with min() in the conditional is functional but dense. Consider separating the limit capping for improved readability.

♻️ Optional refactor for readability
-        return (
-            milestones.order_by("-created_at")[:limit]
-            if (limit := min(limit, MAX_LIMIT)) > 0
-            else []
-        )
+        limit = min(limit, MAX_LIMIT)
+        if limit <= 0:
+            return []
+        return list(milestones.order_by("-created_at")[:limit])
docker-compose/e2e/compose.yaml (2)

48-53: Consider adding error handling for pg_restore.

If the dump file doesn't exist or pg_restore fails, the data-loader will exit silently. Consider adding error checking to fail fast and provide clearer diagnostics.

♻️ Proposed fix with error handling
     command: >
       sh -c '
         echo "Loading data from dump..." &&
+        if [ ! -f /data/nest.dump ]; then
+          echo "Error: /data/nest.dump not found" && exit 1
+        fi &&
-        pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB /data/nest.dump &&
+        pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB /data/nest.dump || exit 1 &&
         echo "Data loading completed."
       '

82-86: Redis healthcheck exposes password in process list.

The redis-cli -a $$REDIS_PASSWORD ping command will expose the password in the process list. While acceptable in a test environment, consider using REDISCLI_AUTH environment variable for production-like setups.

♻️ Alternative using environment variable
     healthcheck:
       interval: 5s
       retries: 5
-      test: [CMD, redis-cli, -a, $$REDIS_PASSWORD, ping]
+      test: >
+        sh -c 'REDISCLI_AUTH=$$REDIS_PASSWORD redis-cli ping'
       timeout: 5s
backend/apps/common/management/commands/dump_data.py (2)

53-55: SQL injection risk in dynamic database name construction.

The temp_db variable is constructed by string interpolation and used directly in SQL statements without parameterization. While DB_NAME comes from Django settings (not user input), this pattern is fragile and could become unsafe if the source changes. Additionally, CREATE DATABASE with TEMPLATE doesn't support parameterized queries in psycopg2, but using sql.Identifier would be safer.

♻️ Suggested safer approach using sql.Identifier
+from psycopg2 import sql as psql
+
 temp_db = f"temp_{DB_NAME}"
 try:
-    self._execute_sql("postgres", [f"CREATE DATABASE {temp_db} TEMPLATE {DB_NAME};"])
+    create_stmt = psql.SQL("CREATE DATABASE {} TEMPLATE {};").format(
+        psql.Identifier(temp_db),
+        psql.Identifier(DB_NAME),
+    )
+    self._execute_sql("postgres", [create_stmt])

Similarly for the DROP statement on line 89.


108-128: Type hint and resource management improvements.

  1. sql_queries parameter accepts both str and sql.Composed objects but is typed as list[str].
  2. The connection should use a context manager to ensure proper cleanup even if exceptions occur between connect() and close().
♻️ Proposed improvements
+from typing import Union
+
 def _execute_sql(
     self,
     dbname: str,
-    sql_queries: list[str],
+    sql_queries: list[Union[str, sql.Composed]],
 ):
-    connection = connect(
+    with connect(
         dbname=dbname,
         user=DB_USER,
         password=DB_PASSWORD,
         host=DB_HOST,
         port=DB_PORT,
-    )
-    connection.autocommit = True
-
-    rows = []
-    with connection.cursor() as cursor:
-        for sql_query in sql_queries:
-            cursor.execute(sql_query)
-            with contextlib.suppress(ProgrammingError):
-                rows.extend(cursor.fetchall())
-    connection.close()
-
-    return rows
+    ) as connection:
+        connection.autocommit = True
+        rows = []
+        with connection.cursor() as cursor:
+            for sql_query in sql_queries:
+                cursor.execute(sql_query)
+                with contextlib.suppress(ProgrammingError):
+                    rows.extend(cursor.fetchall())
+        return rows
backend/tests/apps/common/management/commands/dump_data_test.py (1)

18-92: Test covers happy path well; consider adding error-handling tests.

The test thoroughly verifies the main flow: temp DB creation, email masking, pg_dump invocation, and cleanup. However, there's no coverage for:

  1. CalledProcessError from pg_dump (line 84-86 in command)
  2. Cleanup failure scenario (line 88-93 in command)

These edge cases would improve test robustness.
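
A sketch of the first case; the patch targets assume the command imports subprocess.run as run and re-raises after its finally-block cleanup:

from subprocess import CalledProcessError
from unittest.mock import patch

import pytest
from django.core.management import call_command

COMMAND_MODULE = "apps.common.management.commands.dump_data"  # path assumed


def test_dump_data_pg_dump_failure_still_cleans_up():
    """A failing pg_dump should still trigger the temp-database cleanup."""
    with (
        patch(f"{COMMAND_MODULE}.run", side_effect=CalledProcessError(1, "pg_dump")),
        patch(f"{COMMAND_MODULE}.Command._execute_sql") as mock_execute_sql,
    ):
        with pytest.raises(CalledProcessError):
            call_command("dump_data")

    # Cleanup SQL (CREATE earlier, DROP in the finally block) was still executed.
    assert mock_execute_sql.called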

backend/apps/owasp/api/internal/nodes/snapshot.py (2)

28-28: Consider adding prefetch_related for new_chapters.

Unlike other relation fields (new_issues, new_projects, etc.), new_chapters doesn't specify prefetch_related, which could lead to N+1 queries when resolving this field.

♻️ Proposed fix
-    new_chapters: list[ChapterNode] = strawberry_django.field()
+    new_chapters: list[ChapterNode] = strawberry_django.field(prefetch_related=["new_chapters"])

35-53: Use Prefetch objects with custom querysets to preserve optimization when ordering.

Calling .order_by() on a prefetched relation bypasses Django's prefetch cache and triggers additional database queries. The relation should be prefetched with ordering already applied via Prefetch objects:

♻️ Apply ordering during prefetch, not after
from django.db.models import Prefetch

@strawberry_django.field(
    prefetch_related=[
        Prefetch("new_issues", queryset=Issue.objects.order_by("-created_at")[:RECENT_ISSUES_LIMIT])
    ]
)
def new_issues(self, root: Snapshot) -> list[IssueNode]:
    """Resolve new issues."""
    return list(root.new_issues.all())

This applies ordering during prefetch, avoiding the re-query that occurs when .order_by() is called on the cached results.

backend/apps/owasp/api/internal/queries/snapshot.py (1)

27-45: Well-structured prefetch optimization and limit capping.

The prefetch_related configuration aligns with the SnapshotNode field resolvers, which should reduce N+1 queries. The limit capping with MAX_LIMIT is a sensible safeguard.

The walrus operator on line 44 is compact but slightly obscures the logic. Consider extracting the limit capping for clarity:

🔧 Optional refactor for readability
     def snapshots(self, limit: int = 12) -> list[SnapshotNode]:
         """Resolve snapshots."""
+        limit = min(limit, MAX_LIMIT)
+        if limit <= 0:
+            return []
         return (
             Snapshot.objects.filter(
                 status=Snapshot.Status.COMPLETED,
             ).order_by(
                 "-created_at",
-            )[:limit]
-            if (limit := min(limit, MAX_LIMIT)) > 0
-            else []
+            )[:limit]
         )
backend/apps/owasp/api/internal/nodes/common.py (1)

14-42: Consider adding type hints for the root parameter.

All resolver methods accept an untyped root parameter. While this works at runtime, adding type hints improves IDE support and static analysis. Since GenericEntityNode is a base class, consider using a Protocol or a common base model type.

For example, in backend/apps/github/api/internal/nodes/repository.py, the pattern uses root: Repository. Here, you could define a minimal protocol:

🔧 Suggested approach
from typing import Protocol, Any

class EntityProtocol(Protocol):
    entity_leaders: Any
    idx_leaders: list[str]
    related_urls: list[str]
    idx_top_contributors: list[dict]
    idx_updated_at: float
    idx_url: str

Then use root: EntityProtocol in the resolver signatures.

backend/tests/apps/owasp/api/internal/nodes/common_test.py (1)

9-56: Missing test coverage for top_contributors resolver.

The production code in common.py includes a top_contributors resolver (lines 29-32), but this test file doesn't cover it. Consider adding a test:

💚 Suggested test
def test_top_contributors_resolver(self):
    """Test top_contributors returns contributor nodes from entity."""
    mock_entity = Mock()
    mock_entity.idx_top_contributors = [
        {"avatar_url": "url1", "contributions_count": 10, "id": "1", "login": "user1", "name": "User 1"},
        {"avatar_url": "url2", "contributions_count": 5, "id": "2", "login": "user2", "name": "User 2"},
    ]

    result = GenericEntityNode.top_contributors(None, mock_entity)

    assert len(result) == 2
    assert result[0].login == "user1"
    assert result[1].login == "user2"
backend/tests/apps/owasp/api/internal/nodes/committee_test.py (1)

19-20: Consider adding defensive null-check for field lookup.

_get_field_by_name returns None if the field doesn't exist. Accessing .base_resolver on None would cause an AttributeError, which could make debugging test failures harder. Consider adding an assertion:

🔧 Suggested improvement
field = self._get_field_by_name("contributors_count", CommitteeNode)
assert field is not None, "Field 'contributors_count' not found on CommitteeNode"
result = field.base_resolver.wrapped_func(None, mock_committee)

This applies to all similar field lookups in this file. Alternatively, update _get_field_by_name in the base class to raise a descriptive error when the field isn't found.
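
If you take the base-class route, a minimal sketch (assuming the helper iterates __strawberry_definition__.fields, as the node tests in this PR do):

def _get_field_by_name(self, name: str, node_cls) -> object:
    """Return the strawberry field with the given name, failing loudly if absent."""
    field = next(
        (f for f in node_cls.__strawberry_definition__.fields if f.name == name),
        None,
    )
    if field is None:
        msg = f"Field {name!r} not found on {node_cls.__name__}"
        raise AssertionError(msg)
    return field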

backend/tests/apps/owasp/api/internal/nodes/member_snapshot_test.py (1)

10-28: Remove unused mock setup in test_node_fields.

The mock_snapshot object (lines 11-14) is created but never used in the test. The test only validates field names in the node definition. Consider removing the dead code:

🔧 Suggested fix
     def test_node_fields(self):
-        mock_snapshot = Mock()
-        mock_snapshot.start_at = "2025-01-01"
-        mock_snapshot.end_at = "2025-10-01"
-        mock_snapshot.contribution_heatmap_data = {"2025-01-15": 5}
-
         node = MemberSnapshotNode.__strawberry_definition__
 
         field_names = {field.name for field in node.fields}
backend/tests/apps/owasp/api/internal/nodes/board_of_directors_test.py (1)

12-25: Consider using issubset for consistency with other node tests.

This test uses strict equality (==) on line 25, while chapter_test.py and project_test.py use issubset. Strict equality is more brittle—adding new fields to BoardOfDirectorsNode will break this test.

Suggested change for consistency
-        assert field_names == expected_field_names
+        assert expected_field_names.issubset(field_names)
backend/apps/github/api/internal/nodes/user.py (1)

30-33: Note: prefetch_related hint may not be utilized.

The prefetch_related=["user_badges"] hint tells strawberry_django to prefetch the relation, but calling .filter().count() on line 33 creates a new QuerySet that executes a separate database query regardless of prefetching.

This is functionally correct but the prefetch hint provides no benefit here. If badge_count is frequently accessed alongside badges, consider computing it in Python from the prefetched data, or simply remove the hint to avoid confusion.
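
For example, a hedged sketch of counting from the prefetch cache (if the original .filter() applies a condition beyond existence, it would need to move into Python too):

@strawberry_django.field(prefetch_related=["user_badges"])
def badge_count(self, root: User) -> int:
    """Count badges from the prefetched relation instead of re-querying."""
    # .all() is served from the prefetch cache; .filter(...).count() always hits the DB.
    return len(root.user_badges.all())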

backend/apps/github/api/internal/nodes/issue.py (2)

48-52: Prefetch hint is bypassed by the filtered related manager.
root.pull_requests.filter(...).exists() re-queries per issue even with prefetching in place, so the prefetch hint doesn't help. Consider using the prefetched collection to avoid the extra query.

♻️ Possible adjustment to use prefetched data
 @strawberry_django.field(prefetch_related=["pull_requests"])
 def is_merged(self, root: Issue) -> bool:
     """Return True if this issue has at least one merged pull request."""
-    return root.pull_requests.filter(state="closed", merged_at__isnull=False).exists()
+    return any(
+        pr.state == "closed" and pr.merged_at is not None
+        for pr in root.pull_requests.all()
+    )

53-60: Prefetch cache is likely skipped by select_related/order_by.
Calling select_related(...).order_by(...) on the related manager will bypass prefetched results, causing per-issue queries. Consider a Prefetch with an ordered queryset or sort in-memory using the prefetched participant_interests__user data.
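
A hedged sketch of the Prefetch variant (the model and field names below are assumptions, not the file's actual identifiers):

from django.db.models import Prefetch

@strawberry_django.field(
    prefetch_related=[
        Prefetch(
            "participant_interests",
            # Assumed model/field names; mirror whatever the resolver queries today.
            queryset=IssueInterest.objects.select_related("user").order_by("user__login"),
        )
    ]
)
def participants(self, root: Issue) -> list[UserNode]:
    """Resolve participants from the prefetch cache, already joined and ordered."""
    return [interest.user for interest in root.participant_interests.all()]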

backend/apps/github/api/internal/nodes/repository.py (2)

44-50: Prefetch hint may cause schema issues for relationship fields.

Based on learnings, prefetch_related in @strawberry_django.field decorators should not be used for fields that return objects through Django relationships (like issues returning IssueNode list). These hints can be misinterpreted by Strawberry and expose unwanted fields in the schema. Consider removing the hint and letting DjangoOptimizerExtension optimize automatically, or apply the prefetch inside the resolver method.

♻️ Suggested refactor
-    @strawberry_django.field(prefetch_related=["issues"])
+    @strawberry_django.field
     def issues(self, root: Repository) -> list[IssueNode]:
         """Resolve recent issues."""
         # TODO(arkid15r): rename this to recent_issues.
-        return root.issues.order_by("-created_at")[:RECENT_ISSUES_LIMIT]
+        return list(root.issues.order_by("-created_at")[:RECENT_ISSUES_LIMIT])

62-78: Same prefetch hint concern applies to these relationship fields.

The prefetch_related hints on project, recent_milestones, and releases fields may also cause schema issues per the same learnings about Strawberry Django's handling of relationship fields. Consider the same approach of removing decorator hints and letting the optimizer handle it, or applying prefetch inside resolvers.

backend/apps/github/api/internal/queries/repository.py (1)

59-66: Consider simplifying the limit validation for readability.

The walrus operator inside the ternary expression works but is harder to read. A straightforward if-else would be clearer.

♻️ Suggested refactor
-        return (
-            (
-                Repository.objects.filter(
-                    organization__login__iexact=organization,
-                ).order_by("-stars_count")[:limit]
-            )
-            if (limit := min(limit, MAX_LIMIT)) > 0
-            else []
-        )
+        limit = min(limit, MAX_LIMIT)
+        if limit <= 0:
+            return []
+
+        return Repository.objects.filter(
+            organization__login__iexact=organization,
+        ).order_by("-stars_count")[:limit]
.github/workflows/update-nest-test-images.yaml (1)

78-89: Consider adding type=gha to cache-from for consistency and faster rebuilds.

Other image steps use both GHA and registry caches; adding GHA here can improve incremental build speed.

♻️ Suggested tweak
-          cache-from: type=registry,ref=owasp/nest:test-fuzz-backend-cache
+          cache-from: |
+            type=gha
+            type=registry,ref=owasp/nest:test-fuzz-backend-cache
backend/apps/api/rest/v0/snapshot.py (1)

129-145: Optional: reflect ValidationErrorSchema in the handler return type.

Since lines 133-137 now document HTTP 400 with ValidationErrorSchema, consider updating the function signature for type-checking clarity.

♻️ Suggested tweak
-def get_snapshot(
+def get_snapshot(
     request: HttpRequest,
     snapshot_id: str = Path(example="2025-02"),
-) -> SnapshotDetail | SnapshotError:
+) -> SnapshotDetail | SnapshotError | ValidationErrorSchema:
backend/apps/api/rest/v0/organization.py (1)

80-96: Optional: reflect ValidationErrorSchema in the handler return type.

Now that lines 84-88 include HTTP 400 with ValidationErrorSchema, you may want to align the return type annotation for type checkers.

♻️ Suggested tweak
-def get_organization(
+def get_organization(
     request: HttpRequest,
     organization_id: str = Path(example="OWASP"),
-) -> OrganizationDetail | OrganizationError:
+) -> OrganizationDetail | OrganizationError | ValidationErrorSchema:
.github/workflows/run-fuzz-tests.yaml (2)

72-79: Health check endpoint may return 404.

The health check uses wget --spider http://localhost:9500/a but /a appears to be an arbitrary path that likely returns a 404 error. The --spider flag with wget will fail on non-2xx responses by default, potentially causing false negatives.

Consider using a dedicated health endpoint or the API root:

Suggested fix
       - name: Waiting for the backend to be ready
         run: |
           timeout 5m bash -c '
-            until wget --spider http://localhost:9500/a; do
+            until wget --spider http://localhost:9500/api/v0/; do
               echo "Waiting for backend..."
               sleep 5
             done
           '
           echo "Backend is up!"

59-70: Port mapping redundant with --network host.

When using --network host, the -p 9500:9500 port mapping is ignored since the container shares the host's network namespace directly.

Remove redundant port mapping
          docker run -d --rm --name fuzz-nest-backend \
            --env-file backend/.env.fuzz.example \
            --network host \
            -e DJANGO_DB_HOST=localhost \
            -e DJANGO_REDIS_AUTH_ENABLED=False \
            -e DJANGO_REDIS_HOST=localhost \
-           -p 9500:9500 \
            owasp/nest:test-backend-latest \
docker/backend/Dockerfile.fuzz (2)

26-29: Redundant chmod +x command.

Line 26 already sets --chmod=755 on entrypoint.sh, making the chmod +x on line 29 unnecessary.

Remove redundant chmod
 COPY --chown=root:root --chmod=755 ./entrypoint.fuzz.sh ./entrypoint.sh
 COPY --chown=root:root --chmod=755 tests/fuzz tests
 
-RUN chmod +x ./entrypoint.sh
-
 USER owasp

1-1: Unused "builder" stage alias.

The stage is named builder but there's no multi-stage build consuming it. Consider removing the alias for clarity, or rename it if this is intentional scaffolding for future multi-stage builds.

Suggested fix
-FROM python:3.13.7-alpine AS builder
+FROM python:3.13.7-alpine
backend/apps/api/rest/v0/__init__.py (2)

107-114: Consider adding type hints and using HTTPStatus constant.

The exception handler works correctly but could benefit from type hints for clarity and using HTTPStatus.BAD_REQUEST for consistency with endpoint definitions.

Suggested improvement
+from http import HTTPStatus
+from django.http import HttpRequest
+
 @api.exception_handler(ValidationError)
-def validation_exception_handler(request, exc):
+def validation_exception_handler(request: HttpRequest, exc: ValidationError):
     """Handle validation exceptions."""
     return api.create_response(
         request,
         {"message": "Invalid request", "errors": exc.errors},
-        status=400,
+        status=HTTPStatus.BAD_REQUEST,
     )

61-82: E2E and Fuzz environment configurations are nearly identical.

The two blocks differ only in the server description string. Consider consolidating to reduce duplication.

Suggested consolidation
-elif settings.IS_E2E_ENVIRONMENT:
-    api_settings_customization = {
-        "auth": None,
-        "servers": [
-            {
-                "description": "E2E",
-                "url": settings.SITE_URL,
-            }
-        ],
-        "throttle": [],
-    }
-elif settings.IS_FUZZ_ENVIRONMENT:
+elif settings.IS_E2E_ENVIRONMENT or settings.IS_FUZZ_ENVIRONMENT:
+    env_name = "E2E" if settings.IS_E2E_ENVIRONMENT else "Fuzz"
     api_settings_customization = {
         "auth": None,
         "servers": [
             {
-                "description": "Fuzz",
+                "description": env_name,
                 "url": settings.SITE_URL,
             }
         ],
         "throttle": [],
     }
backend/tests/apps/owasp/api/internal/nodes/project_health_stats_test.py (1)

36-53: Missing type test for average_score field.

The expected_field_names set (line 21) includes "average_score", but the test_field_types parametrization does not include a test case for it. Given that average_score is typed as float | None (optional), the current scalar assertion logic (field.type is expected_type) may not handle union/optional types correctly.

Consider either:

  1. Adding a test case for average_score with appropriate handling for optional types (see the sketch below)
  2. Adding a comment explaining why it's intentionally excluded
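
A hedged sketch of option 1 (StrawberryOptional's import path varies across strawberry versions, so treat it as an assumption):

from strawberry.types.base import StrawberryOptional  # location differs in older strawberry


def test_average_score_field_type(self):
    """average_score is float | None, so unwrap the optional before comparing."""
    node = ProjectHealthStatsNode.__strawberry_definition__
    field = next(f for f in node.fields if f.name == "average_score")
    assert isinstance(field.type, StrawberryOptional)
    assert field.type.of_type is float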
backend/tests/apps/owasp/api/internal/nodes/project_health_metrics_test.py (1)

46-76: Consider expanding test coverage for all node fields.

The test_meta_configuration expects fields like age_days_requirement, last_pull_request_days_requirement, and owasp_page_last_update_days_requirement (visible in the relevant code snippet from project_health_metrics.py), but the test_field_types parametrization doesn't include type assertions for these fields. Additionally, score appears to be missing from the type tests.

If these fields have straightforward int types, consider adding them:

♻️ Suggested additions to parametrization
 @pytest.mark.parametrize(
     ("field_name", "expected_type"),
     [
         ("age_days", int),
+        ("age_days_requirement", int),
         ("created_at", datetime),
         ...
+        ("last_pull_request_days_requirement", int),
         ...
+        ("owasp_page_last_update_days_requirement", int),
         ...
+        ("score", float),  # verify actual type
     ],
 )
.github/workflows/setup-backend-environment/action.yaml (1)

33-44: Consider adding a version comment for clarity.

The pinned commit SHA (263435318d21b8e681c14492fe198d362a7d2c83) corresponds to v6.18.0. Adding a comment like # v6.18.0 alongside the SHA would improve maintainability by making the version immediately apparent without requiring external lookup.
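
For example (the action name here is illustrative; keep whatever the file actually pins):

- uses: some-org/some-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0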

backend/settings/e2e.py (1)

11-24: Consider keeping Base logging for e2e diagnostics.
LOGGING = {} disables handlers entirely, which makes CI failures harder to debug. If this isn’t intentional, drop the override to inherit Base logging.

♻️ Proposed change
-    LOGGING = {}
docker-compose/fuzz/compose.yaml (1)

92-118: Guard fuzzers from racing data load in direct compose runs.
graphql/rest only depend on backend health, so docker compose up could start fuzzing before data-loader completes. Consider gating on data‑loader completion (or using profiles). Please verify your Compose version supports service_completed_successfully.
Based on learnings, the Makefile already sequences data‑loading; this guard helps direct compose usage.

♻️ Example gating on data‑loader completion
   graphql:
     container_name: fuzz-nest-graphql
     build:
       context: ../../backend
       dockerfile: ../docker/backend/Dockerfile.fuzz
     environment:
       BASE_URL: http://backend:9500
       TEST_FILE: graphql_test.py
     depends_on:
       backend:
         condition: service_healthy
+      data-loader:
+        condition: service_completed_successfully
     networks:
       - fuzz-nest-network
@@
   rest:
     container_name: fuzz-nest-rest
     build:
       context: ../../backend
       dockerfile: ../docker/backend/Dockerfile.fuzz
     environment:
       BASE_URL: http://backend:9500
       REST_URL: http://backend:9500/api/v0
       TEST_FILE: rest_test.py
     depends_on:
       backend:
         condition: service_healthy
+      data-loader:
+        condition: service_completed_successfully
     networks:
       - fuzz-nest-network
backend/apps/owasp/models/project.py (1)

160-163: Return type doesn’t match slicing semantics.
super().entity_leaders[:MAX_LEADERS_COUNT] returns a QuerySet, not a list. Consider casting to list(...) or adjust the annotation to a QuerySet to avoid type confusion.

♻️ Possible adjustment
     def entity_leaders(self) -> list[EntityMember]:
         """Return project leaders."""
-        return super().entity_leaders[:MAX_LEADERS_COUNT]
+        return list(super().entity_leaders[:MAX_LEADERS_COUNT])
backend/apps/owasp/api/internal/nodes/project.py (1)

51-66: Prefetch hint won’t apply with order_by() on the related manager.
root.health_metrics.order_by(...) bypasses prefetched data, so the prefetch_related=["health_metrics"] hint may not prevent N+1s. Consider using a Prefetch queryset with ordering or dropping the hint to avoid a false sense of optimization.
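
A hedged sketch of the Prefetch variant (the ordering field and queryset are assumptions; match what the resolver currently applies):

from django.db.models import Prefetch

@strawberry_django.field(
    prefetch_related=[
        Prefetch(
            "health_metrics",
            # Assumed ordering; mirror the order_by the resolver uses today.
            queryset=ProjectHealthMetrics.objects.order_by("-nest_created_at"),
        )
    ]
)
def health_metrics(self, root: Project) -> list[ProjectHealthMetricsNode]:
    """Serve metrics from the prefetch cache with ordering already applied."""
    return list(root.health_metrics.all())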

Comment on lines +50 to +52

# Snyk Security Extension - AI Rules (auto-generated)
.cursor/rules/snyk_rules.mdc

⚠️ Potential issue | 🟡 Minor

Duplicate .gitignore entry.

The .cursor/rules/snyk_rules.mdc entry at line 52 duplicates the existing entry at line 6. Remove the duplicate to keep the file clean.

Proposed fix
 
-# Snyk Security Extension - AI Rules (auto-generated)
-.cursor/rules/snyk_rules.mdc

Comment on lines +16 to 18
DJANGO_REDIS_AUTH_ENABLED=True
DJANGO_REDIS_HOST=None
DJANGO_REDIS_PASSWORD=None

⚠️ Potential issue | 🟡 Minor

Avoid enabling Redis auth without a password in the example.
As written, a copied .env enables auth but leaves DJANGO_REDIS_PASSWORD=None, which typically causes connection failures. Consider defaulting to False or providing a placeholder password.

💡 Suggested fix
-DJANGO_REDIS_AUTH_ENABLED=True
+DJANGO_REDIS_AUTH_ENABLED=False
 DJANGO_REDIS_HOST=None
 DJANGO_REDIS_PASSWORD=None

Comment on lines +1 to +24
DJANGO_ALGOLIA_APPLICATION_ID=None
DJANGO_ALGOLIA_EXCLUDED_LOCAL_INDEX_NAMES=None
DJANGO_ALGOLIA_WRITE_API_KEY=None
DJANGO_ALLOWED_HOSTS=*
DJANGO_AWS_ACCESS_KEY_ID=None
DJANGO_AWS_SECRET_ACCESS_KEY=None
DJANGO_SETTINGS_MODULE=settings.fuzz
DJANGO_CONFIGURATION=Fuzz
DJANGO_DB_HOST=db
DJANGO_DB_NAME=nest_db_fuzz
DJANGO_DB_USER=nest_user_fuzz
DJANGO_DB_PASSWORD=nest_user_fuzz_password
DJANGO_DB_PORT=5432
DJANGO_OPEN_AI_SECRET_KEY=None
DJANGO_PUBLIC_IP_ADDRESS="127.0.0.1"
DJANGO_REDIS_AUTH_ENABLED=True
DJANGO_REDIS_HOST=cache
DJANGO_REDIS_PASSWORD=nest-fuzz-cache-password
DJANGO_RELEASE_VERSION=None
DJANGO_SECRET_KEY=None
DJANGO_SENTRY_DSN=None
DJANGO_SLACK_BOT_TOKEN=None
DJANGO_SLACK_SIGNING_SECRET=None
GITHUB_TOKEN=None

⚠️ Potential issue | 🟡 Minor

Address dotenv-linter ordering/quote warnings in the example file.

Lines 1-24 trigger UnorderedKey and QuoteCharacter warnings. Consider reordering keys and removing the quotes around DJANGO_PUBLIC_IP_ADDRESS to keep lint clean.

🔧 Example reorder + quote removal
 DJANGO_ALGOLIA_APPLICATION_ID=None
 DJANGO_ALGOLIA_EXCLUDED_LOCAL_INDEX_NAMES=None
 DJANGO_ALGOLIA_WRITE_API_KEY=None
 DJANGO_ALLOWED_HOSTS=*
 DJANGO_AWS_ACCESS_KEY_ID=None
 DJANGO_AWS_SECRET_ACCESS_KEY=None
-DJANGO_SETTINGS_MODULE=settings.fuzz
 DJANGO_CONFIGURATION=Fuzz
 DJANGO_DB_HOST=db
 DJANGO_DB_NAME=nest_db_fuzz
-DJANGO_DB_USER=nest_user_fuzz
 DJANGO_DB_PASSWORD=nest_user_fuzz_password
 DJANGO_DB_PORT=5432
+DJANGO_DB_USER=nest_user_fuzz
 DJANGO_OPEN_AI_SECRET_KEY=None
-DJANGO_PUBLIC_IP_ADDRESS="127.0.0.1"
+DJANGO_PUBLIC_IP_ADDRESS=127.0.0.1
 DJANGO_REDIS_AUTH_ENABLED=True
 DJANGO_REDIS_HOST=cache
 DJANGO_REDIS_PASSWORD=nest-fuzz-cache-password
 DJANGO_RELEASE_VERSION=None
 DJANGO_SECRET_KEY=None
 DJANGO_SENTRY_DSN=None
+DJANGO_SETTINGS_MODULE=settings.fuzz
 DJANGO_SLACK_BOT_TOKEN=None
 DJANGO_SLACK_SIGNING_SECRET=None
 GITHUB_TOKEN=None

Comment on lines +87 to +93
        finally:
            try:
                self._execute_sql("postgres", [f"DROP DATABASE IF EXISTS {temp_db};"])
            except CalledProcessError:
                self.stderr.write(
                    self.style.WARNING(f"Failed to drop temp DB {temp_db} (ignored).")
                )

⚠️ Potential issue | 🟠 Major

Incorrect exception type in cleanup block.

_execute_sql uses psycopg2.connect, not subprocess.run, so it will never raise CalledProcessError. The cleanup block should catch psycopg2 exceptions instead.

🐛 Proposed fix
+from psycopg2 import OperationalError, ProgrammingError
+
         finally:
             try:
                 self._execute_sql("postgres", [f"DROP DATABASE IF EXISTS {temp_db};"])
-            except CalledProcessError:
+            except (OperationalError, ProgrammingError):
                 self.stderr.write(
                     self.style.WARNING(f"Failed to drop temp DB {temp_db} (ignored).")
                 )

Comment on lines +102 to +106
    def _remove_emails(self, tables: list[str]) -> list[str]:
        return [
            sql.SQL("UPDATE {table} SET email = '';").format(table=sql.Identifier(table))
            for table in tables
        ]

⚠️ Potential issue | 🟡 Minor

Type hint mismatch: returns sql.Composed, not str.

sql.SQL(...).format(...) returns a sql.Composed object, not a str. The type hint should reflect this.

🔧 Proposed fix
-    def _remove_emails(self, tables: list[str]) -> list[str]:
+    def _remove_emails(self, tables: list[str]) -> list[sql.Composed]:
         return [
             sql.SQL("UPDATE {table} SET email = '';").format(table=sql.Identifier(table))
             for table in tables
         ]

Comment on lines +149 to 152
    @cached_property
    def project_requirements(self) -> ProjectHealthRequirements | None:
        """Get the project health requirements for the project's level."""
        return ProjectHealthRequirements.objects.get(level=self.project.level)

⚠️ Potential issue | 🟠 Major

project_requirements raises when a row is missing instead of returning None.
The return type is optional and all downstream code guards against None (lines 95, 105, 119, 129, 144-145 all check if self.project_requirements), but .get() will throw DoesNotExist. Wrap with try-except to return None instead.

Proposed fix
     @cached_property
     def project_requirements(self) -> ProjectHealthRequirements | None:
         """Get the project health requirements for the project's level."""
-        return ProjectHealthRequirements.objects.get(level=self.project.level)
+        try:
+            return ProjectHealthRequirements.objects.get(level=self.project.level)
+        except ProjectHealthRequirements.DoesNotExist:
+            return None

Comment on lines +64 to 76
    @patch("apps.github.models.issue.Issue.objects")
    def test_recent_issues_limit(self, mock_objects, mock_issue):
        """Test limiting the number of issues returned."""
        mock_queryset = MagicMock()
        mock_queryset.order_by.return_value = mock_queryset
        mock_queryset.filter.return_value = mock_queryset
        mock_queryset.__getitem__.return_value = [mock_issue]
        mock_objects.order_by.return_value = mock_queryset

        result = IssueQuery().recent_issues(limit=1)

        mock_objects.order_by.assert_called_once_with("-created_at")
        assert result == [mock_issue]

⚠️ Potential issue | 🟡 Minor

Missing slice assertion for limit verification.

This test verifies the limit parameter but doesn't assert that the slice was applied correctly. Other tests in this file (lines 149, 179) properly verify __getitem__ calls.

Proposed fix
         result = IssueQuery().recent_issues(limit=1)
 
         mock_objects.order_by.assert_called_once_with("-created_at")
         assert result == [mock_issue]
+        mock_queryset.__getitem__.assert_called_with(slice(None, 1))

Comment on lines +10 to +25
if not (BASE_URL := os.getenv("BASE_URL")) or not (CSRF_TOKEN := os.getenv("CSRF_TOKEN")):
    message = "BASE_URL and CSRF_TOKEN must be set in the environment."
    raise ValueError(message)

HEADERS = {
    "Cookie": f"csrftoken={CSRF_TOKEN}",
    "X-CSRFToken": CSRF_TOKEN,
}

logger = logging.getLogger(__name__)
schema = schemathesis.graphql.from_url(
    f"{BASE_URL}/graphql/",
    headers=HEADERS,
    timeout=30,
    wait_for_schema=10,
)

⚠️ Potential issue | 🟠 Major

Replace hard-failing environment check with module-level pytest skip.

Raising ValueError at import time will break pytest collection in CI/dev environments where BASE_URL and CSRF_TOKEN are not set. No pytest configuration excludes tests/fuzz from collection. Use pytest.skip(allow_module_level=True) instead.

Proposed fix
 import schemathesis
 from schemathesis.graphql.checks import GraphQLClientError
+import pytest
 
-if not (BASE_URL := os.getenv("BASE_URL")) or not (CSRF_TOKEN := os.getenv("CSRF_TOKEN")):
-    message = "BASE_URL and CSRF_TOKEN must be set in the environment."
-    raise ValueError(message)
+BASE_URL = os.getenv("BASE_URL")
+CSRF_TOKEN = os.getenv("CSRF_TOKEN")
+if not BASE_URL or not CSRF_TOKEN:
+    pytest.skip(
+        "BASE_URL and CSRF_TOKEN must be set in the environment.",
+        allow_module_level=True,
+    )

Comment on lines +22 to +30
    healthcheck:
      interval: 10s
      retries: 10
      test: >
        sh -c '
          wget --spider http://backend:9000/a/
        '
      timeout: 10s
      start_period: 5s

⚠️ Potential issue | 🟡 Minor

Use a dedicated health endpoint instead of the admin URL.

The healthcheck currently uses wget --spider http://backend:9000/a/ which maps to Django's admin site. This is fragile since admin endpoints aren't designed for health checks and may change. Create a simple, dedicated health endpoint (e.g., /health/) that returns 200 OK to signal application readiness.
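
A minimal sketch of such an endpoint (view name and URL wiring are illustrative):

# urls.py: a lightweight readiness probe with no auth and no DB access.
from django.http import HttpResponse
from django.urls import path


def health_check(request):
    """Return 200 OK so container healthchecks can probe cheaply."""
    return HttpResponse("OK")


urlpatterns = [
    path("health/", health_check),
    # ... existing routes
]

The compose healthcheck would then target http://backend:9000/health/ instead of the admin URL.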


Comment on lines +37 to +39
COPY tests/apps tests/apps
COPY tests/__init__.py tests/__init__.py
COPY tests/conftest.py tests/conftest.py

⚠️ Potential issue | 🟠 Major

Copy all test files to the container.

The current COPY commands omit the tests/fuzz/ directory (graphql_test.py, rest_test.py, and their __init__.py). These tests won't be discovered or executed. Replace the granular copies with a single command to ensure all test assets are available:

Suggested fix
-COPY tests/apps tests/apps
-COPY tests/__init__.py tests/__init__.py
-COPY tests/conftest.py tests/conftest.py
+COPY tests tests
