Implement fuzztesting. #1139

Merged

arkid15r merged 131 commits into OWASP:feature/e2e-backend from ahmedxgouda:feature/fuzz-testing on Dec 29, 2025

Conversation


@ahmedxgouda ahmedxgouda commented Mar 19, 2025

Resolves #1075

  • Implemented a fuzz-testing environment for Slack handlers and for the REST and GraphQL endpoints.
  • Extended the CI/CD GitHub Actions workflow to run this environment.
  • Added tests for the Slack handlers (see the sketch below).
  • Created a specialized Docker container for GraphQL fuzz testing.
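
For reference, a minimal, self-contained sketch of the property-based style used for the Slack handler fuzz tests (Hypothesis feeding generated payloads into a mocked Slack client). The handler below is a stand-in for illustration only, not the project's actual gsoc/contribute handler:

"""Sketch only: the handler here is a stand-in, not OWASP Nest code."""
from unittest.mock import MagicMock

from hypothesis import given, strategies as st


def example_handler(event, client, ack):
    """Stand-in Slack event handler: acknowledge, then DM the joining user."""
    ack()
    channel_id = client.conversations_open(user=event["user"])["channel"]["id"]
    client.chat_postMessage(channel=channel_id, text="Welcome!")


@given(user_id=st.text(min_size=1), channel_id=st.text(min_size=1))
def test_handler_never_crashes(user_id, channel_id):
    """The handler should not raise, whatever user/channel values the event carries."""
    client = MagicMock()
    client.conversations_open.return_value = {"channel": {"id": "C123456"}}

    example_handler(event={"user": user_id, "channel": channel_id}, client=client, ack=MagicMock())

    client.chat_postMessage.assert_called_once()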


coderabbitai bot commented Mar 19, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added fuzz testing capability with dedicated workflows and infrastructure support.
    • Enhanced E2E testing setup with improved backend initialization and data loading.
  • Bug Fixes

    • Made several GraphQL fields nullable: latest_release, average_score, and contributor fields.
    • Removed deprecated owner_key field from repository schema.
  • Documentation

    • Updated contribution guide with separate sections for running E2E and fuzz tests.
    • Added fuzz testing command reference and debugging instructions.
  • Chores

    • Infrastructure updates for test environment orchestration.
    • Database and configuration improvements for testing workflows.


Walkthrough

This pull request implements comprehensive fuzz testing infrastructure for the OWASP Nest backend, including new Makefile targets, Docker Compose services, GitHub Actions workflow jobs, Django settings configuration, and GraphQL schema refinements. Changes span backend configuration, CI/CD pipelines, data orchestration, and model/node updates.

Changes

Cohort / File(s) Summary
Fuzz Testing Infrastructure
backend/Makefile, backend/docker/Dockerfile.fuzz, backend/docker/entrypoint.fuzz.sh, backend/settings/fuzz.py, backend/.env.fuzz.example, .gitignore
New Makefile targets (exec-backend-command-fuzz, exec-db-command-fuzz, load-data-fuzz, run-backend-fuzz, test-fuzz, purge-data-fuzz) and a corresponding Dockerfile for the fuzz environment. New Django settings module with a Fuzz class carrying an IS_FUZZ_ENVIRONMENT flag and site configuration. The fuzz entrypoint script fetches a CSRF token and generates config.toml for the graphqler fuzzing tool.
Docker Compose Orchestration
docker-compose/fuzz.yaml, docker-compose/e2e.yaml
New fuzz.yaml defining backend, db, data-loader, and graphql services with health checks and startup sequencing. Updated e2e.yaml to add a data-loader service and rename frontend to e2e-tests; the backend now includes a migration step in its startup command.
CI/CD Workflow Updates
.github/workflows/run-ci-cd.yaml, .github/workflows/update-nest-test-images.yaml, .github/workflows/setup-backend-environment/action.yaml
New run-fuzz-tests job provisioning Postgres service, backend setup, fuzz image build, and results upload. Added fuzz-test-backend image build step. New composite setup-backend-environment action for E2E/fuzz environment readiness polling and Docker build.
Frontend Testing Updates
frontend/Makefile, CONTRIBUTING.md
test-frontend-e2e now pre-removes e2e database container and targets db, backend, data-loader, e2e-tests services sequentially. Documentation reorganized with separate "Running Fuzz Tests" section and simplified E2E testing steps.
Backend Environment & Data
backend/settings/base.py, backend/apps/common/management/commands/dump_data.py, backend/docker/Dockerfile.local
Added IS_FUZZ_ENVIRONMENT flag to Base settings. Changed dump_data output from nest.sql.gz to nest.dump with updated pg_dump options (--data-only, --no-owner, --no-privileges, --format=custom). Updated Dockerfile.local PostgreSQL client from postgresql-client to postgresql16-client.
Permission & Access Control
backend/apps/owasp/api/internal/permissions/project_health_metrics.py, backend/apps/owasp/api/internal/views/permissions.py
Both files now short-circuit permission checks to True when the IS_E2E_ENVIRONMENT or IS_FUZZ_ENVIRONMENT flag is set, bypassing normal staff authentication checks in test environments (a rough sketch follows after this table).
GraphQL Schema: Nullable Types
backend/apps/owasp/api/internal/nodes/project_health_stats.py, backend/apps/github/api/internal/nodes/repository.py
ProjectHealthStatsNode.average_score changed from float to float | None. RepositoryNode.latest_release changed from str to str | None. Removed RepositoryNode.owner_key field entirely.
GraphQL Schema: Collections & Enums
backend/apps/github/api/internal/nodes/repository_contributor.py, backend/apps/github/api/internal/queries/milestone.py
RepositoryContributorNode.project_key and project_name now optional (str | None = None). Introduced MilestoneStateEnum for type-safe state filtering; MilestoneQuery.recent_milestones now accepts MilestoneStateEnum | None instead of string.
Model Refactoring: Contributors & Metrics
backend/apps/github/models/repository_contributor.py, backend/apps/owasp/models/project_health_metrics.py
get_top_contributors now includes id field from user__login. ProjectHealthMetrics.project_requirements changed to cached_property returning ProjectHealthRequirements | None with safe None-handling in accessors.
Model Refactoring: Entity Leaders & Active Counts
backend/apps/owasp/models/common.py, backend/apps/owasp/models/project.py, backend/apps/owasp/models/chapter.py, backend/apps/owasp/models/committee.py
entity_leaders converted to @cached_property returning list[EntityMember] instead of QuerySet. Active chapter/committee/project counts replaced with Django ORM filter queries (removing IndexBase dependency): chapters filter on is_active/lat/lng/repo; committees on has_active_repositories; projects on is_active/has_active_repositories.
GraphQL Node Refactoring
backend/apps/owasp/api/internal/nodes/common.py, backend/apps/owasp/api/internal/nodes/project.py, backend/apps/owasp/api/internal/nodes/snapshot.py
GenericEntityNode.related_urls changed to return self.related_urls (potential recursion). ProjectNode.health_metrics_list/health_metrics_latest updated to simpler filter chains (removed get_latest_health_metrics). SnapshotNode base class changed from GenericEntityNode to strawberry.relay.Node.
Query Logic Updates
backend/apps/github/api/internal/queries/pull_request.py
Project filtering now safely handles missing project by fetching instance first; applies repository filter only if project exists, else returns empty queryset.
Test Updates: GraphQL Nodes & Enums
backend/tests/apps/github/api/internal/nodes/repository_test.py, backend/tests/apps/github/api/internal/queries/milestone_test.py
Removed owner_key field from RepositoryNode tests. Updated latest_release type assertion. Changed top_contributors test to expect RepositoryContributorNode instances. Migrated milestone tests from string-based states to MilestoneStateEnum with None for default.
Test Updates: Data Models
backend/tests/apps/github/models/repository_contributor_test.py, backend/tests/apps/owasp/models/chapter_test.py, backend/tests/apps/owasp/models/committee_test.py, backend/tests/apps/owasp/models/project_health_stats_test.py
Updated to reflect new data shapes (contributor id field, fuzz field removal). Replaced IndexBase mocking with Django ORM filter mocking (chapters, committees) and assertions now validate filter calls instead of direct count calls.
Test Updates: Node & Common
backend/tests/apps/owasp/api/internal/nodes/common_test.py, backend/tests/apps/common/management/commands/dump_data_test.py
related_urls test updated to use non-indexed attribute name. dump_data test updated to reflect new output path (nest.dump) and pg_dump options.
Custom Dictionary
cspell/custom-dict.txt
Added graphqler and openblas to spell-check dictionary.
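
To make the environment-flag pattern above concrete, here is a rough sketch of the settings flag and the permission short-circuit; class, function, and attribute names are assumptions based on this summary, not the exact OWASP Nest code:

# Rough sketch; names and module layout are assumptions, not the exact Nest code.
from django.conf import settings


class Fuzz:  # e.g. a settings class in backend/settings/fuzz.py
    IS_FUZZ_ENVIRONMENT = True


def has_permission(request) -> bool:
    """Allow everything in E2E/fuzz environments; otherwise require staff."""
    if getattr(settings, "IS_E2E_ENVIRONMENT", False) or getattr(
        settings, "IS_FUZZ_ENVIRONMENT", False
    ):
        return True
    return bool(request.user.is_authenticated and request.user.is_staff)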

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • OWASP/Nest#2429: Modifies backend e2e/fuzz orchestration (Makefile exec-* targets, docker-compose services, CI setup) with shared build/CI infrastructure surface.
  • OWASP/Nest#1718: Touches backend project health metrics GraphQL implementation (nodes, ordering, queries) and frontend integration.
  • OWASP/Nest#2710: Modifies backend test-environment handling with E2E/fuzz environment flags and startup configuration.

Suggested reviewers

  • kasya

Pre-merge checks and finishing touches

❌ Failed checks (2 inconclusive)
Check name Status Explanation Resolution
Linked Issues check ❓ Inconclusive The PR implements most core requirements from #1075: fuzz testing environment setup, CI/CD integration, and Docker container for fuzzing. However, the raw summary provides limited evidence of GraphQL endpoint fuzz tests, Slack handler tests, and REST endpoint tests being fully implemented or integrated into CI/CD. Verify that GraphQL endpoint fuzz tests, Slack handler tests, and REST endpoint tests are implemented and properly integrated into the CI/CD pipeline to meet #1075 acceptance criteria.
Out of Scope Changes check ❓ Inconclusive The changeset includes focused fuzz-testing infrastructure (Dockerfiles, Makefile targets, settings, workflows) and several refactoring changes to GraphQL nodes, model queries, and property types that appear peripheral to the core fuzz-testing objective. Clarify whether refactoring changes (e.g., repository node fields, health metrics types, common model caching) are necessary dependencies for fuzz testing or separate improvements that should be addressed in different PRs.
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The PR title 'Implement fuzztesting' clearly and concisely describes the main objective of this comprehensive changeset—adding fuzz testing infrastructure to the project.
Description check ✅ Passed The PR description provides relevant context about implementing fuzz testing for Slack handlers, REST, and GraphQL endpoints, and mentions CI/CD integration and Docker container creation.
Docstring Coverage ✅ Passed Docstring coverage is 83.93% which is sufficient. The required threshold is 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9886459 and 00b1f84.

📒 Files selected for processing (4)
  • .github/workflows/run-ci-cd.yaml
  • backend/Makefile
  • backend/data/nest.dump
  • cspell/custom-dict.txt
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/Makefile
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.
📚 Learning: 2025-12-26T06:57:19.911Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: For the OWASP/Nest repository, Redis image versions should remain consistent across all environments (production, staging, local, E2E, and CI/CD E2E tests). When upgrading Redis, update all docker-compose files and CI/CD workflow configurations together to maintain environment parity.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run fuzz tests
  • GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (6)
cspell/custom-dict.txt (1)

71-71: Both dictionary entries are correctly placed and necessary:

  • "graphqler" (line 71): Confirmed used in backend/docker/Dockerfile.fuzz (installed via pip) and backend/docker/entrypoint.fuzz.sh (executed for GraphQL fuzz testing). Correctly positioned alphabetically between "graphiql" and "gunicorn".
  • "openblas" (line 107): Confirmed as a system dependency in backend/docker/Dockerfile.fuzz (installed via apk). Correctly positioned alphabetically.

Both entries are required for the fuzz-testing implementation and present no issues.

.github/workflows/run-ci-cd.yaml (5)

236-269: LGTM! E2E backend environment setup is well-structured.

The backend startup sequence is properly orchestrated:

  • Composite action handles environment preparation
  • Migrations run before the server binds
  • Readiness polling ensures the backend is available before data loading
  • Data restoration follows standard PostgreSQL practices

This pattern provides a solid foundation for the fuzz testing job.
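
As a rough illustration of the readiness-polling step, here is a sketch of polling a backend health endpoint until it responds; the URL, path, and timing values are placeholders rather than the workflow's actual configuration:

# Sketch of a readiness poll; URL, path, and timing values are placeholders.
import time

import requests


def wait_for_backend(url: str = "http://localhost:9000/", timeout: float = 120.0) -> None:
    """Poll the backend until it answers with HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return
        except requests.RequestException:
            pass  # Backend is not accepting connections yet.
        time.sleep(2)
    raise TimeoutError(f"Backend at {url} did not become ready within {timeout}s")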


302-322: Verify that staging deployment should proceed without fuzz test results.

The run-fuzz-tests job is not listed as a dependency for build-staging-images (Line 403-407), meaning staging deployment will proceed even if fuzz tests fail or find issues.

This may be intentional during initial rollout, but please confirm this aligns with your quality gates. If fuzz tests should gate deployment, add run-fuzz-tests to the needs array of the build-staging-images job.


323-363: LGTM! Fuzz backend startup follows E2E patterns.

The fuzz environment setup correctly mirrors the E2E job structure:

  • Isolated port (9500 vs E2E's 9000) prevents conflicts
  • Migrations run before server start
  • Readiness polling ensures stability before fuzzing begins
  • Fuzz-specific credentials maintain proper isolation

387-393: LGTM! Artifact upload is correctly configured.

The fuzzing results upload properly implements all requirements:

  • Runs unconditionally with if: always() to capture failure scenarios
  • Action pinned to v6 SHA for reproducibility
  • Path matches the volume-mounted directory from the test execution
  • 30-day retention provides adequate review window

365-377: No action needed — the Docker context path is correct.

The fuzz image build context backend/docker is appropriate because Dockerfile.fuzz is a specialized fuzzing tool container that only requires the entrypoint script (entrypoint.fuzz.sh), which is located in backend/docker/. Unlike other backend images (test, staging, production) that need source files like manage.py, apps/, and requirements.txt, the fuzz image communicates with the running backend application via HTTP using the BASE_URL environment variable to perform fuzzing operations with graphqler. It does not need backend source code files in its build context.
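
For illustration, a Python rendering of what the entrypoint's setup step does (the real entrypoint.fuzz.sh is a shell script; the endpoint path, cookie name, and config keys below are assumptions):

# Python sketch of the entrypoint's setup logic; the real entrypoint.fuzz.sh is a
# shell script, and the endpoint path, cookie name, and config keys are assumptions.
import os

import requests


def write_fuzzer_config(path: str = "config.toml") -> None:
    """Fetch a CSRF token from the running backend and write it into a TOML config."""
    base_url = os.environ["BASE_URL"]  # e.g. http://backend:9500
    session = requests.Session()
    session.get(f"{base_url}/")  # any page that sets the CSRF cookie (placeholder path)
    csrf_token = session.cookies.get("csrftoken", "")

    with open(path, "w") as config:
        config.write(
            "[custom_headers]\n"  # section name is an assumption, not graphqler's schema
            f'X-CSRFToken = "{csrf_token}"\n'
            f'Cookie = "csrftoken={csrf_token}"\n'
        )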



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
backend/tests/core/api/algolia_test.py (2)

143-149: Consider adding assumptions to narrow down test inputs.

The current strategies generate completely random text for index_name and query, which might lead to many test failures for invalid inputs. Since we're testing that the function handles inputs properly (not crashes), we could add assumptions to focus on more realistic inputs.

Consider adding assumptions or using more specific strategies:

    @hypothesis.given(
-       index_name=hypothesis.strategies.text(),
+       index_name=hypothesis.strategies.sampled_from(["projects", "chapters", "users", "committees", "issues"]),
        query=hypothesis.strategies.text(),
        page=hypothesis.strategies.integers(min_value=1),
        hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
        facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):

This would focus testing on valid index names from the existing test cases while still testing random values for other parameters.


143-166: Add mocking for get_search_results to isolate the test.

The current test calls the actual algolia_search function without mocking its dependencies, which makes it an integration test rather than a unit test. For a proper unit test, we should mock the get_search_results function.

Add mocking to isolate the test:

    @hypothesis.given(
        index_name=hypothesis.strategies.text(),
        query=hypothesis.strategies.text(),
        page=hypothesis.strategies.integers(min_value=1),
        hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
        facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
        """Test the algolia_search function with fuzz testing."""
+       with patch(
+           "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
+       ):
            mock_request = Mock()
            mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
            mock_request.method = "POST"
            mock_request.body = json.dumps(
                {
                    "facetFilters": facet_filters,
                    "hitsPerPage": hits_per_page,
                    "indexName": index_name,
                    "page": page,
                    "query": query,
                }
            )

            response = algolia_search(mock_request)

            assert response.status_code != requests.codes.server_error

This would test only the algolia_search function's input validation and error handling, not the actual search functionality.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bf6ccb7 and bc40efb.

⛔ Files ignored due to path filters (1)
  • backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • backend/pyproject.toml (1 hunks)
  • backend/tests/core/api/algolia_test.py (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1) (1)
  • algolia_search (44-90)
🪛 GitHub Actions: Run CI/CD
backend/pyproject.toml

[warning] 1-1: The top-level linter settings are deprecated in favour of their counterparts in the lint section. Please update the following options in backend/pyproject.toml: - 'per-file-ignores' -> 'lint.per-file-ignores'


[warning] 1-1: one-blank-line-before-class (D203) and no-blank-line-before-class (D211) are incompatible. Ignoring one-blank-line-before-class.


[warning] 1-1: multi-line-summary-first-line (D212) and multi-line-summary-second-line (D213) are incompatible. Ignoring multi-line-summary-second-line.

backend/tests/core/api/algolia_test.py

[error] 1-1: Ruff: Found 1 error (1 fixed, 0 remaining).

🔇 Additional comments (2)
backend/pyproject.toml (1)

53-53: Good addition of hypothesis for fuzz testing.

Adding the Hypothesis library as a development dependency is appropriate for implementing fuzz testing. The version constraint (^6.129.4) follows the project's versioning pattern.

backend/tests/core/api/algolia_test.py (1)

6-6: Appropriate import for the hypothesis library.

This import is necessary for the fuzz testing implementation below.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
backend/tests/core/api/algolia_test.py (3)

143-167: Consider additional strategies for more comprehensive testing

While the current implementation is solid, you might consider enhancing it with additional strategies to test more edge cases:

  1. Test with empty strings and very long strings
  2. Test with special characters that might cause issues in JSON processing
  3. Consider adding a separate test case for malformed JSON

Here's how you could enhance the test:

    @hypothesis.given(
        index_name=hypothesis.strategies.text(),
        query=hypothesis.strategies.text(),
        page=hypothesis.strategies.integers(min_value=1),
        hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
        facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
        """Test the algolia_search function with fuzz testing."""
        mock_request = Mock()
        mock_request.method = "POST"
        mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
        mock_request.body = json.dumps(
            {
                "facetFilters": facet_filters,
                "hitsPerPage": hits_per_page,
                "indexName": index_name,
                "page": page,
                "query": query,
            }
        )

        response = algolia_search(mock_request)

        assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+
+    @hypothesis.given(
+        malformed_json=hypothesis.strategies.text(),
+    )
+    def test_fuzz_algolia_search_malformed_json(self, malformed_json):
+        """Test the algolia_search function with malformed JSON."""
+        mock_request = Mock()
+        mock_request.method = "POST"
+        mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
+        mock_request.body = malformed_json
+
+        response = algolia_search(mock_request)
+
+        # Even with malformed JSON, we should never see a server error
+        assert response.status_code != requests.codes.server_error

143-167: Consider adding explicit patching of external dependencies

The current test might make actual API calls if the validation passes. Consider patching external dependencies to ensure the test remains isolated and fast.

Here's how you could modify the test:

    @hypothesis.given(
        index_name=hypothesis.strategies.text(),
        query=hypothesis.strategies.text(),
        page=hypothesis.strategies.integers(min_value=1),
        hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
        facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
        """Test the algolia_search function with fuzz testing."""
+       with patch("apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS):
            mock_request = Mock()
            mock_request.method = "POST"
            mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
            mock_request.body = json.dumps(
                {
                    "facetFilters": facet_filters,
                    "hitsPerPage": hits_per_page,
                    "indexName": index_name,
                    "page": page,
                    "query": query,
                }
            )

            response = algolia_search(mock_request)

            assert response.status_code in [requests.codes.ok, requests.codes.bad_request]

143-167: Consider adding documentation about fuzz testing

To help other developers understand the purpose and benefits of fuzz testing, consider adding more detailed documentation in the test docstring.

    @hypothesis.given(
        index_name=hypothesis.strategies.text(),
        query=hypothesis.strategies.text(),
        page=hypothesis.strategies.integers(min_value=1),
        hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
        facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
-       """Test the algolia_search function with fuzz testing."""
+       """Test the algolia_search function with fuzz testing.
+       
+       This test uses the Hypothesis library to generate random inputs for the algolia_search
+       function. It verifies that the function does not return a server error (500) regardless
+       of the input provided. For valid inputs, it should return 200 OK, and for invalid inputs,
+       it should return 400 Bad Request.
+       
+       The test generates:
+       - Random text for index_name and query
+       - Positive integers for page
+       - Integers between 1 and 1000 for hits_per_page
+       - Lists of text strings for facet_filters
+       """
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6c18da7 and 4a17780.

📒 Files selected for processing (1)
  • backend/tests/core/api/algolia_test.py (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1) (1)
  • algolia_search (44-90)
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: CodeQL (python)
  • GitHub Check: CodeQL (javascript-typescript)
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run frontend unit tests
🔇 Additional comments (5)
backend/tests/core/api/algolia_test.py (5)

4-4: Import for hypothesis added correctly

Good job importing the hypothesis library for fuzz testing. This is a powerful tool for property-based testing that can help identify edge cases and improve robustness.


143-149: Good use of hypothesis strategies

The strategies chosen for each parameter are appropriate:

  • Text strategies for index_name, query, and items in facet_filters
  • Integer strategy with sensible constraints for page (min_value=1)
  • Integer strategy with appropriate bounds for hits_per_page (1-1000)
  • List strategy for facet_filters

These cover a broad range of inputs that could potentially trigger edge cases.


150-154: Correctly implemented mock request with IP address

Great job including the HTTP_X_FORWARDED_FOR header with the client IP address. This addresses a previous review comment and ensures that the test properly handles the case when index_name is "chapters", which requires the IP address for the cache key.


155-163: Well-structured JSON for the request body

The request body is correctly structured with all the parameters needed for the Algolia search function. Using json.dumps() ensures proper JSON formatting.


165-167: Improved assertion for response status code

The assertion has been improved to check that the response status code is either requests.codes.ok (200) or requests.codes.bad_request (400), addressing the previous review comment. This ensures that the function handles both valid and invalid inputs correctly without returning a server error.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
backend/tests/core/api/algolia_test.py (1)

143-167: Good implementation of fuzz testing for algolia_search, but consider adding constraints to strategies

The fuzz testing implementation looks good and addresses the previous review comments with the addition of the IP address and proper assertions. However, the strategies for page and hits_per_page don't have any constraints, which might generate values that will always fail validation.

Consider updating the strategies to match the validation constraints in the production code:

    @given(
        index_name=strategies.text(),
        query=strategies.text(),
-       page=strategies.integers(),
+       page=strategies.integers(min_value=1),
-       hits_per_page=strategies.integers(),
+       hits_per_page=strategies.integers(min_value=1, max_value=1000),
        facet_filters=strategies.lists(strategies.text()),
    )

This will make the test more focused on finding edge cases within valid input ranges rather than testing inputs that would be rejected by validation.

backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)

1-41: Fix linting issues with whitespace and newline

The file has several whitespace issues flagged by the static analysis tool.

Remove trailing whitespace from blank lines and add a newline at the end of the file:

# Remove whitespace from blank lines on lines 24, 26, 28, 37, 39, 41
# Add newline at the end of the file
🧰 Tools
🪛 Ruff (0.8.2)

24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


26-26: Blank line contains whitespace

Remove whitespace from blank line

(W293)


28-28: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)

1-39: Fix whitespace issues in blank lines

There are whitespace issues in blank lines that should be fixed.

Remove trailing whitespace from blank lines 34 and 37 as flagged by the static analysis tool.

🧰 Tools
🪛 Ruff (0.8.2)

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a17780 and 75aa9fe.

📒 Files selected for processing (3)
  • backend/tests/core/api/algolia_test.py (2 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1 hunks)
🧰 Additional context used
🧬 Code Definitions (2)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1) (1)
  • algolia_search (44-90)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1) (1)
  • test_handler_responses (32-40)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py

24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


26-26: Blank line contains whitespace

Remove whitespace from blank line

(W293)


28-28: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🔇 Additional comments (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)

1-41:


✅ Verification successful

Directory name has a typo

The path backend/tests/slack/events/fuzz_tests/user_joind_channel contains a typo in joind which should be joined.

Consider renaming this directory to fix the typo and ensure path consistency with the rest of the codebase.


🏁 Script executed:

#!/bin/bash
# Check if the correct directory exists
fd "user_joined_channel" --type d

# Check if the typo exists elsewhere in the codebase
fd "user_joind_channel" --type d

Length of output: 172


Directory Name Typo Resolved

The directory in question is now correctly named. The previous typo (user_joind_channel) has been fixed to user_joined_channel in the fuzz_tests directory.

  • The file path now reads: backend/tests/slack/events/fuzz_tests/user_joined_channel/gsoc_test.py
🧰 Tools
🪛 Ruff (0.8.2)

24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


26-26: Blank line contains whitespace

Remove whitespace from blank line

(W293)


28-28: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

Comment on lines 29 to 40

🛠️ Refactor suggestion

Strengthen handler_responses test with mock verifications

The current test executes the handler but doesn't verify that it behaves correctly based on the events_enabled value.

Update the test to verify the expected behavior:

    @given(
        events_enabled=st.booleans(),
    )
    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        
-        assert True
+        # Verify that the Slack client was used appropriately based on events_enabled
+        if events_enabled:
+            mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
+            mock_slack_client.chat_postMessage.assert_called()
+        else:
+            # If events are disabled, no Slack client operations should be performed
+            mock_slack_client.conversations_open.assert_not_called()
+            mock_slack_client.chat_postMessage.assert_not_called()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    @given(
        events_enabled=st.booleans(),
    )
    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        assert True

    @given(
        events_enabled=st.booleans(),
    )
    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        # Verify that the Slack client was used appropriately based on events_enabled
        if events_enabled:
            mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
            mock_slack_client.chat_postMessage.assert_called()
        else:
            # If events are disabled, no Slack client operations should be performed
            mock_slack_client.conversations_open.assert_not_called()
            mock_slack_client.chat_postMessage.assert_not_called()
🧰 Tools
🪛 Ruff (0.8.2)

37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)

Comment on lines 13 to 38

🛠️ Refactor suggestion

Add constraints to fuzz test parameters and strengthen assertions

The current test doesn't place constraints on generated integers and uses a trivial assertion.

Update the test with improved constraints and assertions:

    @given(
        events_enabled=st.booleans(),
-       project_count=st.integers(),
-       issue_count=st.integers(),
+       project_count=st.integers(min_value=0),
+       issue_count=st.integers(min_value=0),
    )
    @patch("apps.owasp.models.project.Project.active_projects_count")
    @patch("apps.github.models.issue.Issue.open_issues_count")
    def test_handler_responses(
        self,
        mock_open_issues_count,
        mock_active_projects_count,
        events_enabled,
        project_count,
        issue_count,
    ):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_active_projects_count.return_value = project_count
        mock_open_issues_count.return_value = issue_count
        mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        
        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        
-       assert True
+       # Verify the behavior based on whether events are enabled
+       if events_enabled:
+           mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
+           mock_slack_client.chat_postMessage.assert_called()
+           # You could also verify that the message contains the project_count and issue_count
+       else:
+           mock_slack_client.conversations_open.assert_not_called()
+           mock_slack_client.chat_postMessage.assert_not_called()

Using non-negative integers for counts makes the test more realistic, as negative counts wouldn't make sense in this context.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    @given(
        events_enabled=st.booleans(),
        project_count=st.integers(),
        issue_count=st.integers(),
    )
    @patch("apps.owasp.models.project.Project.active_projects_count")
    @patch("apps.github.models.issue.Issue.open_issues_count")
    def test_handler_responses(
        self,
        mock_open_issues_count,
        mock_active_projects_count,
        events_enabled,
        project_count,
        issue_count,
    ):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_active_projects_count.return_value = project_count
        mock_open_issues_count.return_value = issue_count
        mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        assert True

    @given(
        events_enabled=st.booleans(),
        project_count=st.integers(min_value=0),
        issue_count=st.integers(min_value=0),
    )
    @patch("apps.owasp.models.project.Project.active_projects_count")
    @patch("apps.github.models.issue.Issue.open_issues_count")
    def test_handler_responses(
        self,
        mock_open_issues_count,
        mock_active_projects_count,
        events_enabled,
        project_count,
        issue_count,
    ):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_active_projects_count.return_value = project_count
        mock_open_issues_count.return_value = issue_count
        mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        # Verify the behavior based on whether events are enabled
        if events_enabled:
            mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
            mock_slack_client.chat_postMessage.assert_called()
            # You could also verify that the message contains the project_count and issue_count
        else:
            mock_slack_client.conversations_open.assert_not_called()
            mock_slack_client.chat_postMessage.assert_not_called()
🧰 Tools
🪛 Ruff (0.8.2)

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
backend/tests/core/api/algolia_test.py (1)

143-149: 🛠️ Refactor suggestion

Specify appropriate boundaries for integer strategies

The page and hits_per_page strategies should have appropriate bounds to align with real-world constraints and avoid generating invalid inputs that would make the test unreliable.

    @given(
        index_name=strategies.text(),
        query=strategies.text(),
-       page=strategies.integers(),
-       hits_per_page=strategies.integers(),
+       page=strategies.integers(min_value=1),
+       hits_per_page=strategies.integers(min_value=1, max_value=1000),
        facet_filters=strategies.lists(strategies.text()),
    )
🧹 Nitpick comments (1)
backend/tests/core/api/algolia_test.py (1)

150-167: The fuzz test implementation is good, but could benefit from a small enhancement

Great job implementing this fuzz test! The inclusion of the IP address and assertion for both success and bad request codes matches the expected behavior of the algolia_search function.

Consider adding an additional test that specifically tests malformed JSON handling:

@given(invalid_json=strategies.text())
def test_fuzz_algolia_search_malformed_json(self, invalid_json):
    """Test the algolia_search function with malformed JSON."""
    mock_request = Mock()
    mock_request.method = "POST"
    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
    mock_request.body = invalid_json

    response = algolia_search(mock_request)

    assert response.status_code == requests.codes.server_error
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a17780 and 75aa9fe.

📒 Files selected for processing (3)
  • backend/tests/core/api/algolia_test.py (2 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1 hunks)
🧰 Additional context used
🧬 Code Definitions (3)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1) (1)
  • test_handler_responses (20-38)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1) (1)
  • test_handler_responses (32-40)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1) (1)
  • algolia_search (44-90)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py

24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


26-26: Blank line contains whitespace

Remove whitespace from blank line

(W293)


28-28: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

🔇 Additional comments (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)

13-38:


❓ Verification inconclusive

Add meaningful assertions to validate the contribute handler behavior

The current test with assert True only verifies the code doesn't throw an exception, but doesn't validate that contribute_handler actually performs the expected actions based on the input parameters.

    def test_handler_responses(
        self,
        mock_open_issues_count,
        mock_active_projects_count,
        events_enabled,
        project_count,
        issue_count,
    ):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_active_projects_count.return_value = project_count
        mock_open_issues_count.return_value = issue_count
        mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        

        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        
-       assert True
+       # When events are enabled, the client should attempt to open a conversation
+       if events_enabled:
+           mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+           
+           # Verify the correct count values were accessed
+           mock_active_projects_count.assert_called_once()
+           mock_open_issues_count.assert_called_once()
+       else:
+           # When events are disabled, no client interaction should occur
+           mock_slack_client.conversations_open.assert_not_called()

Additionally, fix the formatting issues highlighted by the static analyzer:


🏁 Script executed:

#!/bin/bash
# Replace trailing whitespace
sed -i 's/[[:space:]]*$//' backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py

Length of output: 104


Test Enhancements & Formatting Fixes: Please Verify

  • The trailing whitespace issues in backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py have been cleaned up.
  • The test should now include meaningful assertions to verify that the contribute_handler behaves as expected:
    • When settings.SLACK_EVENTS_ENABLED is true, the Slack client’s conversations_open must be called with the correct user and both active_projects_count and open_issues_count should be accessed exactly once.
    • When settings.SLACK_EVENTS_ENABLED is false, no call to conversations_open should occur.
  • Please confirm that the following diff (replacing assert True) has been incorporated into the test:
    def test_handler_responses(
        self,
        mock_open_issues_count,
        mock_active_projects_count,
        events_enabled,
        project_count,
        issue_count,
    ):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_active_projects_count.return_value = project_count
        mock_open_issues_count.return_value = issue_count
        mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        

        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        
-       assert True
+       if events_enabled:
+           mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+           mock_active_projects_count.assert_called_once()
+           mock_open_issues_count.assert_called_once()
+       else:
+           mock_slack_client.conversations_open.assert_not_called()

Action: Please verify manually that the meaningful assertions have been applied alongside the formatting fixes to confidently validate the handler’s behavior.

🧰 Tools
🪛 Ruff (0.8.2)

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

Comment on lines 29 to 41

💡 Verification agent

❓ Verification inconclusive

Verify handler behavior with concrete assertions

Similar to the previous test method, this test only checks that the code runs without exceptions, but doesn't validate that the handler behaves as expected when events are enabled or disabled.

    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        
-       assert True
+       # When events are enabled, the client should attempt to open a conversation
+       if events_enabled:
+           mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+       else:
+           mock_slack_client.conversations_open.assert_not_called()

Additionally, fix the formatting issues highlighted by the static analyzer:


🏁 Script executed:

#!/bin/bash
# Replace trailing whitespace and ensure a newline at end of file
sed -i 's/[[:space:]]*$//' backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
echo '' >> backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py

Length of output: 108


Action Required: Add concrete assertions for Slack handler behavior

The current test in
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
merely ensures that no exceptions occur instead of validating the actual behavior. Please update the test as follows:

  • When events are enabled: Assert that mock_slack_client.conversations_open is called once with the proper user attribute from mock_slack_event.
  • When events are disabled: Assert that mock_slack_client.conversations_open is never called.

For example, consider replacing:

    gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
    
-    assert True
+    if events_enabled:
+        mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+    else:
+        mock_slack_client.conversations_open.assert_not_called()

Additionally, the formatting issues reported (trailing whitespace removal and ensuring a trailing newline) have been addressed by the static analyzer script. Please verify manually that these formatting fixes have been applied correctly to the file.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    @given(
        events_enabled=st.booleans(),
    )
    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        assert True

    @given(
        events_enabled=st.booleans(),
    )
    def test_handler_responses(self, events_enabled):
        settings.SLACK_EVENTS_ENABLED = events_enabled
        mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
        gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
        # When events are enabled, the client should attempt to open a conversation
        if events_enabled:
            mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
        else:
            mock_slack_client.conversations_open.assert_not_called()
🧰 Tools
🪛 Ruff (0.8.2)

37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (7)
backend/tests/core/api/algolia_test.py (1)

167-167: Add more specific assertions for different input scenarios.

The current assertion only checks that the response status code is either OK or BAD_REQUEST, which verifies the handler doesn't crash. However, it would be more valuable to verify specific behavior based on input patterns.

-        assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+        # For valid inputs (positive page and reasonable hits_per_page), expect successful response
+        if page > 0 and 1 <= hits_per_page <= 1000:
+            # Some inputs might still be invalid based on other validations
+            assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+        else:
+            # For invalid inputs, expect bad request
+            assert response.status_code == requests.codes.bad_request
+        
+        # Ensure we never get server errors
+        assert response.status_code != requests.codes.server_error
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (3)

1-39: Directory name contains a typo.

The directory name user_joind_channel contains a typo and should be user_joined_channel to maintain consistency with the actual event name.

The file is in a directory with a typo: user_joind_channel should be user_joined_channel.

34-37: Fix whitespace in blank lines.

There are blank lines containing whitespace, which should be removed for consistency.

        mock_slack_client = MagicMock()
        mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
-        
+

        contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-        
+

13-17: Add constraints to integer strategies.

The current implementation uses unconstrained integers for project and issue counts, which could lead to unrealistic test scenarios and potential overflow issues.

    @given(
        events_enabled=st.booleans(),
-        project_count=st.integers(),
-        issue_count=st.integers(),
+        project_count=st.integers(min_value=0, max_value=10000),
+        issue_count=st.integers(min_value=0, max_value=10000),
    )

This change provides more realistic bounds for these values while still testing a wide range of scenarios.
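
To make the idea concrete, here is a minimal, self-contained sketch of bounded strategies in the same style. fake_handler is a hypothetical stand-in for contribute_handler (so the snippet runs without the repository code), and the specific bounds and placeholder IDs are illustrative assumptions rather than values taken from the implementation.

# A sketch only: fake_handler mimics the call pattern of the Slack handler tests above.
from unittest.mock import MagicMock

from hypothesis import given, strategies as st


def fake_handler(event, client, ack):
    """Stand-in handler: open a DM with the joining user and acknowledge the event."""
    client.conversations_open(user=event["user"])
    ack()


@given(
    project_count=st.integers(min_value=0, max_value=10_000),
    issue_count=st.integers(min_value=0, max_value=10_000),
)
def test_fake_handler_with_bounded_counts(project_count, issue_count):
    # Hypothesis only draws values inside the declared bounds, so overflow-style
    # extremes are excluded by construction while a wide range is still covered.
    event = {
        "user": "U123456",
        "channel": "C999999",  # placeholder channel id, not a constant from the repo
        "project_count": project_count,
        "issue_count": issue_count,
    }
    client = MagicMock()
    client.conversations_open.return_value = {"channel": {"id": "C123456"}}

    fake_handler(event=event, client=client, ack=MagicMock())

    client.conversations_open.assert_called_once_with(user="U123456")

In the real test, the same min_value/max_value bounds would simply be applied to the project_count and issue_count strategies shown in the diff above.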

backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (3)

1-41: Directory name contains a typo.

The directory name user_joind_channel contains a typo and should be user_joined_channel to maintain consistency with the actual event name.

The file is in a directory with a typo: user_joind_channel should be user_joined_channel.

41-41: Add newline at end of file.

The file is missing a newline at the end, which is a common convention for source files.

Add a newline at the end of the file.

24-40: Fix whitespace in blank lines.

There are multiple blank lines containing whitespace throughout the file, which should be removed for consistency.

Remove whitespace from all blank lines in the file (lines 24, 26, 28, 37, 39, and 41).

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a17780 and 625cc0e.

⛔ Files ignored due to path filters (1)
  • backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • backend/pyproject.toml (1 hunks)
  • backend/tests/core/api/algolia_test.py (2 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1 hunks)
  • backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/pyproject.toml
🧰 Additional context used
🧬 Code Definitions (2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
  • test_handler_responses (20-38)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
  • test_handler_responses (32-40)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py

24-24: Blank line contains whitespace

Remove whitespace from blank line

(W293)


26-26: Blank line contains whitespace

Remove whitespace from blank line

(W293)


28-28: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)


39-39: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: Blank line contains whitespace

Remove whitespace from blank line

(W293)


41-41: No newline at end of file

Add trailing newline

(W292)

backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py

34-34: Blank line contains whitespace

Remove whitespace from blank line

(W293)


37-37: Blank line contains whitespace

Remove whitespace from blank line

(W293)

Comment on lines 145 to 148

🛠️ Refactor suggestion

Constrain strategy value ranges for page and hits_per_page.

Currently, the page and hits_per_page strategies allow any integer value, including negatives and zero, which are likely invalid inputs in the production code. The existing unit tests in this file (lines 107-110) show that validation is needed for these parameters.

-        page=strategies.integers(),
-        hits_per_page=strategies.integers(),
+        page=strategies.integers(min_value=1),
+        hits_per_page=strategies.integers(min_value=1, max_value=1000),

This change aligns with the validation being performed in the implementation and matches the constraints in the parametrized tests above.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        query=strategies.text(),
-        page=strategies.integers(),
-        hits_per_page=strategies.integers(),
+        page=strategies.integers(min_value=1),
+        hits_per_page=strategies.integers(min_value=1, max_value=1000),
        facet_filters=strategies.lists(strategies.text()),

Comment on lines 14 to 27

🛠️ Refactor suggestion

Simplify the dynamic import approach in test_check_gsoc_handler.

The current implementation uses a complex dynamic import approach with a fallback lambda. This makes the test harder to understand and may hide issues with the actual check_gsoc_handler function.

-    def test_check_gsoc_handler(self, channel_id):
-        gsoc_module = __import__(
-            "apps.slack.events.member_joined_channel.gsoc",
-            fromlist=["gsoc_handler"],
-        )
-        check_gsoc_handler = getattr(
-            gsoc_module,
-            "check_gsoc_handler",
-            lambda x: x.get("channel") == OWASP_GSOC_CHANNEL_ID,
-        )
-        
-        check_gsoc_handler({"channel": channel_id})
-        
-        assert True
+    def test_check_gsoc_handler(self, channel_id):
+        """Test that check_gsoc_handler correctly identifies the GSOC channel."""
+        from apps.slack.events.member_joined_channel.gsoc import check_gsoc_handler
+        
+        result = check_gsoc_handler({"channel": channel_id})
+        
+        # Verify the function returns True only for the GSOC channel
+        expected = (channel_id == OWASP_GSOC_CHANNEL_ID)
+        assert result == expected

This approach directly imports the function, tests it with the generated channel_id, and verifies it returns the expected result based on whether the channel matches the GSOC channel ID.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    def test_check_gsoc_handler(self, channel_id):
        """Test that check_gsoc_handler correctly identifies the GSOC channel."""
        from apps.slack.events.member_joined_channel.gsoc import check_gsoc_handler

        result = check_gsoc_handler({"channel": channel_id})

        # Verify the function returns True only for the GSOC channel
        expected = (channel_id == OWASP_GSOC_CHANNEL_ID)
        assert result == expected

@ahmedxgouda (Collaborator Author)

@arkid15r Could you review the current changes before proceeding with the GraphQL endpoints?

@yashgoyal0110 (Contributor)

Is someone still working on this?

@ahmedxgouda (Collaborator Author)

Is someone still working on this?

Yes, I was just discussing it with the mentor. I will add the GraphQL tests tomorrow after resolving the big new conflicts :)

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
backend/tests/apps/core/api/algolia_test.py (5)

143-149: Refine hypothesis strategies to use more realistic bounds.

The current strategies for generating test data are too broad and might produce extreme values that aren't representative of real API usage. This could lead to unnecessary test failures or miss important edge cases.

@given(
-    index_name=strategies.text(),
-    query=strategies.text(),
-    page=strategies.integers(),
-    hits_per_page=strategies.integers(),
+    index_name=strategies.text(min_size=1, max_size=100),
+    query=strategies.text(max_size=200),
+    page=strategies.integers(min_value=0, max_value=1000),
+    hits_per_page=strategies.integers(min_value=1, max_value=1000),
    facet_filters=strategies.lists(strategies.text()),
)

165-167: Assert additional properties for better test coverage.

The current assertion only checks the status code, which is a good start but doesn't fully validate the behavior of the function. For a more comprehensive test, we should also verify response content for both successful and error cases.

response = algolia_search(mock_request)

assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+# Additional assertions based on status code
+response_data = json.loads(response.content)
+if response.status_code == requests.codes.ok:
+    assert "hits" in response_data
+    assert "nbPages" in response_data
+elif response.status_code == requests.codes.bad_request:
+    assert "error" in response_data

143-167: Consider adding assume statements to help Hypothesis generate valid inputs.

Hypothesis works more efficiently when it can avoid generating inputs that will always fail in uninteresting ways. Using hypothesis.assume() can help focus the test on more meaningful scenarios.

@given(
    index_name=strategies.text(),
    query=strategies.text(),
    page=strategies.integers(),
    hits_per_page=strategies.integers(),
    facet_filters=strategies.lists(strategies.text()),
)
def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
    """Test the algolia_search function with fuzz testing."""
+    # Skip tests with inputs that will always fail in the same way
+    from hypothesis import assume
+    assume(isinstance(index_name, str) and len(index_name) > 0)
+    assume(isinstance(query, str))
+    assume(isinstance(page, int) and page >= 0)
+    assume(isinstance(hits_per_page, int) and hits_per_page > 0)
+    assume(isinstance(facet_filters, list))
+    
    with patch(
        "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
    ):
        mock_request = Mock()
        mock_request.method = "POST"
        mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
        mock_request.body = json.dumps(
            {
                "facetFilters": facet_filters,
                "hitsPerPage": hits_per_page,
                "indexName": index_name,
                "page": page,
                "query": query,
            }
        )

        response = algolia_search(mock_request)

        assert response.status_code in [requests.codes.ok, requests.codes.bad_request]

143-167: Consider testing JSON serialization errors.

The current test doesn't account for potential JSON serialization errors when complex objects are passed as parameters. Adding a test case for this scenario would improve test coverage.

Since this would require a separate test case that isn't directly part of the fuzz testing framework, here's a suggestion for an additional test method:

def test_algolia_search_invalid_json(self):
    """Test the scenario where the request body contains invalid JSON."""
    mock_request = Mock()
    mock_request.method = "POST"
    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
    # Set body to invalid JSON
    mock_request.body = "{"

    response = algolia_search(mock_request)
    response_data = json.loads(response.content)

    assert response.status_code == requests.codes.bad_request
    assert "error" in response_data

143-167: Consider grouping test scenarios based on expected response.

The current implementation treats all inputs as potentially valid or invalid. A more structured approach would be to separate test cases based on expected outcome, which would allow for more specific assertions.

This could be implemented by creating separate fuzz tests for valid and invalid scenarios:

  1. A test for valid inputs that assumes or enforces valid parameter values and asserts an OK response.
  2. A test for invalid inputs that introduces specific validation failures and asserts a BAD_REQUEST response.

This approach would provide more focused testing and clearer failure messages, making it easier to maintain and debug the tests.
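
As a hedged illustration of that split, the sketch below separates a valid-input fuzz test from an invalid-input one. It assumes the helpers already used in algolia_test.py (algolia_search, MOCKED_SEARCH_RESULTS, CLIENT_IP_ADDRESS are redefined here as stand-ins so the snippet is self-contained); the "projects" index name and the expectation that a non-positive page yields BAD_REQUEST are assumptions, not behavior confirmed against the implementation.

import json
from unittest.mock import Mock, patch

import requests
from hypothesis import given, strategies

from apps.core.api.algolia import algolia_search  # import path assumed from the patch target above

# Stand-ins for the constants defined at the top of algolia_test.py.
CLIENT_IP_ADDRESS = "127.0.0.1"
MOCKED_SEARCH_RESULTS = {"hits": [], "nbPages": 1}


def _post_request(payload):
    """Build the same kind of mocked POST request used by the existing tests."""
    mock_request = Mock()
    mock_request.method = "POST"
    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
    mock_request.body = json.dumps(payload)
    return mock_request


@given(
    query=strategies.text(max_size=200),
    page=strategies.integers(min_value=1, max_value=1000),
    hits_per_page=strategies.integers(min_value=1, max_value=1000),
)
def test_fuzz_algolia_search_valid_inputs(query, page, hits_per_page):
    """Inputs inside the assumed valid ranges should never produce a server error."""
    with patch("apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS):
        response = algolia_search(
            _post_request(
                {"indexName": "projects", "query": query, "page": page, "hitsPerPage": hits_per_page}
            )
        )
    # Mirrors the existing assertion style: some inputs may still fail other validations.
    assert response.status_code in [requests.codes.ok, requests.codes.bad_request]


@given(page=strategies.integers(max_value=0))
def test_fuzz_algolia_search_invalid_page(page):
    """Non-positive pages are assumed to be rejected by request validation."""
    with patch("apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS):
        response = algolia_search(
            _post_request({"indexName": "projects", "query": "", "page": page, "hitsPerPage": 25})
        )
    assert response.status_code == requests.codes.bad_request

Splitting the strategies this way keeps each test's assertion unconditional, which tends to produce clearer Hypothesis failure reports than a single test with branching assertions.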

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b078e92 and c0ed1f9.

⛔ Files ignored due to path filters (1)
  • backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • backend/tests/apps/core/api/algolia_test.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: CodeQL (javascript-typescript)
  • GitHub Check: Run frontend e2e tests

Comment on lines 143 to 167

⚠️ Potential issue

Missing mock for get_search_results function.

Unlike the other test methods in this class, this fuzz test doesn't mock the get_search_results function. Without mocking, the test could make actual external API calls, which would make the test less reliable and potentially slow.

@given(
    index_name=strategies.text(),
    query=strategies.text(),
    page=strategies.integers(),
    hits_per_page=strategies.integers(),
    facet_filters=strategies.lists(strategies.text()),
)
def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
    """Test the algolia_search function with fuzz testing."""
+    with patch(
+        "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
+    ):
        mock_request = Mock()
        mock_request.method = "POST"
        mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
        mock_request.body = json.dumps(
            {
                "facetFilters": facet_filters,
                "hitsPerPage": hits_per_page,
                "indexName": index_name,
                "page": page,
                "query": query,
            }
        )

        response = algolia_search(mock_request)

        assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    @given(
        index_name=strategies.text(),
        query=strategies.text(),
        page=strategies.integers(),
        hits_per_page=strategies.integers(),
        facet_filters=strategies.lists(strategies.text()),
    )
    def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
        """Test the algolia_search function with fuzz testing."""
        with patch(
            "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
        ):
            mock_request = Mock()
            mock_request.method = "POST"
            mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
            mock_request.body = json.dumps(
                {
                    "facetFilters": facet_filters,
                    "hitsPerPage": hits_per_page,
                    "indexName": index_name,
                    "page": page,
                    "query": query,
                }
            )

            response = algolia_search(mock_request)

            assert response.status_code in [requests.codes.ok, requests.codes.bad_request]

@github-actions github-actions bot added backend docker Pull requests that update Docker code and removed nestbot-tests labels Apr 1, 2025
@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
backend/docker/entrypoint.fuzz.sh (1)

10-11: Add CSRF token validation before creating config file.

The script fetches the CSRF token but doesn't validate it before writing to the config file. If the /csrf endpoint returns unexpected JSON or an empty token, the config will contain an invalid/empty token, causing graphqler to fail with obscure errors.

🔎 Proposed fix
 echo "Fetching CSRF token..."
-CSRF_TOKEN=$(curl -fsSL "$BASE_URL/csrf" | jq -r '.csrftoken')
+CSRF_TOKEN=$(curl -fsSL "$BASE_URL/csrf" | jq -r '.csrftoken' 2>/dev/null || echo "")
+
+if [ -z "$CSRF_TOKEN" ] || [ "$CSRF_TOKEN" = "null" ]; then
+  echo "Error: Failed to fetch valid CSRF token from backend." >&2
+  exit 1
+fi
🧹 Nitpick comments (5)
.gitignore (1)

49-51: Duplicate ignore entry.

Line 51 duplicates the .cursor/rules/snyk_rules.mdc entry already defined on line 6. Consider removing the duplicate.

🔎 Proposed fix
 
 # Snyk Security Extension - AI Rules (auto-generated)
-.cursor/rules/snyk_rules.mdc
docker-compose/fuzz.yaml (1)

44-49: Consider adding --clean flag or error handling for pg_restore.

The pg_restore command may fail if the backend has already applied migrations, as the schema/tables would already exist. Consider adding the --clean flag to drop objects before recreating them, or wrap with error handling to allow partial restores.

🔎 Proposed fix
     command: >
       sh -c '
         echo "Loading data from dump..." &&
-        pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB /data/nest.dump &&
+        pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB --clean --if-exists /data/nest.dump || true &&
         echo "Data loading completed."
       '

Note: Using || true allows the job to succeed even if some objects can't be dropped. Remove it if strict restore is required.

docker-compose/e2e.yaml (1)

28-49: Same pg_restore concern as in fuzz.yaml.

See the recommendation for docker-compose/fuzz.yaml regarding --clean flag or error handling for the pg_restore command.

.github/workflows/run-ci-cd.yaml (2)

327-332: Inconsistent action version pins.

The fuzz job uses different action SHAs compared to other jobs:

  • actions/checkout on line 329 uses @1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 while others use @8e8c483db84b4bee98b60c0593521ed34d9990e8
  • docker/setup-buildx-action on line 332 uses @e468171a9de216ec08956ac3ada2f0791b6bd435 while others use @8d2750c68a42422c14e847fe6c8ac0403b4cbd6f

Consider using consistent version pins across all jobs to ensure reproducible behavior.

🔎 Proposed fix
     steps:
       - name: Check out repository
-        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8

       - name: Set up Docker buildx
-        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
+        uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f

363-367: Add error handling for pg_restore in CI.

Same as in the docker-compose files, pg_restore may fail if migrations have already created the schema. Consider adding --clean --if-exists flags or wrapping with || true if partial restore is acceptable.

🔎 Proposed fix
       - name: Load Postgres data
         env:
           PGPASSWORD: nest_user_fuzz_password
         run: |
-          pg_restore -h localhost -U nest_user_fuzz -d nest_db_fuzz < backend/data/nest.dump
+          pg_restore -h localhost -U nest_user_fuzz -d nest_db_fuzz --clean --if-exists backend/data/nest.dump || true
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2fcf6b9 and cefb150.

📒 Files selected for processing (7)
  • .github/workflows/run-ci-cd.yaml
  • .gitignore
  • backend/Makefile
  • backend/docker/entrypoint.fuzz.sh
  • docker-compose/e2e.yaml
  • docker-compose/fuzz.yaml
  • frontend/Makefile
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/Makefile
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-08-31T13:48:09.830Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:0-0
Timestamp: 2025-08-31T13:48:09.830Z
Learning: In the OWASP/Nest project, Django's GraphQL endpoint accepts CSRF tokens via 'x-csrftoken' header (lowercase) without requiring a Referer header, working fine in their configuration for GraphQL codegen introspection.

Applied to files:

  • backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-08-30T12:52:32.214Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:52-52
Timestamp: 2025-08-30T12:52:32.214Z
Learning: When using GraphQL Codegen with async configuration that needs to fetch data (like CSRF tokens), export the async function directly instead of using top-level await: `export default async function()` rather than `export default await function()`.

Applied to files:

  • backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • frontend/Makefile
  • .github/workflows/run-ci-cd.yaml
  • docker-compose/e2e.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run backend tests
  • GitHub Check: Run fuzz tests
  • GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (7)
.gitignore (1)

7-12: LGTM!

The .gitignore additions correctly exclude fuzzing artifacts while preserving the example environment file for fuzz testing.

backend/docker/entrypoint.fuzz.sh (1)

13-25: LGTM!

The config file creation and graphqler invocation are properly structured with correct quoting of variables. The MAX_TIME = 300 setting provides a reasonable 5-minute timeout for fuzz runs.

docker-compose/fuzz.yaml (1)

68-81: LGTM!

The graphql service correctly mounts fuzzing_results to persist fuzz test outputs, addressing the artifact persistence concern. The BASE_URL environment variable and backend dependency are properly configured.

docker-compose/e2e.yaml (1)

69-83: Consider adding dependency on data-loader completion.

The e2e-tests service depends only on backend being healthy, but the tests may require data loaded by data-loader. Since data-loader runs and exits, consider whether tests could start before data loading completes.

Note: If the Makefile orchestrates this by running data-loader first with --abort-on-container-exit, then running e2e-tests separately, this may be intentional. Verify the test execution flow handles this correctly.

frontend/Makefile (1)

59-66: LGTM!

The two-phase approach correctly orchestrates the E2E test flow:

  1. First phase loads data via data-loader with --abort-on-container-exit ensuring data loading completes before proceeding.
  2. Second phase runs the actual E2E tests.

The pre-cleanup of the database container and volume ensures a clean state for each test run.

.github/workflows/run-ci-cd.yaml (2)

306-312: Fuzz tests don't block staging/production deployments.

The run-fuzz-tests job runs independently and isn't included in the needs for build-staging-images (line 405-408) or build-production-images (line 732-736). Consider whether fuzz test failures should block deployments or if this is intentional (e.g., fuzz tests are advisory only).


390-395: No action required. actions/upload-artifact@v6 is the current latest stable version of this GitHub Action.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
.github/workflows/run-ci-cd.yaml (1)

328-332: Inconsistent action version pins.

The actions/checkout and docker/setup-buildx-action versions in this job differ from those used elsewhere in the workflow:

Action                     | This job   | Other jobs
actions/checkout           | 1af3b93... | 8e8c483...
docker/setup-buildx-action | e468171... | 8d2750c...

Using consistent versions across the workflow simplifies maintenance and ensures predictable behavior.

🔎 Proposed fix
     steps:
       - name: Check out repository
-        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8

       - name: Set up Docker buildx
-        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
+        uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cefb150 and 7fb30f3.

📒 Files selected for processing (1)
  • .github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run fuzz tests
  • GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (3)
.github/workflows/run-ci-cd.yaml (3)

240-273: LGTM - E2E backend setup refactored cleanly.

The refactored E2E setup using the composite action and the new flow (setup → start → wait → load data) is well-structured. Migrations run before Gunicorn starts, and the 5-minute timeout for backend readiness is reasonable.


383-389: Container user permissions look correct.

The fuzz test container runs with --user $(id -u):$(id -g) and mounts the results directory, ensuring artifacts are created with the correct ownership for upload. This is a good practice.


306-312: Verify if fuzz tests should gate deployments.

The run-fuzz-tests job runs independently and doesn't block build-staging-images or build-production-images. This means fuzz test failures won't prevent deployments.

If fuzz tests uncover security issues, you may want them to gate releases. Consider whether build-staging-images should include run-fuzz-tests in its needs list.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/workflows/run-ci-cd.yaml (1)

406-410: Add fuzz tests as a deployment gate.

The run-fuzz-tests job is not included in the needs dependencies of build-staging-images or build-production-images. This means deployments will proceed even if fuzz tests discover crashes, unhandled exceptions, or security vulnerabilities—defeating the purpose of CI/CD integration.

Per the acceptance criteria in issue #1075, fuzz tests should block deployments when they capture unexpected behavior.

🔎 Proposed fix

Add run-fuzz-tests to the needs array of both build jobs:

   build-staging-images:
     name: Build Staging Images
     env:
       RELEASE_VERSION: ${{ needs.set-release-version.outputs.release_version }}
     environment: staging
     if: |
       github.repository == 'OWASP/Nest' &&
       github.ref == 'refs/heads/main'
     needs:
       - run-backend-tests
       - run-frontend-e2e-tests
       - run-frontend-unit-tests
+      - run-fuzz-tests
       - set-release-version
   build-production-images:
     name: Build Production Images
     env:
       RELEASE_VERSION: ${{ needs.set-release-version.outputs.release_version }}
     environment: production
     if: |
       github.event_name == 'release' &&
       github.event.action == 'published'
     needs:
       - run-backend-tests
       - run-frontend-e2e-tests
       - run-frontend-unit-tests
+      - run-fuzz-tests
       - set-release-version

Also applies to: 734-737

♻️ Duplicate comments (1)
.github/workflows/run-ci-cd.yaml (1)

391-396: Pin artifact action SHA and ensure upload on failure.

The artifact upload step should run even when fuzz tests fail (that's when results are most valuable) and should use a pinned SHA consistent with the workflow's security posture.

🔎 Proposed fix
       - name: Upload fuzzing results
+        if: always()
-        uses: actions/upload-artifact@v6
+        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
         with:
           name: fuzzing-results
           path: fuzzing_results/
           retention-days: 30
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7fb30f3 and 8444288.

📒 Files selected for processing (1)
  • .github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run fuzz tests
  • GitHub Check: Run backend tests

@ahmedxgouda (Collaborator Author)

@coderabbitai review

@coderabbitai coderabbitai bot (Contributor) commented Dec 25, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
.github/workflows/run-ci-cd.yaml (1)

391-396: Add if: always() to ensure artifacts are uploaded on failure.

Fuzz test results are most valuable when tests fail or find issues. Without if: always(), the upload step is skipped if the fuzz tests step fails, losing potentially critical findings.

🔎 Proposed fix
       - name: Upload fuzzing results
+        if: always()
         uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
         with:
           name: fuzzing-results
           path: fuzzing_results/
           retention-days: 30
🧹 Nitpick comments (2)
.github/workflows/run-ci-cd.yaml (2)

246-257: Redundant port mapping with --network host.

When using --network host, the container shares the host's network namespace directly, making the -p 9000:9000 flag unnecessary. The port is already accessible on the host.

🔎 Proposed fix
       - name: Start Backend in the background
         run: |
           docker run -d --rm --name e2e-nest-backend \
             --env-file backend/.env.e2e.example \
             --network host \
             -e DJANGO_DB_HOST=localhost \
-            -p 9000:9000 \
             owasp/nest:test-backend-latest \
             sh -c '
               python manage.py migrate &&
               gunicorn wsgi:application --bind 0.0.0.0:9000
           '

340-351: Redundant port mapping with --network host.

Same as the E2E job: the -p 9500:9500 flag is unnecessary when using --network host.

🔎 Proposed fix
       - name: Run backend with fuzz environment variables
         run: |
           docker run -d --rm --name fuzz-nest-backend \
             --env-file backend/.env.fuzz.example \
             --network host \
             -e DJANGO_DB_HOST=localhost \
-            -p 9500:9500 \
             owasp/nest:test-backend-latest \
             sh -c '
               python manage.py migrate &&
               gunicorn wsgi:application --bind 0.0.0.0:9500
             '
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f79b403 and 6c2d0f0.

📒 Files selected for processing (1)
  • .github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
🔇 Additional comments (2)
.github/workflows/run-ci-cd.yaml (2)

306-326: LGTM on job structure.

The fuzz testing job is well-structured with appropriate timeout (15 minutes), proper Postgres service configuration with health checks, and correct dependencies on scan jobs.


369-389: LGTM on fuzz image build and test execution.

The build step uses appropriate cache settings, and the volume mount for fuzzing results ensures results are persisted to the host for artifact upload.

coderabbitai bot previously requested changes Dec 26, 2025

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
docker-compose/e2e.yaml (1)

71-85: Missing dependency on data-loader may cause tests to run before data is loaded.

The e2e-tests service depends only on backend being healthy, but data-loader is responsible for populating the database with test data. Without a dependency on data-loader, there's a race condition where tests could start executing before the dump is fully restored.

Consider adding data-loader with service_completed_successfully condition:

🔎 Proposed fix
   e2e-tests:
     container_name: e2e-nest-tests
     build:
       context: ../frontend
       dockerfile: docker/Dockerfile.e2e.test
     command: >
       sh -c '
         pnpm run test:e2e
       '
     depends_on:
       backend:
         condition: service_healthy
+      data-loader:
+        condition: service_completed_successfully
     env_file: ../frontend/.env.e2e.example
     networks:
       - e2e-nest-network
♻️ Duplicate comments (2)
backend/Makefile (2)

166-172: Two-phase approach correctly sequences data loading before fuzzing.

The two-phase docker compose up approach ensures data is loaded before GraphQL fuzzing begins. The first command loads data, and the second runs the fuzzer. This works around the dependency issue at the docker-compose level.

Note: COMPOSE_BAKE=true was flagged in a previous review as an unused variable that has no effect on Docker Compose.


136-138: Pattern is consistent with run-backend-e2e.

The run-backend-fuzz target follows the same pattern as run-backend-e2e. Note that COMPOSE_BAKE=true was previously flagged as unused.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6c2d0f0 and 6f15af2.

📒 Files selected for processing (3)
  • backend/Makefile
  • docker-compose/e2e.yaml
  • docker-compose/fuzz.yaml
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: Ensure Redis image versions stay in sync across all environments by updating every relevant YAML file together (docker-compose files and CI/CD workflow configurations). When upgrading Redis, bump the image version in all docker-compose files and in all CI workflow YAMLs in one coordinated change to maintain parity across production, staging, local, E2E, and CI tests.

Applied to files:

  • docker-compose/e2e.yaml
  • docker-compose/fuzz.yaml
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • docker-compose/e2e.yaml
  • backend/Makefile
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: For the OWASP/Nest repository, Redis image versions should remain consistent across all environments (production, staging, local, E2E, and CI/CD E2E tests). When upgrading Redis, update all docker-compose files and CI/CD workflow configurations together to maintain environment parity.

Applied to files:

  • backend/Makefile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Run backend tests
  • GitHub Check: Run frontend unit tests
  • GitHub Check: Run frontend e2e tests
  • GitHub Check: Run fuzz tests
🔇 Additional comments (5)
docker-compose/e2e.yaml (2)

30-51: Data loader service looks good; consider adding error verbosity.

The data-loader service correctly sequences after both db and backend are healthy before restoring the dump. Consider adding --verbose to pg_restore for better debugging in CI, and possibly || exit 1 after echo to ensure any silent failures are caught (though sh -c with && should propagate errors).


4-8: LGTM!

Running migrations before starting Gunicorn ensures the database schema is consistent with the application code at startup.

docker-compose/fuzz.yaml (2)

80-81: Good: Volume mount for fuzzing results is now configured.

The volume mount ../backend/fuzzing_results:/home/owasp/fuzzing_results addresses the previously flagged concern about persisting GraphQL fuzzing artifacts. Results will now be available on the host for analysis and CI artifact collection.


1-69: LGTM!

Good isolation from the e2e environment with distinct ports (9500 for backend, 5434 for DB), separate network (fuzz-nest-network), and dedicated volume (fuzz-db-data). The service startup sequencing with health checks is correctly configured.

backend/Makefile (1)

32-60: LGTM!

The new exec-*-command targets follow the established pattern with -i variants for non-interactive use and -it variants for interactive terminal sessions. Based on learnings, the -it flags are appropriate for local development and debugging.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/run-ci-cd.yaml (1)

385-389: Consider using more restrictive permissions for the results directory.

The chmod -R 777 grants write access to all users. While this is functionally safe on ephemeral GitHub Actions runners, 755 or 775 would be sufficient since the container should run as a specific user (likely owasp based on the volume mount path).

🔎 Suggested refinement
       - name: Run backend fuzz tests
         run: |
           mkdir -p ${{ github.workspace }}/fuzzing_results &&
-          chmod -R 777 ${{ github.workspace }}/fuzzing_results &&
+          chmod -R 755 ${{ github.workspace }}/fuzzing_results &&
           docker run -e BASE_URL=http://localhost:9500 --network host \
           -v ${{ github.workspace }}/fuzzing_results:/home/owasp/fuzzing_results \
           owasp/nest:test-fuzz-backend-latest

This tightens permissions while still allowing the container to write results. If the container user needs group write access, use 775 instead.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6f15af2 and 9886459.

📒 Files selected for processing (2)
  • .github/workflows/run-ci-cd.yaml
  • backend/docker/entrypoint.fuzz.sh
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.
📚 Learning: 2025-12-26T06:57:19.911Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.

Applied to files:

  • backend/docker/entrypoint.fuzz.sh
  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-08-31T13:48:09.830Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:0-0
Timestamp: 2025-08-31T13:48:09.830Z
Learning: In the OWASP/Nest project, Django's GraphQL endpoint accepts CSRF tokens via 'x-csrftoken' header (lowercase) without requiring a Referer header, working fine in their configuration for GraphQL codegen introspection.

Applied to files:

  • backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-08-30T12:52:32.214Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:52-52
Timestamp: 2025-08-30T12:52:32.214Z
Learning: When using GraphQL Codegen with async configuration that needs to fetch data (like CSRF tokens), export the async function directly instead of using top-level await: `export default async function()` rather than `export default await function()`.

Applied to files:

  • backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: For the OWASP/Nest repository, Redis image versions should remain consistent across all environments (production, staging, local, E2E, and CI/CD E2E tests). When upgrading Redis, update all docker-compose files and CI/CD workflow configurations together to maintain environment parity.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.

Applied to files:

  • .github/workflows/run-ci-cd.yaml
🧬 Code graph analysis (1)
backend/docker/entrypoint.fuzz.sh (1)
backend/apps/ai/agent/agent.py (1)
  • run (27-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Run CI Dependencies Scan
  • GitHub Check: Run Code Scan
🔇 Additional comments (3)
backend/docker/entrypoint.fuzz.sh (1)

1-30: LGTM! Robust fuzz entrypoint implementation.

The script properly validates the environment, fetches and validates the CSRF token with error handling, and configures the fuzzer with appropriate headers. All previous concerns have been addressed:

  • BASE_URL validation prevents misconfiguration
  • CSRF fetch uses curl -f to fail on HTTP errors
  • Token validation catches empty or null responses
  • Variables are properly quoted in the final command

The implementation is clean and follows shell scripting best practices.

.github/workflows/run-ci-cd.yaml (2)

240-273: LGTM! Well-structured E2E backend setup.

The new backend environment setup, startup sequence, and data loading flow is clean and robust:

  • Migrations run before the backend starts, preventing schema mismatch issues
  • Readiness polling with a reasonable 5-minute timeout ensures the backend is fully operational
  • Data loading via pg_restore is appropriate for the PostgreSQL-based E2E environment

This pattern is well-suited for reuse in the fuzz job.


306-397: LGTM! Comprehensive fuzz testing integration.

The fuzz job is well-implemented and mirrors the E2E job structure appropriately:

  • Includes a 15-minute timeout to prevent indefinite execution
  • Runs migrations before starting the backend
  • Polls for backend readiness with proper timeout
  • Loads test data via pg_restore before fuzzing
  • Uses correct image tags and cache references (matching the update-nest-test-images workflow)
  • Uploads artifacts with if: always() to capture results even on failure
  • Volume mount enables result persistence from the ephemeral container

All previous review concerns have been addressed. The integration follows CI/CD best practices and provides good observability via artifact uploads.

@arkid15r arkid15r force-pushed the feature/e2e-backend branch from de86aab to 0159b0d on December 29, 2025 19:51
@sonarqubecloud

@arkid15r arkid15r enabled auto-merge (squash) December 29, 2025 20:17
@arkid15r arkid15r merged commit 7264151 into OWASP:feature/e2e-backend Dec 29, 2025
25 checks passed
arkid15r added a commit that referenced this pull request Jan 16, 2026
* Add hypothesis to poetry and fuzz test the index endpoint.

* Refactor import statements in algolia_test.py for consistency

* Add the client ip address to the request META.

* Add fuzz testing for Slack event handlers and refactor algolia fuzz testing.

* Refactor fuzz tests for Slack event handlers to improve readability and structure

* Fix the poetry lock file.

* Remove fuzz testing from algolia_search unit tests

* Create a docker file for fuzz-testing, add the run commands to the MakeFile and install model_bakery for creating randomized data for all models.

* Refactor to improve quality

* Update fuzz testing setup: modify Makefile and Dockerfile, add entrypoint script for environment configuration

* Update poetry.lock to reflect dependency changes and version updates

* Create a fuzz configuration, update docker file, makefile, and the tests.

* Refactor fuzz configuration by reorganizing imports and cleaning up whitespace

* Update Dockerfile and entrypoint script to use Alpine base image and shell

* Run the server on port 8000 after the tests.

* Create a docker compose file for fuzz testing.

* Add 'graphqler' to custom dictionary

* Load data from nest.json and add graphqler to cspell dict.

* Remove model-bakery dependency from pyproject.toml and update poetry.lock

* Update graphqler command in docker compose and the healthcheck

* Update graphql command to use backend service URL in docker-compose

* Refactor docker-compose to build graphqler service from Dockerfile and add entrypoint script for fuzzing tests

* Enhance fuzz testing setup: update Dockerfile and entrypoint scripts, improve CSRF handling, and refine healthcheck command in docker-compose

* Update fuzz-test-backend command to abort on container exit

* Add fuzz testing workflow and update image build steps

* Add .env file creation step for fuzz tests in CI/CD workflow

* Add Docker Hub login step for fuzz tests in CI/CD workflow

* Refactor for the checks

* Refactor fuzz testing workflow: replace Docker Hub login with buildx setup and update docker-compose handling

* Fix fuzz tests workflow: rename docker-compose file

* Refactor fuzz-tests job.

* Add environment variables for fuzz tests configuration

* Update fuzz tests environment variables

* Fix poetry lock file

* Sort the custom-dict.

* Update content hash in poetry.lock

* Add docker cache mounts to the backend image

* Add Redis configuration

* refactor yaml

* Add docker cache mounts to graphql file

* Remove unnecessary chmod command for cache directories in Dockerfile

* Fix poetry lock file

* Add cache mounts to backend tests

* Update cache mounts in graphql image

* Update mkdir in graphql image

* Remove duplicates

* Update tests

* Rename docker compose

* Update poetry lock

* Apply sonar

* Migrate to OWASP repo

* Update docker

* Use graphqler maintainer docker image

* Add disable permissions, update docker compose, and update entrypoint

* Establish an e2e backend instance locally and in CI/CD (#2429)

* Add dockerfile and compose and update makefile

* Dump db-data

* Reorder makefile

* Dump db-data

* Remove e2e dockerfile

* Add CI/CD

* Apply rabbit's suggestions

* Add postgres instead of pgvector

* Remove needs

* Update envs

* Fix migrations step

* Add envs to docker

* Remove migrations step

* Remove --without test from dockerfile

* Copy tests in dockerfile and add needs to gh workflow

* Update dockerfile

* Apply suggestion

* Use the e2e instance in the frontend e2e tests

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD and dockerfile

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Restore needs for the job

* Update Makefiles

* Update docs

* Apply suggestions

* Update load-data

* Skip sonar error

* update port numbers

* Update docs

* Update code

* Add required shell property back

* Bump config.webServer timeout

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>

* Fix running e2e backend (#2710)

* Add DB environment variables to e2e.yaml and add csrf_decorate function

* Skip sonar suggestion

* Fix rest api internal error

* Add timeout

* Update docs

* Update code

* Revert csrf update

* Add command to dump local data

* Update dump and load data

* Update rest api config and docs

* Apply check-spelling

* Use .env.e2e.example for frontend e2e tests in gh actions

* Apply rabbit's suggestions

* Migrate dump_data to django command and dump owasp, github, and slack tables only

* Apply rabbit's suggestions

* Update code

* Refactor dump_data

* Use local cache for e2e

* Remove old load_data command

* Add tests

* Skip sonar

* Apply rabbit suggestions

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>

* Update code for e2e

* Add runs-on

* Skip sonar and fix ci/cd

* Apply rabbit suggestion and override entrypoint in ci/cd

* Use env with csrf

* Add timeout

* Remove hypothesis and old test files

* Apply rabbit's suggestions

* Update ci/cd and makefile

* Use digest pinning with graphqler image

* Update dockerfile and fix the typeerror issue

* Apply sonar suggestion

* Apply sonar and rabbit suggestions

* Remove cache from ci/cd

* Use curl instead of wget

* Separate e2e from fuzz

* Update fuzz ci/cd

* Update CI/CD

* Run precommit

* Update code

* Update code

* Update docs, ci/cd, and apply suggestions

* Use digest pinning and parameters in workflow

* Apply sonar suggestions

* Apply rabbit suggestions

* Run migrations for fuzz testing in ci/cd

* Apply rabbit suggestions

* Fix exceptions

* Establish an e2e backend instance locally and in CI/CD (#2429)

* Add dockerfile and compose and update makefile

* Dump db-data

* Reorder makefile

* Dump db-data

* Remove e2e dockerfile

* Add CI/CD

* Apply rabbit's suggestions

* Add postgres instead of pgvector

* Remove needs

* Update envs

* Fix migrations step

* Add envs to docker

* Remove migrations step

* Remove --without test from dockerfile

* Copy tests in dockerfile and add needs to gh workflow

* Update dockerfile

* Apply suggestion

* Use the e2e instance in the frontend e2e tests

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD and dockerfile

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Restore needs for the job

* Update Makefiles

* Update docs

* Apply suggestions

* Update load-data

* Skip sonar error

* update port numbers

* Update docs

* Update code

* Add required shell property back

* Bump config.webServer timeout

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>

* Fix running e2e backend (#2710)

* Add DB environment variables to e2e.yaml and add csrf_decorate function

* Skip sonar suggestion

* Fix rest api internal error

* Add timeout

* Update docs

* Update code

* Revert csrf update

* Add command to dump local data

* Update dump and load data

* Update rest api config and docs

* Apply check-spelling

* Use .env.e2e.example for frontend e2e tests in gh actions

* Apply rabbit's suggestions

* Migrate dump_data to django command and dump owasp, github, and slack tables only

* Apply rabbit's suggestions

* Update code

* Refactor dump_data

* Use local cache for e2e

* Remove old load_data command

* Add tests

* Skip sonar

* Apply rabbit suggestions

* Update code

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>

* Add backend/data/nest.sql.gz

* Update code

* Automate data loading

* Update dump_data to avoid exceptions

* Update dump_data and automated data_loading

* Update CI/CD

* Update tests

* Add timeout for fuzz tests

* Update timeout for fuzz

* Update CI/CD

* Update CI/CD

* Update CI/CD

* Apply rabbit's suggestions

* Update backend/Makefile

* Update make targets and docker compose

* Add volume for graphql fuzzing results and add upload artifacts in ci/cd

* Update ci/cd

* Update ci/cd

* Update ci/cd

* Update ci/cd

* Update docker compose and makefile

* Apply rabbit's suggestions

* Update dump to match the last nest.json.gz

---------

Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
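The commit log above starts with hypothesis-based fuzz tests for the index endpoint and the Slack event handlers (adding the client IP to the request META) before the approach shifts to graphqler-driven fuzzing of the GraphQL API and the hypothesis tests are removed. As a rough illustration only, an early property-based test of a search endpoint might have looked like the sketch below; the URL path, payload shape, and assertion are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch of a hypothesis-based fuzz test for a Django search endpoint.
# The path "/idx/", the payload keys, and the index name are assumptions made
# for illustration; they are not taken from the OWASP Nest codebase.
import json

from django.test import Client
from hypothesis import given, settings, strategies as st


@settings(max_examples=50, deadline=None)
@given(
    query=st.text(max_size=100),
    page=st.integers(min_value=-(2**31), max_value=2**31 - 1),
)
def test_index_endpoint_survives_random_input(query, page):
    """Whatever query/page it receives, the endpoint should never return a 5xx."""
    client = Client()
    response = client.post(
        "/idx/",  # assumed path of the search proxy endpoint
        data=json.dumps({"indexName": "projects", "query": query, "page": page}),
        content_type="application/json",
        REMOTE_ADDR="127.0.0.1",  # the commits mention adding the client IP to request META
    )
    assert response.status_code < 500
```

In the merged setup the GraphQL surface is fuzzed by the dedicated graphqler container instead, so a property-based test like this would only cover the REST and Slack handler paths.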
arkid15r added a commit that referenced this pull request Jan 16, 2026
arkid15r added a commit that referenced this pull request Jan 19, 2026

arkid15r added a commit that referenced this pull request Jan 19, 2026
@ahmedxgouda ahmedxgouda deleted the feature/fuzz-testing branch January 21, 2026 09:06

Labels

backend, backend-tests, ci, docker (Pull requests that update Docker code), docs (Improvements or additions to documentation), frontend, makefile

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Implement fuzz testing

3 participants
