Implement fuzz testing #1139
Conversation
Summary by CodeRabbit — Release Notes

Walkthrough

This pull request implements comprehensive fuzz testing infrastructure for the OWASP Nest backend, including new Makefile targets, Docker Compose services, GitHub Actions workflow jobs, Django settings configuration, and GraphQL schema refinements. Changes span backend configuration, CI/CD pipelines, data orchestration, and model/node updates.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 1
🧹 Nitpick comments (2)
backend/tests/core/api/algolia_test.py (2)
143-149: Consider adding assumptions to narrow down test inputs.

The current strategies generate completely random text for `index_name` and `query`, which might lead to many test failures for invalid inputs. Since we're testing that the function handles inputs properly (not crashes), we could add assumptions to focus on more realistic inputs.

Consider adding assumptions or using more specific strategies:

```diff
 @hypothesis.given(
-    index_name=hypothesis.strategies.text(),
+    index_name=hypothesis.strategies.sampled_from(["projects", "chapters", "users", "committees", "issues"]),
     query=hypothesis.strategies.text(),
     page=hypothesis.strategies.integers(min_value=1),
     hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
     facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
 )
+@hypothesis.assume(hypothesis.strategies.text(min_size=1))
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
```

This would focus testing on valid index names from the existing test cases while still testing random values for other parameters.
143-166: Add mocking for get_search_results to isolate the test.

The current test calls the actual `algolia_search` function without mocking its dependencies, which makes it an integration test rather than a unit test. For a proper unit test, we should mock the `get_search_results` function.

Add mocking to isolate the test:

```diff
 @hypothesis.given(
     index_name=hypothesis.strategies.text(),
     query=hypothesis.strategies.text(),
     page=hypothesis.strategies.integers(min_value=1),
     hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
     facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
 )
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
     """Test the algolia_search function with fuzz testing."""
+    with patch(
+        "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
+    ):
         mock_request = Mock()
         mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
         mock_request.method = "POST"
         mock_request.body = json.dumps(
             {
                 "facetFilters": facet_filters,
                 "hitsPerPage": hits_per_page,
                 "indexName": index_name,
                 "page": page,
                 "query": query,
             }
         )
         response = algolia_search(mock_request)
         assert response.status_code != requests.codes.server_error
```

This would test only the `algolia_search` function's input validation and error handling, not the actual search functionality.
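The isolation pattern suggested above can be sketched in a self-contained way. Note the names below (`search_view`, the `backend` namespace, `MOCKED_RESULTS`) are stand-ins for the real `apps.core.api.algolia` module and its fixtures, used only to illustrate how `patch` keeps a view test from hitting the search service:

```python
# Hypothetical stand-ins for the real view and search backend; only the
# mocking pattern itself is the point here.
import json
import types
from unittest.mock import MagicMock, patch

backend = types.SimpleNamespace(get_search_results=lambda **kw: {"hits": []})

def search_view(request):
    """Parse the request body and delegate to the search backend."""
    try:
        params = json.loads(request.body)
    except json.JSONDecodeError:
        return {"status": 400}
    return {"status": 200, "results": backend.get_search_results(**params)}

MOCKED_RESULTS = {"hits": [{"name": "Nest"}], "nbPages": 1}

def test_search_view_is_isolated():
    # patch.object swaps the backend call for the duration of the test,
    # so no real search service is ever contacted.
    with patch.object(backend, "get_search_results", return_value=MOCKED_RESULTS) as mock_get:
        request = MagicMock()
        request.body = json.dumps({"query": "owasp"})
        response = search_view(request)
    mock_get.assert_called_once_with(query="owasp")
    assert response == {"status": 200, "results": MOCKED_RESULTS}

test_search_view_is_isolated()
```

With the backend patched, the fuzzed inputs exercise only parsing and validation, which is exactly the unit-test boundary the comment argues for.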
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`backend/poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `backend/pyproject.toml` (1 hunks)
- `backend/tests/core/api/algolia_test.py` (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1)
`algolia_search` (44-90)
🪛 GitHub Actions: Run CI/CD
backend/pyproject.toml
[warning] 1-1: The top-level linter settings are deprecated in favour of their counterparts in the lint section. Please update the following options in backend/pyproject.toml: - 'per-file-ignores' -> 'lint.per-file-ignores'
[warning] 1-1: one-blank-line-before-class (D203) and no-blank-line-before-class (D211) are incompatible. Ignoring one-blank-line-before-class.
[warning] 1-1: multi-line-summary-first-line (D212) and multi-line-summary-second-line (D213) are incompatible. Ignoring multi-line-summary-second-line.
backend/tests/core/api/algolia_test.py
[error] 1-1: Ruff: Found 1 error (1 fixed, 0 remaining).
🔇 Additional comments (2)
backend/pyproject.toml (1)
53-53: Good addition of hypothesis for fuzz testing.

Adding the Hypothesis library as a development dependency is appropriate for implementing fuzz testing. The version constraint (`^6.129.4`) follows the project's versioning pattern.

backend/tests/core/api/algolia_test.py (1)

6-6: Appropriate import for the hypothesis library.

This import is necessary for the fuzz testing implementation below.
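For context, property-based fuzz testing boils down to the loop below; Hypothesis layers input shrinking, reproducible seeds, and richer strategies on top of this idea. This is a stdlib-only sketch with a toy function (`clamp_page` is hypothetical, not project code):

```python
import random

def clamp_page(page):
    """Toy function under test: coerce any integer into a valid page number."""
    return max(1, page)

def fuzz_clamp_page(trials=200, seed=42):
    """Throw many random integers at clamp_page and check a single invariant."""
    rng = random.Random(seed)
    for _ in range(trials):
        page = rng.randint(-10**6, 10**6)
        # Invariant: the result is always a usable (>= 1) page number.
        assert clamp_page(page) >= 1
    return trials

fuzz_clamp_page()
```

Hypothesis's `@given` replaces the hand-rolled loop and, on failure, shrinks the input to a minimal counterexample instead of reporting a huge random value.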
Actionable comments posted: 0
🧹 Nitpick comments (3)
backend/tests/core/api/algolia_test.py (3)
143-167: Consider additional strategies for more comprehensive testing

While the current implementation is solid, you might consider enhancing it with additional strategies to test more edge cases:
- Test with empty strings and very long strings
- Test with special characters that might cause issues in JSON processing
- Consider adding a separate test case for malformed JSON
Here's how you could enhance the test:
```diff
 @hypothesis.given(
     index_name=hypothesis.strategies.text(),
     query=hypothesis.strategies.text(),
     page=hypothesis.strategies.integers(min_value=1),
     hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
     facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
 )
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
     """Test the algolia_search function with fuzz testing."""
     mock_request = Mock()
     mock_request.method = "POST"
     mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
     mock_request.body = json.dumps(
         {
             "facetFilters": facet_filters,
             "hitsPerPage": hits_per_page,
             "indexName": index_name,
             "page": page,
             "query": query,
         }
     )
     response = algolia_search(mock_request)
     assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+
+@hypothesis.given(
+    malformed_json=hypothesis.strategies.text(),
+)
+def test_fuzz_algolia_search_malformed_json(self, malformed_json):
+    """Test the algolia_search function with malformed JSON."""
+    mock_request = Mock()
+    mock_request.method = "POST"
+    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
+    mock_request.body = malformed_json
+
+    response = algolia_search(mock_request)
+
+    # Even with malformed JSON, we should never see a server error
+    assert response.status_code != requests.codes.server_error
```
143-167: Consider adding explicit patching of external dependencies

The current test might make actual API calls if the validation passes. Consider patching external dependencies to ensure the test remains isolated and fast.
Here's how you could modify the test:
```diff
 @hypothesis.given(
     index_name=hypothesis.strategies.text(),
     query=hypothesis.strategies.text(),
     page=hypothesis.strategies.integers(min_value=1),
     hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
     facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
 )
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
     """Test the algolia_search function with fuzz testing."""
+    with patch("apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS):
         mock_request = Mock()
         mock_request.method = "POST"
         mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
         mock_request.body = json.dumps(
             {
                 "facetFilters": facet_filters,
                 "hitsPerPage": hits_per_page,
                 "indexName": index_name,
                 "page": page,
                 "query": query,
             }
         )
         response = algolia_search(mock_request)
         assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
```
143-167: Consider adding documentation about fuzz testing

To help other developers understand the purpose and benefits of fuzz testing, consider adding more detailed documentation in the test docstring.
```diff
 @hypothesis.given(
     index_name=hypothesis.strategies.text(),
     query=hypothesis.strategies.text(),
     page=hypothesis.strategies.integers(min_value=1),
     hits_per_page=hypothesis.strategies.integers(min_value=1, max_value=1000),
     facet_filters=hypothesis.strategies.lists(hypothesis.strategies.text()),
 )
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
-    """Test the algolia_search function with fuzz testing."""
+    """Test the algolia_search function with fuzz testing.
+
+    This test uses the Hypothesis library to generate random inputs for the algolia_search
+    function. It verifies that the function does not return a server error (500) regardless
+    of the input provided. For valid inputs, it should return 200 OK, and for invalid inputs,
+    it should return 400 Bad Request.
+
+    The test generates:
+    - Random text for index_name and query
+    - Positive integers for page
+    - Integers between 1 and 1000 for hits_per_page
+    - Lists of text strings for facet_filters
+    """
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `backend/tests/core/api/algolia_test.py` (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1)
`algolia_search` (44-90)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: CodeQL (python)
- GitHub Check: CodeQL (javascript-typescript)
- GitHub Check: Run backend tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run frontend unit tests
🔇 Additional comments (5)
backend/tests/core/api/algolia_test.py (5)
4-4: Import for hypothesis added correctly

Good job importing the `hypothesis` library for fuzz testing. This is a powerful tool for property-based testing that can help identify edge cases and improve robustness.
143-149: Good use of hypothesis strategies

The strategies chosen for each parameter are appropriate:

- Text strategies for `index_name`, `query`, and items in `facet_filters`
- Integer strategy with sensible constraints for `page` (min_value=1)
- Integer strategy with appropriate bounds for `hits_per_page` (1-1000)
- List strategy for `facet_filters`

These cover a broad range of inputs that could potentially trigger edge cases.
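A stdlib approximation of that input space can make the bounds concrete. The helper names below (`random_text`, `random_search_body`) are hypothetical; in the real test, Hypothesis strategies replace the `random` calls:

```python
import json
import random
import string

def random_text(rng, max_len=20):
    """Rough analogue of strategies.text(): printable characters, length 0..max_len."""
    return "".join(rng.choice(string.printable) for _ in range(rng.randint(0, max_len)))

def random_search_body(rng):
    """Build one fuzzed request body matching the bounds described above."""
    return json.dumps({
        "indexName": random_text(rng),
        "query": random_text(rng),
        "page": rng.randint(1, 10**6),        # page: min_value=1
        "hitsPerPage": rng.randint(1, 1000),  # hits_per_page: 1..1000
        "facetFilters": [random_text(rng) for _ in range(rng.randint(0, 5))],
    })

rng = random.Random(0)
body = json.loads(random_search_body(rng))
assert body["page"] >= 1
assert 1 <= body["hitsPerPage"] <= 1000
```

Constraining `page` and `hitsPerPage` this way keeps the fuzzer inside the range the view accepts, so failures point at real edge cases rather than rejected inputs.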
150-154: Correctly implemented mock request with IP address

Great job including the `HTTP_X_FORWARDED_FOR` header with the client IP address. This addresses a previous review comment and ensures that the test properly handles the case when `index_name` is "chapters", which requires the IP address for the cache key.
155-163: Well-structured JSON for the request body

The request body is correctly structured with all the parameters needed for the Algolia search function. Using `json.dumps()` ensures proper JSON formatting.
165-167: Improved assertion for response status code

The assertion has been improved to check that the response status code is either `requests.codes.ok` (200) or `requests.codes.bad_request` (400), addressing the previous review comment. This ensures that the function handles both valid and invalid inputs correctly without returning a server error.
Actionable comments posted: 3
🧹 Nitpick comments (3)
backend/tests/core/api/algolia_test.py (1)
143-167: Good implementation of fuzz testing for algolia_search, but consider adding constraints to strategies

The fuzz testing implementation looks good and addresses the previous review comments with the addition of the IP address and proper assertions. However, the strategies for `page` and `hits_per_page` don't have any constraints, which might generate values that will always fail validation.

Consider updating the strategies to match the validation constraints in the production code:

```diff
 @given(
     index_name=strategies.text(),
     query=strategies.text(),
-    page=strategies.integers(),
+    page=strategies.integers(min_value=1),
-    hits_per_page=strategies.integers(),
+    hits_per_page=strategies.integers(min_value=1, max_value=1000),
     facet_filters=strategies.lists(strategies.text()),
 )
```

This will make the test more focused on finding edge cases within valid input ranges rather than testing inputs that would be rejected by validation.
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
1-41: Fix linting issues with whitespace and newlineThe file has several whitespace issues flagged by the static analysis tool.
Remove trailing whitespace from blank lines and add a newline at the end of the file:

```
# Remove whitespace from blank lines on lines 24, 26, 28, 37, 39, 41
# Add newline at the end of the file
```

🧰 Tools
🪛 Ruff (0.8.2)
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
1-39: Fix whitespace issues in blank linesThere are whitespace issues in blank lines that should be fixed.
Remove trailing whitespace from blank lines 34 and 37 as flagged by the static analysis tool.
🧰 Tools
🪛 Ruff (0.8.2)
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `backend/tests/core/api/algolia_test.py` (2 hunks)
- `backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py` (1 hunks)
- `backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py` (1 hunks)
🧰 Additional context used
🧬 Code Definitions (2)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1)
`algolia_search` (44-90)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
`test_handler_responses` (32-40)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
🔇 Additional comments (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
1-41: Details
✅ Verification successful
Directory name has a typo
The path `backend/tests/slack/events/fuzz_tests/user_joind_channel` contains a typo in `joind`, which should be `joined`.

Consider renaming this directory to fix the typo and ensure path consistency with the rest of the codebase.
🏁 Script executed:

```bash
#!/bin/bash
# Check if the correct directory exists
fd "user_joined_channel" --type d

# Check if the typo exists elsewhere in the codebase
fd "user_joind_channel" --type d
```

Length of output: 172
Directory Name Typo Resolved
The directory in question is now correctly named. The previous typo (`user_joind_channel`) has been fixed to `user_joined_channel` in the `fuzz_tests` directory.
- The file path now reads:
`backend/tests/slack/events/fuzz_tests/user_joined_channel/gsoc_test.py`
🛠️ Refactor suggestion
Strengthen handler_responses test with mock verifications
The current test executes the handler but doesn't verify that it behaves correctly based on the events_enabled value.
Update the test to verify the expected behavior:
```diff
 @given(
     events_enabled=st.booleans(),
 )
 def test_handler_responses(self, events_enabled):
     settings.SLACK_EVENTS_ENABLED = events_enabled
     mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
     mock_slack_client = MagicMock()
     mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
     gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-    assert True
+    # Verify that the Slack client was used appropriately based on events_enabled
+    if events_enabled:
+        mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
+        mock_slack_client.chat_postMessage.assert_called()
+    else:
+        # If events are disabled, no Slack client operations should be performed
+        mock_slack_client.conversations_open.assert_not_called()
+        mock_slack_client.chat_postMessage.assert_not_called()
```
🛠️ Refactor suggestion
Add constraints to fuzz test parameters and strengthen assertions
The current test doesn't place constraints on generated integers and uses a trivial assertion.
Update the test with improved constraints and assertions:
```diff
 @given(
     events_enabled=st.booleans(),
-    project_count=st.integers(),
-    issue_count=st.integers(),
+    project_count=st.integers(min_value=0),
+    issue_count=st.integers(min_value=0),
 )
 @patch("apps.owasp.models.project.Project.active_projects_count")
 @patch("apps.github.models.issue.Issue.open_issues_count")
 def test_handler_responses(
     self,
     mock_open_issues_count,
     mock_active_projects_count,
     events_enabled,
     project_count,
     issue_count,
 ):
     settings.SLACK_EVENTS_ENABLED = events_enabled
     mock_active_projects_count.return_value = project_count
     mock_open_issues_count.return_value = issue_count
     mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
     mock_slack_client = MagicMock()
     mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
     contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-    assert True
+    # Verify the behavior based on whether events are enabled
+    if events_enabled:
+        mock_slack_client.conversations_open.assert_called_once_with(user="U123456")
+        mock_slack_client.chat_postMessage.assert_called()
+        # You could also verify that the message contains the project_count and issue_count
+    else:
+        mock_slack_client.conversations_open.assert_not_called()
+        mock_slack_client.chat_postMessage.assert_not_called()
```

Using non-negative integers for counts makes the test more realistic, as negative counts wouldn't make sense in this context.
Actionable comments posted: 2
♻️ Duplicate comments (1)
backend/tests/core/api/algolia_test.py (1)
143-149: 🛠️ Refactor suggestion

Specify appropriate boundaries for integer strategies

The `page` and `hits_per_page` strategies should have appropriate bounds to align with real-world constraints and avoid generating invalid inputs that would make the test unreliable.

```diff
 @given(
     index_name=strategies.text(),
     query=strategies.text(),
-    page=strategies.integers(),
+    page=strategies.integers(min_value=1),
-    hits_per_page=strategies.integers(),
+    hits_per_page=strategies.integers(min_value=1, max_value=1000),
     facet_filters=strategies.lists(strategies.text()),
 )
```
🧹 Nitpick comments (1)
backend/tests/core/api/algolia_test.py (1)
150-167: The fuzz test implementation is good, but could benefit from a small enhancement

Great job implementing this fuzz test! The inclusion of the IP address and assertion for both success and bad request codes matches the expected behavior of the `algolia_search` function.

Consider adding an additional test that specifically tests malformed JSON handling:

```python
@given(invalid_json=strategies.text())
def test_fuzz_algolia_search_malformed_json(self, invalid_json):
    """Test the algolia_search function with malformed JSON."""
    mock_request = Mock()
    mock_request.method = "POST"
    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
    mock_request.body = invalid_json

    response = algolia_search(mock_request)

    assert response.status_code == requests.codes.server_error
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `backend/tests/core/api/algolia_test.py` (2 hunks)
- `backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py` (1 hunks)
- `backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py` (1 hunks)
🧰 Additional context used
🧬 Code Definitions (3)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
`test_handler_responses` (20-38)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
`test_handler_responses` (32-40)
backend/tests/core/api/algolia_test.py (1)
backend/apps/core/api/algolia.py (1)
`algolia_search` (44-90)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
🔇 Additional comments (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
13-38: Details
❓ Verification inconclusive
Add meaningful assertions to validate the contribute handler behavior
The current test with `assert True` only verifies the code doesn't throw an exception, but doesn't validate that `contribute_handler` actually performs the expected actions based on the input parameters.

```diff
 def test_handler_responses(
     self,
     mock_open_issues_count,
     mock_active_projects_count,
     events_enabled,
     project_count,
     issue_count,
 ):
     settings.SLACK_EVENTS_ENABLED = events_enabled
     mock_active_projects_count.return_value = project_count
     mock_open_issues_count.return_value = issue_count
     mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
     mock_slack_client = MagicMock()
     mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
     contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-    assert True
+    # When events are enabled, the client should attempt to open a conversation
+    if events_enabled:
+        mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+
+        # Verify the correct count values were accessed
+        mock_active_projects_count.assert_called_once()
+        mock_open_issues_count.assert_called_once()
+    else:
+        # When events are disabled, no client interaction should occur
+        mock_slack_client.conversations_open.assert_not_called()
```

Additionally, fix the formatting issues highlighted by the static analyzer:
🏁 Script executed:

```bash
#!/bin/bash
# Replace trailing whitespace
sed -i 's/[[:space:]]*$//' backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py
```

Length of output: 104
Test Enhancements & Formatting Fixes: Please Verify
- The trailing whitespace issues in `backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py` have been cleaned up.
- The test should now include meaningful assertions to verify that the `contribute_handler` behaves as expected:
  - When `settings.SLACK_EVENTS_ENABLED` is true, the Slack client's `conversations_open` must be called with the correct user, and both `active_projects_count` and `open_issues_count` should be accessed exactly once.
  - When `settings.SLACK_EVENTS_ENABLED` is false, no call to `conversations_open` should occur.
- Please confirm that the following diff (replacing `assert True`) has been incorporated into the test:

```diff
 def test_handler_responses(
     self,
     mock_open_issues_count,
     mock_active_projects_count,
     events_enabled,
     project_count,
     issue_count,
 ):
     settings.SLACK_EVENTS_ENABLED = events_enabled
     mock_active_projects_count.return_value = project_count
     mock_open_issues_count.return_value = issue_count
     mock_slack_event = {"user": "U123456", "channel": OWASP_CONTRIBUTE_CHANNEL_ID}
     mock_slack_client = MagicMock()
     mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
     contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-    assert True
+    if events_enabled:
+        mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+        mock_active_projects_count.assert_called_once()
+        mock_open_issues_count.assert_called_once()
+    else:
+        mock_slack_client.conversations_open.assert_not_called()
```

Action: Please verify manually that the meaningful assertions have been applied alongside the formatting fixes to confidently validate the handler's behavior.
🧰 Tools
🪛 Ruff (0.8.2)
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
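The conditional-assertion pattern recommended above can be exercised in isolation with a stand-in handler. This is a minimal sketch — the `handler` function below is hypothetical, not the project's actual `contribute_handler`:

```python
from unittest.mock import MagicMock

def handler(event, client, enabled):
    # Hypothetical stand-in for contribute_handler: opens a DM only when
    # Slack events are enabled, mirroring the SLACK_EVENTS_ENABLED branch.
    if enabled:
        client.conversations_open(user=event["user"])

# Enabled path: the mock records the call so it can be asserted on.
enabled_client = MagicMock()
handler({"user": "U123456"}, enabled_client, enabled=True)
enabled_client.conversations_open.assert_called_once_with(user="U123456")

# Disabled path: no conversation should be opened at all.
disabled_client = MagicMock()
handler({"user": "U123456"}, disabled_client, enabled=False)
disabled_client.conversations_open.assert_not_called()
```

Because `MagicMock` records every call, the same two assertions transfer directly to the real test once the handler is patched in.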
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
💡 Verification agent
❓ Verification inconclusive
Verify handler behavior with concrete assertions
Similar to the previous test method, this test only checks that the code runs without exceptions, but doesn't validate that the handler behaves as expected when events are enabled or disabled.
def test_handler_responses(self, events_enabled):
settings.SLACK_EVENTS_ENABLED = events_enabled
mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
mock_slack_client = MagicMock()
mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
- assert True
+ # When events are enabled, the client should attempt to open a conversation
+ if events_enabled:
+ mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+ else:
+        mock_slack_client.conversations_open.assert_not_called()

Additionally, fix the formatting issues highlighted by the static analyzer:
🏁 Script executed:
#!/bin/bash
# Replace trailing whitespace and ensure a newline at end of file
sed -i 's/[[:space:]]*$//' backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
echo '' >> backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py

Length of output: 108
Action Required: Add concrete assertions for Slack handler behavior
The current test in
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
merely ensures that no exceptions occur instead of validating the actual behavior. Please update the test as follows:
- When events are enabled: assert that `mock_slack_client.conversations_open` is called once with the proper `user` attribute from `mock_slack_event`.
- When events are disabled: assert that `mock_slack_client.conversations_open` is never called.
For example, consider replacing:
gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
- assert True
+ if events_enabled:
+ mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
+ else:
+    mock_slack_client.conversations_open.assert_not_called()

Additionally, the formatting issues reported (trailing whitespace removal and ensuring a trailing newline) have been addressed by the static analyzer script. Please verify manually that these formatting fixes have been applied correctly to the file.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@given(
    events_enabled=st.booleans(),
)
def test_handler_responses(self, events_enabled):
    settings.SLACK_EVENTS_ENABLED = events_enabled
    mock_slack_event = {"user": "U123456", "channel": OWASP_GSOC_CHANNEL_ID}
    mock_slack_client = MagicMock()
    mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
    gsoc_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
    # When events are enabled, the client should attempt to open a conversation
    if events_enabled:
        mock_slack_client.conversations_open.assert_called_once_with(user=mock_slack_event["user"])
    else:
        mock_slack_client.conversations_open.assert_not_called()
🧰 Tools
🪛 Ruff (0.8.2)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
Actionable comments posted: 4
🧹 Nitpick comments (7)
backend/tests/core/api/algolia_test.py (1)
167-167: Add more specific assertions for different input scenarios.

The current assertion only checks that the response status code is either OK or BAD_REQUEST, which verifies the handler doesn't crash. However, it would be more valuable to verify specific behavior based on input patterns.

- assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+ # For valid inputs (positive page and reasonable hits_per_page), expect successful response
+ if page > 0 and 1 <= hits_per_page <= 1000:
+     # Some inputs might still be invalid based on other validations
+     assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+ else:
+     # For invalid inputs, expect bad request
+     assert response.status_code == requests.codes.bad_request
+
+ # Ensure we never get server errors
+ assert response.status_code != requests.codes.server_error

backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (3)
1-39: Directory name contains a typo.

The directory name `user_joind_channel` contains a typo and should be `user_joined_channel` to maintain consistency with the actual event name.

The file is in a directory with a typo: `user_joind_channel` should be `user_joined_channel`.
🪛 Ruff (0.8.2)
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
34-37: Fix whitespace in blank lines.

There are blank lines containing whitespace, which should be removed for consistency.

 mock_slack_client = MagicMock()
 mock_slack_client.conversations_open.return_value = {"channel": {"id": "C123456"}}
-
+
 contribute_handler(event=mock_slack_event, client=mock_slack_client, ack=MagicMock())
-
+
🪛 Ruff (0.8.2)
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
13-17: Add constraints to integer strategies.

The current implementation uses unconstrained integers for project and issue counts, which could lead to unrealistic test scenarios and potential overflow issues.

 @given(
     events_enabled=st.booleans(),
-    project_count=st.integers(),
-    issue_count=st.integers(),
+    project_count=st.integers(min_value=0, max_value=10000),
+    issue_count=st.integers(min_value=0, max_value=10000),
 )

This change provides more realistic bounds for these values while still testing a wide range of scenarios.
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (3)
1-41: Directory name contains a typo.

The directory name `user_joind_channel` contains a typo and should be `user_joined_channel` to maintain consistency with the actual event name.

The file is in a directory with a typo: `user_joind_channel` should be `user_joined_channel`.
🪛 Ruff (0.8.2)
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
41-41: Add newline at end of file.

The file is missing a newline at the end, which is a common convention for source files.
Add a newline at the end of the file.
🧰 Tools
🪛 Ruff (0.8.2)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
24-40: Fix whitespace in blank lines.

There are multiple blank lines containing whitespace throughout the file, which should be removed for consistency.
Remove whitespace from all blank lines in the file (lines 24, 26, 28, 37, 39, and 41).
🧰 Tools
🪛 Ruff (0.8.2)
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`backend/poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
backend/pyproject.toml (1 hunks)
backend/tests/core/api/algolia_test.py (2 hunks)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1 hunks)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/pyproject.toml
🧰 Additional context used
🧬 Code Definitions (2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1) (1)
test_handler_responses(20-38)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py (1)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py (1) (1)
test_handler_responses(32-40)
🪛 Ruff (0.8.2)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
28-28: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
39-39: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: Blank line contains whitespace
Remove whitespace from blank line
(W293)
41-41: No newline at end of file
Add trailing newline
(W292)
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py
34-34: Blank line contains whitespace
Remove whitespace from blank line
(W293)
37-37: Blank line contains whitespace
Remove whitespace from blank line
(W293)
🛠️ Refactor suggestion
Constrain strategy value ranges for page and hits_per_page.
Currently, the page and hits_per_page strategies allow any integer value, including negatives and zero, which are likely invalid inputs in the production code. The existing unit tests in this file (lines 107-110) show that validation is needed for these parameters.
- page=strategies.integers(),
- hits_per_page=strategies.integers(),
+ page=strategies.integers(min_value=1),
+ hits_per_page=strategies.integers(min_value=1, max_value=1000),This change aligns with the validation being performed in the implementation and matches the constraints in the parametrized tests above.
📝 Committable suggestion
 query=strategies.text(),
-page=strategies.integers(),
-hits_per_page=strategies.integers(),
+page=strategies.integers(min_value=1),
+hits_per_page=strategies.integers(min_value=1, max_value=1000),
 facet_filters=strategies.lists(strategies.text()),
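To see why the bounds matter independently of Hypothesis, the pagination constraint the bounded strategies encode can be written as a plain predicate. This is an illustrative sketch — the function name and exact limits mirror the suggested bounds, not the production validator:

```python
def is_valid_pagination(page: int, hits_per_page: int) -> bool:
    # Illustrative predicate mirroring the suggested strategy bounds:
    # pages are 1-based and page sizes are capped at 1000.
    return page >= 1 and 1 <= hits_per_page <= 1000

assert is_valid_pagination(1, 25)
assert not is_valid_pagination(0, 25)    # page 0 is out of range
assert not is_valid_pagination(3, 5000)  # oversized hits_per_page
```

With unconstrained `strategies.integers()`, Hypothesis would spend most of its budget on inputs this predicate rejects; the `min_value`/`max_value` bounds keep the fuzzer inside the interesting region.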
backend/tests/slack/events/fuzz_tests/user_joind_channel/contribute_test.py
🛠️ Refactor suggestion
Simplify the dynamic import approach in test_check_gsoc_handler.
The current implementation uses a complex dynamic import approach with a fallback lambda. This makes the test harder to understand and may hide issues with the actual check_gsoc_handler function.
- def test_check_gsoc_handler(self, channel_id):
- gsoc_module = __import__(
- "apps.slack.events.member_joined_channel.gsoc",
- fromlist=["gsoc_handler"],
- )
- check_gsoc_handler = getattr(
- gsoc_module,
- "check_gsoc_handler",
- lambda x: x.get("channel") == OWASP_GSOC_CHANNEL_ID,
- )
-
- check_gsoc_handler({"channel": channel_id})
-
- assert True
+ def test_check_gsoc_handler(self, channel_id):
+ """Test that check_gsoc_handler correctly identifies the GSOC channel."""
+ from apps.slack.events.member_joined_channel.gsoc import check_gsoc_handler
+
+ result = check_gsoc_handler({"channel": channel_id})
+
+ # Verify the function returns True only for the GSOC channel
+ expected = (channel_id == OWASP_GSOC_CHANNEL_ID)
+    assert result == expected

This approach directly imports the function, tests it with the generated channel_id, and verifies it returns the expected result based on whether the channel matches the GSOC channel ID.
📝 Committable suggestion
def test_check_gsoc_handler(self, channel_id):
    """Test that check_gsoc_handler correctly identifies the GSOC channel."""
    from apps.slack.events.member_joined_channel.gsoc import check_gsoc_handler

    result = check_gsoc_handler({"channel": channel_id})

    # Verify the function returns True only for the GSOC channel
    expected = (channel_id == OWASP_GSOC_CHANNEL_ID)
    assert result == expected
🧰 Tools
🪛 Ruff (0.8.2)
24-24: Blank line contains whitespace
Remove whitespace from blank line
(W293)
26-26: Blank line contains whitespace
Remove whitespace from blank line
(W293)
backend/tests/slack/events/fuzz_tests/user_joind_channel/gsoc_test.py
@arkid15r Could you review the current changes before proceeding with the GraphQL endpoints?

is someone still working on this?

Yes, I was just discussing with the mentor. I will add the GraphQL tests tomorrow after resolving the new big conflicts :)
Actionable comments posted: 1
🧹 Nitpick comments (5)
backend/tests/apps/core/api/algolia_test.py (5)
143-149: Refine hypothesis strategies to use more realistic bounds.

The current strategies for generating test data are too broad and might produce extreme values that aren't representative of real API usage. This could lead to unnecessary test failures or miss important edge cases.

 @given(
-    index_name=strategies.text(),
-    query=strategies.text(),
-    page=strategies.integers(),
-    hits_per_page=strategies.integers(),
+    index_name=strategies.text(min_size=1, max_size=100),
+    query=strategies.text(max_size=200),
+    page=strategies.integers(min_value=0, max_value=1000),
+    hits_per_page=strategies.integers(min_value=1, max_value=1000),
     facet_filters=strategies.lists(strategies.text()),
 )
165-167: Assert additional properties for better test coverage.

The current assertion only checks the status code, which is a good start but doesn't fully validate the behavior of the function. For a more comprehensive test, we should also verify response content for both successful and error cases.

 response = algolia_search(mock_request)
 assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
+# Additional assertions based on status code
+response_data = json.loads(response.content)
+if response.status_code == requests.codes.ok:
+    assert "hits" in response_data
+    assert "nbPages" in response_data
+elif response.status_code == requests.codes.bad_request:
+    assert "error" in response_data
143-167: Consider adding assume statements to help Hypothesis generate valid inputs.

Hypothesis works more efficiently when it can avoid generating inputs that will always fail in uninteresting ways. Using `hypothesis.assume()` can help focus the test on more meaningful scenarios.

 @given(
     index_name=strategies.text(),
     query=strategies.text(),
     page=strategies.integers(),
     hits_per_page=strategies.integers(),
     facet_filters=strategies.lists(strategies.text()),
 )
 def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
     """Test the algolia_search function with fuzz testing."""
+    # Skip tests with inputs that will always fail in the same way
+    from hypothesis import assume
+    assume(isinstance(index_name, str) and len(index_name) > 0)
+    assume(isinstance(query, str))
+    assume(isinstance(page, int) and page >= 0)
+    assume(isinstance(hits_per_page, int) and hits_per_page > 0)
+    assume(isinstance(facet_filters, list))
+
     with patch(
         "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
     ):
         mock_request = Mock()
         mock_request.method = "POST"
         mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
         mock_request.body = json.dumps(
             {
                 "facetFilters": facet_filters,
                 "hitsPerPage": hits_per_page,
                 "indexName": index_name,
                 "page": page,
                 "query": query,
             }
         )
         response = algolia_search(mock_request)
         assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
143-167: Consider testing JSON serialization errors.

The current test doesn't account for potential JSON serialization errors when complex objects are passed as parameters. Adding a test case for this scenario would improve test coverage.
Since this would require a separate test case that isn't directly part of the fuzz testing framework, here's a suggestion for an additional test method:
def test_algolia_search_invalid_json(self):
    """Test the scenario where the request body contains invalid JSON."""
    mock_request = Mock()
    mock_request.method = "POST"
    mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
    # Set body to invalid JSON
    mock_request.body = "{"
    response = algolia_search(mock_request)
    response_data = json.loads(response.content)
    assert response.status_code == requests.codes.bad_request
    assert "error" in response_data
143-167: Consider grouping test scenarios based on expected response.

The current implementation treats all inputs as potentially valid or invalid. A more structured approach would be to separate test cases based on expected outcome, which would allow for more specific assertions.
This could be implemented by creating separate fuzz tests for valid and invalid scenarios:
- A test for valid inputs that assumes or enforces valid parameter values and asserts an OK response.
- A test for invalid inputs that introduces specific validation failures and asserts a BAD_REQUEST response.
This approach would provide more focused testing and clearer failure messages, making it easier to maintain and debug the tests.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`backend/poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (1)
backend/tests/apps/core/api/algolia_test.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: CodeQL (javascript-typescript)
- GitHub Check: Run frontend e2e tests
Missing mock for get_search_results function.
Unlike the other test methods in this class, this fuzz test doesn't mock the get_search_results function. Without mocking, the test could make actual external API calls, which would make the test less reliable and potentially slow.
@given(
index_name=strategies.text(),
query=strategies.text(),
page=strategies.integers(),
hits_per_page=strategies.integers(),
facet_filters=strategies.lists(strategies.text()),
)
def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
"""Test the algolia_search function with fuzz testing."""
+ with patch(
+ "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
+ ):
mock_request = Mock()
mock_request.method = "POST"
mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
mock_request.body = json.dumps(
{
"facetFilters": facet_filters,
"hitsPerPage": hits_per_page,
"indexName": index_name,
"page": page,
"query": query,
}
)
response = algolia_search(mock_request)
assert response.status_code in [requests.codes.ok, requests.codes.bad_request]

📝 Committable suggestion
@given(
    index_name=strategies.text(),
    query=strategies.text(),
    page=strategies.integers(),
    hits_per_page=strategies.integers(),
    facet_filters=strategies.lists(strategies.text()),
)
def test_fuzz_algolia_search(self, index_name, query, page, hits_per_page, facet_filters):
    """Test the algolia_search function with fuzz testing."""
    with patch(
        "apps.core.api.algolia.get_search_results", return_value=MOCKED_SEARCH_RESULTS
    ):
        mock_request = Mock()
        mock_request.method = "POST"
        mock_request.META = {"HTTP_X_FORWARDED_FOR": CLIENT_IP_ADDRESS}
        mock_request.body = json.dumps(
            {
                "facetFilters": facet_filters,
                "hitsPerPage": hits_per_page,
                "indexName": index_name,
                "page": page,
                "query": query,
            }
        )
        response = algolia_search(mock_request)
        assert response.status_code in [requests.codes.ok, requests.codes.bad_request]
…keFile and install model_bakery for creating randomized data for all models.
Actionable comments posted: 0
♻️ Duplicate comments (1)
backend/docker/entrypoint.fuzz.sh (1)
10-11: Add CSRF token validation before creating config file.

The script fetches the CSRF token but doesn't validate it before writing to the config file. If the `/csrf` endpoint returns unexpected JSON or an empty token, the config will contain an invalid/empty token, causing graphqler to fail with obscure errors.

🔎 Proposed fix

 echo "Fetching CSRF token..."
-CSRF_TOKEN=$(curl -fsSL "$BASE_URL/csrf" | jq -r '.csrftoken')
+CSRF_TOKEN=$(curl -fsSL "$BASE_URL/csrf" | jq -r '.csrftoken' 2>/dev/null || echo "")
+
+if [ -z "$CSRF_TOKEN" ] || [ "$CSRF_TOKEN" = "null" ]; then
+  echo "Error: Failed to fetch valid CSRF token from backend." >&2
+  exit 1
+fi
🧹 Nitpick comments (5)
.gitignore (1)
49-51: Duplicate ignore entry.

Line 51 duplicates the `.cursor/rules/snyk_rules.mdc` entry already defined on line 6. Consider removing the duplicate.

🔎 Proposed fix

 # Snyk Security Extension - AI Rules (auto-generated)
-.cursor/rules/snyk_rules.mdc

docker-compose/fuzz.yaml (1)
44-49: Consider adding `--clean` flag or error handling for `pg_restore`.

The `pg_restore` command may fail if the backend has already applied migrations, as the schema/tables would already exist. Consider adding the `--clean` flag to drop objects before recreating them, or wrap with error handling to allow partial restores.

🔎 Proposed fix

 command: >
   sh -c '
     echo "Loading data from dump..." &&
-    pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB /data/nest.dump &&
+    pg_restore -h db -U $$POSTGRES_USER -d $$POSTGRES_DB --clean --if-exists /data/nest.dump || true &&
     echo "Data loading completed."
   '

Note: Using `|| true` allows the job to succeed even if some objects can't be dropped. Remove it if strict restore is required.

docker-compose/e2e.yaml (1)
28-49: Same `pg_restore` concern as in fuzz.yaml.

See the recommendation for `docker-compose/fuzz.yaml` regarding the `--clean` flag or error handling for the `pg_restore` command.

.github/workflows/run-ci-cd.yaml (2)
327-332: Inconsistent action version pins.

The fuzz job uses different action SHAs compared to other jobs:

- `actions/checkout` on line 329 uses `@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3` while others use `@8e8c483db84b4bee98b60c0593521ed34d9990e8`
- `docker/setup-buildx-action` on line 332 uses `@e468171a9de216ec08956ac3ada2f0791b6bd435` while others use `@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f`

Consider using consistent version pins across all jobs to ensure reproducible behavior.

🔎 Proposed fix

 steps:
   - name: Check out repository
-    uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
+    uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
   - name: Set up Docker buildx
-    uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
+    uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f
363-367: Add error handling for `pg_restore` in CI.

Same as in the docker-compose files, `pg_restore` may fail if migrations have already created the schema. Consider adding `--clean --if-exists` flags or wrapping with `|| true` if partial restore is acceptable.

🔎 Proposed fix

 - name: Load Postgres data
   env:
     PGPASSWORD: nest_user_fuzz_password
   run: |
-    pg_restore -h localhost -U nest_user_fuzz -d nest_db_fuzz < backend/data/nest.dump
+    pg_restore -h localhost -U nest_user_fuzz -d nest_db_fuzz --clean --if-exists backend/data/nest.dump || true
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
.github/workflows/run-ci-cd.yaml
.gitignore
backend/Makefile
backend/docker/entrypoint.fuzz.sh
docker-compose/e2e.yaml
docker-compose/fuzz.yaml
frontend/Makefile
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/Makefile
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-08-31T13:48:09.830Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:0-0
Timestamp: 2025-08-31T13:48:09.830Z
Learning: In the OWASP/Nest project, Django's GraphQL endpoint accepts CSRF tokens via 'x-csrftoken' header (lowercase) without requiring a Referer header, working fine in their configuration for GraphQL codegen introspection.
Applied to files:
backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-08-30T12:52:32.214Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:52-52
Timestamp: 2025-08-30T12:52:32.214Z
Learning: When using GraphQL Codegen with async configuration that needs to fetch data (like CSRF tokens), export the async function directly instead of using top-level await: `export default async function()` rather than `export default await function()`.
Applied to files:
backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
frontend/Makefile
.github/workflows/run-ci-cd.yaml
docker-compose/e2e.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.
Applied to files:
.github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Run frontend unit tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run backend tests
- GitHub Check: Run fuzz tests
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (7)
.gitignore (1)
7-12: LGTM!

The `.gitignore` additions correctly exclude fuzzing artifacts while preserving the example environment file for fuzz testing.

backend/docker/entrypoint.fuzz.sh (1)
13-25: LGTM!

The config file creation and graphqler invocation are properly structured with correct quoting of variables. The `MAX_TIME = 300` setting provides a reasonable 5-minute timeout for fuzz runs.

docker-compose/fuzz.yaml (1)
68-81: LGTM!

The graphql service correctly mounts `fuzzing_results` to persist fuzz test outputs, addressing the artifact persistence concern. The `BASE_URL` environment variable and backend dependency are properly configured.

docker-compose/e2e.yaml (1)
69-83: Consider adding dependency on `data-loader` completion.

The `e2e-tests` service depends only on `backend` being healthy, but the tests may require data loaded by `data-loader`. Since `data-loader` runs and exits, consider whether tests could start before data loading completes.

Note: If the Makefile orchestrates this by running `data-loader` first with `--abort-on-container-exit`, then running `e2e-tests` separately, this may be intentional. Verify the test execution flow handles this correctly.

frontend/Makefile (1)
59-66: LGTM!

The two-phase approach correctly orchestrates the E2E test flow:

- First phase loads data via `data-loader` with `--abort-on-container-exit`, ensuring data loading completes before proceeding.
- Second phase runs the actual E2E tests.
The pre-cleanup of the database container and volume ensures a clean state for each test run.
.github/workflows/run-ci-cd.yaml (2)
306-312: Fuzz tests don't block staging/production deployments.

The `run-fuzz-tests` job runs independently and isn't included in the `needs` for `build-staging-images` (line 405-408) or `build-production-images` (line 732-736). Consider whether fuzz test failures should block deployments or if this is intentional (e.g., fuzz tests are advisory only).
390-395: No action required.

`actions/upload-artifact@v6` is the current latest stable version of this GitHub Action.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/run-ci-cd.yaml (1)
328-332: Inconsistent action version pins.

The `actions/checkout` and `docker/setup-buildx-action` versions in this job differ from those used elsewhere in the workflow:

| Action | This job | Other jobs |
| --- | --- | --- |
| actions/checkout | 1af3b93... | 8e8c483... |
| docker/setup-buildx-action | e468171... | 8d2750c... |

Using consistent versions across the workflow simplifies maintenance and ensures predictable behavior.
🔎 Proposed fix
 steps:
   - name: Check out repository
-    uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3
+    uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
   - name: Set up Docker buildx
-    uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
+    uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.
Applied to files:
.github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Run backend tests
- GitHub Check: Run frontend unit tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run fuzz tests
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (3)
.github/workflows/run-ci-cd.yaml (3)
240-273: LGTM - E2E backend setup refactored cleanly.

The refactored E2E setup using the composite action and the new flow (setup → start → wait → load data) is well-structured. Migrations run before Gunicorn starts, and the 5-minute timeout for backend readiness is reasonable.
383-389: Container user permissions look correct.

The fuzz test container runs with `--user $(id -u):$(id -g)` and mounts the results directory, ensuring artifacts are created with the correct ownership for upload. This is a good practice.
306-312: Verify if fuzz tests should gate deployments.

The `run-fuzz-tests` job runs independently and doesn't block `build-staging-images` or `build-production-images`. This means fuzz test failures won't prevent deployments.

If fuzz tests uncover security issues, you may want them to gate releases. Consider whether `build-staging-images` should include `run-fuzz-tests` in its `needs` list.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/run-ci-cd.yaml (1)
406-410: Add fuzz tests as a deployment gate.

The `run-fuzz-tests` job is not included in the `needs` dependencies of `build-staging-images` or `build-production-images`. This means deployments will proceed even if fuzz tests discover crashes, unhandled exceptions, or security vulnerabilities, defeating the purpose of CI/CD integration.

Per the acceptance criteria in issue #1075, fuzz tests should block deployments when they capture unexpected behavior.
🔎 Proposed fix
Add `run-fuzz-tests` to the `needs` array of both build jobs:

```diff
   build-staging-images:
     name: Build Staging Images
     env:
       RELEASE_VERSION: ${{ needs.set-release-version.outputs.release_version }}
     environment: staging
     if: |
       github.repository == 'OWASP/Nest' &&
       github.ref == 'refs/heads/main'
     needs:
       - run-backend-tests
       - run-frontend-e2e-tests
       - run-frontend-unit-tests
+      - run-fuzz-tests
       - set-release-version
```

```diff
   build-production-images:
     name: Build Production Images
     env:
       RELEASE_VERSION: ${{ needs.set-release-version.outputs.release_version }}
     environment: production
     if: |
       github.event_name == 'release' &&
       github.event.action == 'published'
     needs:
       - run-backend-tests
       - run-frontend-e2e-tests
       - run-frontend-unit-tests
+      - run-fuzz-tests
       - set-release-version
```

Also applies to: 734-737
♻️ Duplicate comments (1)
.github/workflows/run-ci-cd.yaml (1)
391-396: Pin artifact action SHA and ensure upload on failure.

The artifact upload step should run even when fuzz tests fail (that's when results are most valuable) and should use a pinned SHA consistent with the workflow's security posture.
🔎 Proposed fix
```diff
       - name: Upload fuzzing results
+        if: always()
-        uses: actions/upload-artifact@v6
+        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
         with:
           name: fuzzing-results
           path: fuzzing_results/
           retention-days: 30
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.
Applied to files:
.github/workflows/run-ci-cd.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Run frontend unit tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run fuzz tests
- GitHub Check: Run backend tests
@coderabbitai review

✅ Actions performed

Review triggered.
Actionable comments posted: 0
♻️ Duplicate comments (1)
.github/workflows/run-ci-cd.yaml (1)
391-396: Add `if: always()` to ensure artifacts are uploaded on failure.

Fuzz test results are most valuable when tests fail or find issues. Without `if: always()`, the upload step is skipped if the fuzz tests step fails, losing potentially critical findings.

🔎 Proposed fix

```diff
       - name: Upload fuzzing results
+        if: always()
         uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f
         with:
           name: fuzzing-results
           path: fuzzing_results/
           retention-days: 30
```
🧹 Nitpick comments (2)
.github/workflows/run-ci-cd.yaml (2)
246-257: Redundant port mapping with `--network host`.

When using `--network host`, the container shares the host's network namespace directly, making the `-p 9000:9000` flag unnecessary. The port is already accessible on the host.

🔎 Proposed fix

```diff
       - name: Start Backend in the background
         run: |
           docker run -d --rm --name e2e-nest-backend \
             --env-file backend/.env.e2e.example \
             --network host \
             -e DJANGO_DB_HOST=localhost \
-            -p 9000:9000 \
             owasp/nest:test-backend-latest \
             sh -c '
               python manage.py migrate &&
               gunicorn wsgi:application --bind 0.0.0.0:9000
             '
```
340-351: Redundant port mapping with `--network host`.

Same as the E2E job: the `-p 9500:9500` flag is unnecessary when using `--network host`.

🔎 Proposed fix

```diff
       - name: Run backend with fuzz environment variables
         run: |
           docker run -d --rm --name fuzz-nest-backend \
             --env-file backend/.env.fuzz.example \
             --network host \
             -e DJANGO_DB_HOST=localhost \
-            -p 9500:9500 \
             owasp/nest:test-backend-latest \
             sh -c '
               python manage.py migrate &&
               gunicorn wsgi:application --bind 0.0.0.0:9500
             '
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/run-ci-cd.yaml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.
Applied to files:
.github/workflows/run-ci-cd.yaml
🔇 Additional comments (2)
.github/workflows/run-ci-cd.yaml (2)
306-326: LGTM on job structure.

The fuzz testing job is well-structured with an appropriate timeout (15 minutes), proper Postgres service configuration with health checks, and correct dependencies on scan jobs.

369-389: LGTM on fuzz image build and test execution.

The build step uses appropriate cache settings, and the volume mount for fuzzing results ensures results are persisted to the host for artifact upload.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker-compose/e2e.yaml (1)
71-85: Missing dependency on `data-loader` may cause tests to run before data is loaded.

The `e2e-tests` service depends only on `backend` being healthy, but `data-loader` is responsible for populating the database with test data. Without a dependency on `data-loader`, there's a race condition where tests could start executing before the dump is fully restored.

Consider adding `data-loader` with the `service_completed_successfully` condition:

🔎 Proposed fix

```diff
   e2e-tests:
     container_name: e2e-nest-tests
     build:
       context: ../frontend
       dockerfile: docker/Dockerfile.e2e.test
     command: >
       sh -c '
         pnpm run test:e2e
       '
     depends_on:
       backend:
         condition: service_healthy
+      data-loader:
+        condition: service_completed_successfully
     env_file: ../frontend/.env.e2e.example
     networks:
       - e2e-nest-network
```
♻️ Duplicate comments (2)
backend/Makefile (2)
166-172: Two-phase approach correctly sequences data loading before fuzzing.

The two-phase `docker compose up` approach ensures data is loaded before GraphQL fuzzing begins. The first command loads data, and the second runs the fuzzer. This works around the dependency issue at the docker-compose level.

Note: `COMPOSE_BAKE=true` was flagged in a previous review as an unused variable that has no effect on Docker Compose.

136-138: Pattern is consistent with `run-backend-e2e`.

The `run-backend-fuzz` target follows the same pattern as `run-backend-e2e`. Note that `COMPOSE_BAKE=true` was previously flagged as unused.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
backend/Makefiledocker-compose/e2e.yamldocker-compose/fuzz.yaml
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: Ensure Redis image versions stay in sync across all environments by updating every relevant YAML file together (docker-compose files and CI/CD workflow configurations). When upgrading Redis, bump the image version in all docker-compose files and in all CI workflow YAMLs in one coordinated change to maintain parity across production, staging, local, E2E, and CI tests.
Applied to files:
docker-compose/e2e.yamldocker-compose/fuzz.yaml
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
docker-compose/e2e.yamlbackend/Makefile
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: For the OWASP/Nest repository, Redis image versions should remain consistent across all environments (production, staging, local, E2E, and CI/CD E2E tests). When upgrading Redis, update all docker-compose files and CI/CD workflow configurations together to maintain environment parity.
Applied to files:
backend/Makefile
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Run backend tests
- GitHub Check: Run frontend unit tests
- GitHub Check: Run frontend e2e tests
- GitHub Check: Run fuzz tests
🔇 Additional comments (5)
docker-compose/e2e.yaml (2)
30-51: Data loader service looks good; consider adding error verbosity.

The `data-loader` service correctly sequences after both `db` and `backend` are healthy before restoring the dump. Consider adding `--verbose` to `pg_restore` for better debugging in CI, and possibly `|| exit 1` after the echo to ensure any silent failures are caught (though `sh -c` with `&&` should propagate errors).

4-8: LGTM!

Running migrations before starting Gunicorn ensures the database schema is consistent with the application code at startup.
docker-compose/fuzz.yaml (2)
80-81: Good: Volume mount for fuzzing results is now configured.

The volume mount `../backend/fuzzing_results:/home/owasp/fuzzing_results` addresses the previously flagged concern about persisting GraphQL fuzzing artifacts. Results will now be available on the host for analysis and CI artifact collection.

1-69: LGTM!

Good isolation from the e2e environment with distinct ports (9500 for backend, 5434 for DB), separate network (`fuzz-nest-network`), and dedicated volume (`fuzz-db-data`). The service startup sequencing with health checks is correctly configured.

`backend/Makefile` (1)
32-60: LGTM!

The new `exec-*-command` targets follow the established pattern, with `-i` variants for non-interactive use and `-it` variants for interactive terminal sessions. Based on learnings, the `-it` flags are appropriate for local development and debugging.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/run-ci-cd.yaml (1)
385-389: Consider using more restrictive permissions for the results directory.

The `chmod -R 777` grants write access to all users. While this is functionally safe on ephemeral GitHub Actions runners, `755` or `775` would be sufficient, since the container should run as a specific user (likely `owasp`, based on the volume mount path).

🔎 Suggested refinement

```diff
       - name: Run backend fuzz tests
         run: |
           mkdir -p ${{ github.workspace }}/fuzzing_results &&
-          chmod -R 777 ${{ github.workspace }}/fuzzing_results &&
+          chmod -R 755 ${{ github.workspace }}/fuzzing_results &&
           docker run -e BASE_URL=http://localhost:9500 --network host \
             -v ${{ github.workspace }}/fuzzing_results:/home/owasp/fuzzing_results \
             owasp/nest:test-fuzz-backend-latest
```

This tightens permissions while still allowing the container to write results. If the container user needs group write access, use `775` instead.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/run-ci-cd.yamlbackend/docker/entrypoint.fuzz.sh
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.
📚 Learning: 2025-12-26T06:57:19.911Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: docker-compose/fuzz.yaml:70-83
Timestamp: 2025-12-26T06:57:19.911Z
Learning: In the fuzz testing environment (docker-compose/fuzz.yaml), the data-loader and graphql services are orchestrated in separate phases via the Makefile's test-fuzz target, not run simultaneously. The data is loaded to completion first, then the graphql fuzzer is started separately against the populated database. The docker-compose dependencies reflect runtime health checks rather than orchestration order.
Applied to files:
backend/docker/entrypoint.fuzz.sh.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-08-31T13:48:09.830Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:0-0
Timestamp: 2025-08-31T13:48:09.830Z
Learning: In the OWASP/Nest project, Django's GraphQL endpoint accepts CSRF tokens via 'x-csrftoken' header (lowercase) without requiring a Referer header, working fine in their configuration for GraphQL codegen introspection.
Applied to files:
backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-08-30T12:52:32.214Z
Learnt from: rudransh-shrivastava
Repo: OWASP/Nest PR: 2155
File: frontend/graphql-codegen.ts:52-52
Timestamp: 2025-08-30T12:52:32.214Z
Learning: When using GraphQL Codegen with async configuration that needs to fetch data (like CSRF tokens), export the async function directly instead of using top-level await: `export default async function()` rather than `export default await function()`.
Applied to files:
backend/docker/entrypoint.fuzz.sh
📚 Learning: 2025-10-26T12:50:50.512Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 2429
File: backend/Makefile:30-32
Timestamp: 2025-10-26T12:50:50.512Z
Learning: The `exec-backend-e2e-command` and `exec-db-e2e-command` Makefile targets in the backend/Makefile are intended for local development and debugging only, not for CI/CD execution, so the `-it` flags are appropriate.
Applied to files:
.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-26T06:08:58.549Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 3041
File: .github/workflows/run-ci-cd.yaml:233-243
Timestamp: 2025-12-26T06:08:58.549Z
Learning: For the OWASP/Nest repository, Redis image versions should remain consistent across all environments (production, staging, local, E2E, and CI/CD E2E tests). When upgrading Redis, update all docker-compose files and CI/CD workflow configurations together to maintain environment parity.
Applied to files:
.github/workflows/run-ci-cd.yaml
📚 Learning: 2025-12-21T19:03:59.068Z
Learnt from: ahmedxgouda
Repo: OWASP/Nest PR: 1139
File: .github/workflows/setup-backend-environment/action.yaml:16-27
Timestamp: 2025-12-21T19:03:59.068Z
Learning: Composite actions (runs: using: composite) execute as steps within the calling job's context and can access the job context, including job.services.* properties (e.g., job.services.<service_id>.id, job.services.<service_id>.ports). Service containers must be defined at the job level, but a composite action's steps can reference them via the job context.
Applied to files:
.github/workflows/run-ci-cd.yaml
🧬 Code graph analysis (1)
backend/docker/entrypoint.fuzz.sh (1)
backend/apps/ai/agent/agent.py (1)
run(27-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Run CI Dependencies Scan
- GitHub Check: Run Code Scan
🔇 Additional comments (3)
backend/docker/entrypoint.fuzz.sh (1)
1-30: LGTM! Robust fuzz entrypoint implementation.

The script properly validates the environment, fetches and validates the CSRF token with error handling, and configures the fuzzer with appropriate headers. All previous concerns have been addressed:

- `BASE_URL` validation prevents misconfiguration
- CSRF fetch uses `curl -f` to fail on HTTP errors
- Token validation catches empty or null responses
- Variables are properly quoted in the final command

The implementation is clean and follows shell scripting best practices.
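As a hedged sketch of the token-validation behavior described above (the function name, the `null` check, and the `/csrf` path in the comment are illustrative assumptions, not the actual `entrypoint.fuzz.sh` code):

```shell
# Reject empty tokens and the literal string "null" (what a JSON query tool
# emits for a missing field), mirroring the validation described above.
validate_token() {
  [ -n "$1" ] && [ "$1" != "null" ]
}

# In the real entrypoint the token would come from a curl call such as
#   CSRF_TOKEN="$(curl -sf "$BASE_URL/csrf")"
# where -f makes curl exit non-zero on HTTP errors, instead of silently
# capturing an HTML error page as the "token".
validate_token "abc123" && echo "token accepted"
validate_token ""       || echo "empty token rejected"
validate_token "null"   || echo "null token rejected"
```

Validating the token before handing it to the fuzzer turns a misconfigured backend into an immediate, readable failure rather than a fuzz run full of 403s.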
.github/workflows/run-ci-cd.yaml (2)
240-273: LGTM! Well-structured E2E backend setup.

The new backend environment setup, startup sequence, and data loading flow is clean and robust:

- Migrations run before the backend starts, preventing schema mismatch issues
- Readiness polling with a reasonable 5-minute timeout ensures the backend is fully operational
- Data loading via `pg_restore` is appropriate for the PostgreSQL-based E2E environment

This pattern is well-suited for reuse in the fuzz job.
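The readiness-polling step praised above follows a standard pattern. A minimal sketch, with a stub check standing in for the real `curl` probe against the backend (the stub, port, and timings are illustrative assumptions):

```shell
# Poll a check command once per second until it succeeds or the
# timeout (in seconds) elapses.
wait_for_ready() {
  check_cmd="$1"
  timeout="$2"
  waited=0
  until "$check_cmd"; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
  echo "ready after ${waited}s"
}

# Stub standing in for `curl -sf http://localhost:9000/` in CI:
# it fails on the first two calls, then succeeds.
ATTEMPTS=0
stub_check() {
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -gt 2 ]
}

wait_for_ready stub_check 10   # prints "ready after 2s"
```

The non-zero return on timeout is what lets the workflow step fail fast instead of letting `pg_restore` or the fuzzer run against a backend that never came up.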
306-397: LGTM! Comprehensive fuzz testing integration.

The fuzz job is well-implemented and mirrors the E2E job structure appropriately:

- Includes a 15-minute timeout to prevent indefinite execution
- Runs migrations before starting the backend
- Polls for backend readiness with a proper timeout
- Loads test data via `pg_restore` before fuzzing
- Uses correct image tags and cache references (matching the update-nest-test-images workflow)
- Uploads artifacts with `if: always()` to capture results even on failure
- Volume mount enables result persistence from the ephemeral container

All previous review concerns have been addressed. The integration follows CI/CD best practices and provides good observability via artifact uploads.
Force-pushed from `de86aab` to `0159b0d` (Compare)
* Add hypothesis to poetry and fuzz test the index endpoint. * Refactor import statements in algolia_test.py for consistency * Add the client ip address to the request META. * Add fuzz testing for Slack event handlers and refactor algolia fuzz testing. * Refactor fuzz tests for Slack event handlers to improve readability and structure * Fix the poetry lock file. * Remove fuzz testing from algolia_search unit tests * Create a docker file for fuzz-testing, add the run commands to the MakeFile and install model_bakery for creating randomized data for all models. * Refactor to improve quality * Update fuzz testing setup: modify Makefile and Dockerfile, add entrypoint script for environment configuration * Update poetry.lock to reflect dependency changes and version updates * Create a fuzz configuration, update docker file, makefile, and the tests. * Refactor fuzz configuration by reorganizing imports and cleaning up whitespace * Update Dockerfile and entrypoint script to use Alpine base image and shell * Run the server on port 8000 after the tests. * Create a docker compose file for fuzz testing. * Add 'graphqler' to custom dictionary * Load data from nest.json and add graphqler to cspell dict. 
* Remove model-bakery dependency from pyproject.toml and update poetry.lock * Update graphqler command in docker compose and the healthcheck * Update graphql command to use backend service URL in docker-compose * Refactor docker-compose to build graphqler service from Dockerfile and add entrypoint script for fuzzing tests * Enhance fuzz testing setup: update Dockerfile and entrypoint scripts, improve CSRF handling, and refine healthcheck command in docker-compose * Update fuzz-test-backend command to abort on container exit * Add fuzz testing workflow and update image build steps * Add .env file creation step for fuzz tests in CI/CD workflow * Add Docker Hub login step for fuzz tests in CI/CD workflow * Refactor for the checks * Refactor fuzz testing workflow: replace Docker Hub login with buildx setup and update docker-compose handling * Fix fuzz tests workflow: rename docker-compose file * Refactor fuzz-tests job. * Add environment variables for fuzz tests configuration * Update fuzz tests environment variables * Fix poetry lock file * Sort the custom-dict. 
* Update content hash in poetry.lock * Add docker cache mounts to the backend image * Add Redis configuration * refactor yaml * Add docker cache mounts to graphql file * Remove unnecessary chmod command for cache directories in Dockerfile * Fix poetry lock file * Add cache mounts to backend tests * Update cache mounts in graphql image * Update mkdir in graphql image * Remove duplicates * Update tests * Rename docker compose * Update poetry lock * Apply sonar * Migrate to OWASP repo * Update docker * Use graphqler maintainer docker image * Add disable permissions, update docker compose, and update entrypoint * Establish an e2e backend instance locally and in CI/CD (#2429) * Add dockerfile and compose and update makefile * Dump db-data * Reorder makefile * Dump db-data * Remove e2e dockerfile * Add CI/CD * Apply rabbit's suggestions * Add postgres instead of pgvector * Remove needs * Update envs * Fix migrations step * Add envs to docker * Remove migrations step * Remove --without test from dockerfile * Copy tests in dockerfile and add needs to gh workflow * Update dockerfile * Apply suggestion * Use the e2e instance in the frontend e2e tests * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD and dockerfile * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Restore needs for the job * Update Makefiles * Update docs * Apply suggestions * Update load-data * Skip sonar error * update port numbers * Update docs * Update code * Add required shell property back * Bump config.webServer timeout * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Fix running e2e backend (#2710) * Add DB environment variables to e2e.yaml and add csrf_decorate function * Skip sonar suggestion * Fix rest api internal error * Add timeout * Update docs * Update code * Revert csrf update * Add 
command to dump local data * Update dump and load data * Update rest api config and docs * Apply check-spelling * Use .env.e2e.example for frontend e2e tests in gh actions * Apply rabbit's suggestions * Migrate dump_data to django command and dump owasp, github, and slack tables only * Apply rabbit's suggestions * Update code * Refactor dump_data * Use local cache for e2e * Remove old load_data command * Add tests * Skip sonar * Apply rabbit suggestions * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Update code for e2e * Add runs-on * Skip sonar and fix ci/cd * Apply rabbit suggestion and override entrypoint in ci/cd * Use env with csrf * Add timeout * Remove hypothesis and old test files * Apply rabbit's suggestions * Update ci/cd and makefile * Use digest pinning with graphqler image * Update dockerfile and fix the typeerror issue * Apply sonar suggestion * Apply sonar and rabbit suggestions * Remove cache from ci/cd * Use curl instead of wget * Separate e2e from fuzz * Update fuzz ci/cd * Update CI/CD * Run precommit * Update code * Update code * Update docs, ci/cd, and apply suggestions * Use digest pinning and parameters in workflow * Apply sonar suggestions * Apply rabbit suggestions * Run migrations for fuzz testing in ci/cd * Apply rabbit suggestions * Fix exceptions * Establish an e2e backend instance locally and in CI/CD (#2429) * Add dockerfile and compose and update makefile * Dump db-data * Reorder makefile * Dump db-data * Remove e2e dockerfile * Add CI/CD * Apply rabbit's suggestions * Add postgres instead of pgvector * Remove needs * Update envs * Fix migrations step * Add envs to docker * Remove migrations step * Remove --without test from dockerfile * Copy tests in dockerfile and add needs to gh workflow * Update dockerfile * Apply suggestion * Use the e2e instance in the frontend e2e tests * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD and dockerfile * Update CI/CD * Update 
CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Restore needs for the job * Update Makefiles * Update docs * Apply suggestions * Update load-data * Skip sonar error * update port numbers * Update docs * Update code * Add required shell property back * Bump config.webServer timeout * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Fix running e2e backend (#2710) * Add DB environment variables to e2e.yaml and add csrf_decorate function * Skip sonar suggestion * Fix rest api internal error * Add timeout * Update docs * Update code * Revert csrf update * Add command to dump local data * Update dump and load data * Update rest api config and docs * Apply check-spelling * Use .env.e2e.example for frontend e2e tests in gh actions * Apply rabbit's suggestions * Migrate dump_data to django command and dump owasp, github, and slack tables only * Apply rabbit's suggestions * Update code * Refactor dump_data * Use local cache for e2e * Remove old load_data command * Add tests * Skip sonar * Apply rabbit suggestions * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Add backend/data/nest.sql.gz * Update code * Automate data loading * Update dump_data to avoid exceptions * Update dump_data and automated data_loading * Update CI/CD * Update tests * Add timeout for fuzz tests * Update timeout for fuzz * Update CI/CD * Update CI/CD * Update CI/CD * Apply rabbit's suggestions * Update backend/Makefile * Update make targets and docker compose * Add volume for graphql fuzzing results and add upload artifacts in ci/cd * Update ci/cd * Update ci/cd * Update ci/cd * Update ci/cd * Update docker compose and makefile * Apply rabbit's suggestions * Update dump to match the last nest.json.gz --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Update CI/CD * Restore needs for the job * Update Makefiles * Update docs * Apply suggestions * Update load-data * Skip sonar error * update port numbers * Update docs * Update code * Add required shell property back * Bump config.webServer timeout * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Fix running e2e backend (#2710) * Add DB environment variables to e2e.yaml and add csrf_decorate function * Skip sonar suggestion * Fix rest api internal error * Add timeout * Update docs * Update code * Revert csrf update * Add command to dump local data * Update dump and load data * Update rest api config and docs * Apply check-spelling * Use .env.e2e.example for frontend e2e tests in gh actions * Apply rabbit's suggestions * Migrate dump_data to django command and dump owasp, github, and slack tables only * Apply rabbit's suggestions * Update code * Refactor dump_data * Use local cache for e2e * Remove old load_data command * Add tests * Skip sonar * Apply rabbit suggestions * Update code --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org> * Add backend/data/nest.sql.gz * Update code * Automate data loading * Update dump_data to avoid exceptions * Update dump_data and automated data_loading * Update CI/CD * Update tests * Add timeout for fuzz tests * Update timeout for fuzz * Update CI/CD * Update CI/CD * Update CI/CD * Apply rabbit's suggestions * Update backend/Makefile * Update make targets and docker compose * Add volume for graphql fuzzing results and add upload artifacts in ci/cd * Update ci/cd * Update ci/cd * Update ci/cd * Update ci/cd * Update docker compose and makefile * Apply rabbit's suggestions * Update dump to match the last nest.json.gz --------- Co-authored-by: Arkadii Yakovets <arkadii.yakovets@owasp.org>
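The commits above converge on a compose-based setup: a backend service exposing the GraphQL endpoint, plus a GraphQLer container that fuzzes it and writes results to a shared volume so CI can upload them as artifacts. A minimal sketch of what such a compose file could look like — the service names, image tag, ports, healthcheck command, and fuzzer flags here are illustrative assumptions, not the exact files from this PR (the PR pins the graphqler image by digest):

```yaml
services:
  backend:
    build: ./backend                      # backend image built from its Dockerfile
    environment:
      - DJANGO_CONFIGURATION=FuzzTesting  # hypothetical settings class name
    ports:
      - "8000:8000"
    healthcheck:                          # wait until the API answers before fuzzing
      test: ["CMD", "curl", "-f", "http://localhost:8000/graphql/"]
      interval: 5s
      retries: 10

  graphqler:
    image: graphqler:latest               # placeholder; digest pinning is used in the PR
    depends_on:
      backend:
        condition: service_healthy        # only start once the healthcheck passes
    command: --url http://backend:8000/graphql/   # placeholder CLI flags
    volumes:
      - fuzzing-results:/results          # persisted so CI can collect artifacts

volumes:
  fuzzing-results:
```

Running the stack with `docker compose up --abort-on-container-exit` tears everything down once the fuzzer finishes, which matches the "abort on container exit" behavior of the `fuzz-test-backend` make target described above.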



Resolves #1075