Feature/project level compliance check #2415
Conversation
… flag; stabilize date formatting and mock @heroui/button for tests
Implement periodic background job to verify OWASP project levels against the official source of truth (project_levels.json). Changes:
- Add is_level_compliant boolean field to Project model
- Create owasp_check_project_level_compliance management command
- Update health score calculation to include 8-point compliance weight
- Add comprehensive test coverage
- Schedule daily cron job (05:27) after project sync
- Add Makefile target for manual execution
@arkid15r Please check if this works, for GSSoC '25.
Summary by CodeRabbit
Walkthrough
Adds OWASP project-level compliance: new management command, model field and migration, scheduled cron entry, and score adjustment. Also updates backend/frontend build tooling (conditional Docker cache, pre-codegen), enhances GraphQL codegen resilience, tightens Jest/test setup, converts date formatting to UTC, and refactors e2e tests to use a helper.
Changes
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✨ Finishing touches
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/utils/dateFormatter.ts (1)
31-41: Fix inconsistent use of local time methods in UTC-based date comparisons.
The date comparison logic uses local time methods (getFullYear(), getMonth(), getDate()) while the rest of the function uses UTC methods. This inconsistency can cause incorrect date range formatting when dates cross timezone boundaries. For example, two dates that represent the same day in UTC might be treated as different days if compared using local time methods in a different timezone.
Apply this diff to use UTC methods consistently:
  if (
    start.getTime() === end.getTime() ||
-   (start.getFullYear() === end.getFullYear() &&
-     start.getMonth() === end.getMonth() &&
-     start.getDate() === end.getDate())
+   (start.getUTCFullYear() === end.getUTCFullYear() &&
+     start.getUTCMonth() === end.getUTCMonth() &&
+     start.getUTCDate() === end.getUTCDate())
  ) {
    return formatDate(startDate)
  }

- const sameMonth = start.getMonth() === end.getMonth() && start.getFullYear() === end.getFullYear()
- const sameYear = start.getFullYear() === end.getFullYear()
+ const sameMonth = start.getUTCMonth() === end.getUTCMonth() && start.getUTCFullYear() === end.getUTCFullYear()
+ const sameYear = start.getUTCFullYear() === end.getUTCFullYear()
🧹 Nitpick comments (6)
frontend/graphql-codegen.ts (2)
28-167: Ensure fallback schema is kept in sync with the backend schema.
The comprehensive fallback schema enables offline codegen, which is valuable for CI/CD and development scenarios. However, this creates a maintenance burden as the schema must be manually updated when the backend schema changes.
Consider implementing a strategy to keep the fallback schema in sync:
- Automated validation: Add a CI check that compares the fallback schema types with the real backend schema and fails if they drift.
- Schema extraction: Generate the fallback schema from the real schema using a script, extracting only the types used by tests.
- Documentation: Add a comment above the fallback schema noting which queries/mutations it must support and instructions for updating it.
Example validation script:
#!/bin/bash
# Verify fallback schema includes required types for tests
# Start backend temporarily
# Fetch real schema
# Compare with fallback schema
# Exit 1 if critical types are missing

Would you like me to help generate a schema validation or extraction script?
219-230: Confirm /status/ endpoint and add tests
- The /status/ route is defined in backend/settings/urls.py → get_status.
- Add automated tests to validate backend-down, backend-up without CSRF, and backend-up with CSRF scenarios.
frontend/package.json (1)
18-18: Consider impact of removing DEP0040 warning suppression.
Removing --no-warnings=DEP0040 will surface deprecation warnings for the punycode module if any dependencies use it. Verify this is intentional and won't clutter test output.
backend/tests/apps/owasp/management/commands/owasp_update_project_health_scores_test.py (2)
12-13: Consider extracting the compliance weight to a shared constant.
The LEVEL_COMPLIANCE_WEIGHT value (8.0) is duplicated in both the test file and the implementation (owasp_update_project_health_scores.py, line 27). This duplication creates a maintenance risk if the weight needs to be changed. Consider extracting this to a shared configuration constant:
# In apps/owasp/constants.py or similar
LEVEL_COMPLIANCE_WEIGHT = 8.0

Then import it in both files:
from apps.owasp.constants import LEVEL_COMPLIANCE_WEIGHT
88-179: Reduce test code duplication.
The fields_weights dictionary setup is duplicated across three test methods (lines 90-106, 133-149, and implicitly in the original test at lines 40-56). This makes the tests harder to maintain. Consider extracting the common setup to a fixture or class-level constant:
 class TestUpdateProjectHealthMetricsScoreCommand:
+    FIELDS_WEIGHTS = {
+        "age_days": (5, 6),
+        "contributors_count": (5, 6),
+        # ... rest of fields
+    }
+
     @pytest.fixture(autouse=True)
     def _setup(self):
         """Set up test environment."""

Then reference self.FIELDS_WEIGHTS in each test method.
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (1)
42-42: Move import to module level.
The json import is placed inside the try block. While this works, it's unconventional and may confuse readers. Move the import to the top of the file with other imports:

 import logging
+import json
 from urllib.request import urlopen
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- frontend/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- frontend/src/types/__generated__/graphql.ts is excluded by !**/__generated__/**
📒 Files selected for processing (19)
- backend/Makefile (1 hunks)
- backend/apps/owasp/Makefile (1 hunks)
- backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (1 hunks)
- backend/apps/owasp/management/commands/owasp_update_project_health_scores.py (2 hunks)
- backend/apps/owasp/migrations/0053_project_is_level_compliant.py (1 hunks)
- backend/apps/owasp/models/project.py (1 hunks)
- backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py (1 hunks)
- backend/tests/apps/owasp/management/commands/owasp_update_project_health_scores_test.py (2 hunks)
- cron/production (1 hunks)
- frontend/.npmrc (1 hunks)
- frontend/Makefile (1 hunks)
- frontend/__tests__/e2e/pages/ProjectDetails.spec.ts (2 hunks)
- frontend/__tests__/e2e/pages/ProjectsHealthDashboardMetrics.spec.ts (1 hunks)
- frontend/__tests__/e2e/pages/UserDetails.spec.ts (1 hunks)
- frontend/graphql-codegen.ts (3 hunks)
- frontend/jest.setup.ts (2 hunks)
- frontend/package.json (3 hunks)
- frontend/src/components/ProgramCard.tsx (1 hunks)
- frontend/src/utils/dateFormatter.ts (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-30T12:52:32.214Z
Learnt from: rudransh-shrivastava
PR: OWASP/Nest#2155
File: frontend/graphql-codegen.ts:52-52
Timestamp: 2025-08-30T12:52:32.214Z
Learning: When using GraphQL Codegen with async configuration that needs to fetch data (like CSRF tokens), export the async function directly instead of using top-level await: `export default async function()` rather than `export default await function()`.
Applied to files:
frontend/graphql-codegen.ts
🧬 Code graph analysis (6)
frontend/__tests__/e2e/pages/ProjectsHealthDashboardMetrics.spec.ts (1)
frontend/__tests__/unit/data/mockProjectsHealthMetricsData.ts (1)
mockHealthMetricsData(1-25)
backend/tests/apps/owasp/management/commands/owasp_update_project_health_scores_test.py (3)
backend/apps/owasp/models/project_health_metrics.py (1)
ProjectHealthMetrics (16-235)
backend/apps/owasp/models/project_health_requirements.py (1)
ProjectHealthRequirements (9-63)
backend/apps/owasp/models/project.py (2)
is_funding_requirements_compliant (148-150)
is_leader_requirements_compliant (153-157)
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (2)
backend/apps/owasp/models/project.py (2)
Project (31-378)
bulk_save (345-353)
backend/apps/owasp/management/commands/owasp_update_project_health_scores.py (2)
Command (9-76)
handle (12-76)
backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py (2)
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (4)
Command (16-203)
fetch_official_project_levels (33-54)
normalize_level (56-76)
add_arguments (19-31)
backend/apps/owasp/models/project.py (1)
Project(31-378)
frontend/__tests__/e2e/pages/UserDetails.spec.ts (2)
frontend/__tests__/e2e/helpers/mockDashboardCookies.ts (1)
mockDashboardCookies (1-74)
frontend/__tests__/unit/data/mockUserDetails.ts (1)
mockUserDetailsData(1-66)
frontend/__tests__/e2e/pages/ProjectDetails.spec.ts (2)
frontend/__tests__/e2e/helpers/mockDashboardCookies.ts (1)
mockDashboardCookies (1-74)
frontend/__tests__/unit/data/mockProjectDetailsData.ts (1)
mockProjectDetailsData(1-128)
🔇 Additional comments (29)
frontend/.npmrc (1)
1-1: Verify the need to disable engine-strict checking.
Setting engine-strict=false allows the package manager to proceed even when the Node.js version doesn't match the engines field in package.json. While this provides flexibility for developers using different Node versions, it can mask compatibility issues that would otherwise surface early. Confirm that:
- The project intentionally supports a range of Node versions beyond what's specified in engines.
- CI/CD environments are configured to test against the officially supported Node versions.
- This setting won't hide version-specific bugs or deprecated API usage.
If the intent is to support multiple Node versions, consider documenting the supported range in the README or CONTRIBUTING guide.
frontend/src/components/ProgramCard.tsx (1)
16-22: LGTM! UTC timezone ensures consistent date formatting.
Adding timeZone: 'UTC' prevents timezone-related inconsistencies in date display and aligns with the broader UTC standardization effort across the codebase.
frontend/src/utils/dateFormatter.ts (1)
46-55: LGTM! UTC methods ensure consistent date range formatting.
The use of getUTCDate(), getUTCFullYear(), and timeZone: 'UTC' ensures that date ranges are formatted consistently regardless of the user's local timezone.
frontend/__tests__/e2e/pages/ProjectsHealthDashboardMetrics.spec.ts (1)
23-25: LGTM! Improved test reliability with network idle wait and extended timeout.
These changes follow e2e testing best practices:
- waitForLoadState('networkidle') ensures the page has fully loaded and all network requests have completed before assertions.
- The 10-second timeout accommodates slower CI environments and reduces flakiness.
frontend/jest.setup.ts (3)
frontend/jest.setup.ts (3)
4-6: LGTM! UTC timezone ensures deterministic test behavior.
Setting TZ=UTC in the test environment prevents timezone-related flakiness and aligns with the UTC-based date formatting implemented across the codebase. The feature flag enables consistent testing of the project health feature.
8-21: LGTM! Mock addresses framer-motion compatibility issues in tests.
Mocking @heroui/button with a simple React button avoids VM module issues with framer-motion's dynamic imports in the Jest environment. The mock properly handles both onClick and onPress props, maintaining compatibility with the HeroUI API.
88-98: Verify that error filtering doesn't hide legitimate issues.
The console.error filtering to ignore React 19 unknown prop warnings (e.g., disableAnimation) is a pragmatic workaround for library compatibility issues. However, ensure that this pattern doesn't inadvertently suppress legitimate errors. Consider:
- Monitoring for new unknown prop warnings that might indicate misuse of components.
- Removing this filter once the UI library is fully compatible with React 19.
- Being more specific with the warning pattern if possible (e.g., checking for specific prop names like disableAnimation).
Would you like me to suggest a more targeted filter that only ignores known safe warnings?
frontend/Makefile (2)
61-61: LGTM! Conditional cache-from prevents build errors.
The conditional --cache-from flag only applies when the image exists, preventing build failures when the cache image is not available. This is particularly useful in CI environments and for new developers.
67-69: LGTM! Pre-generating GraphQL types ensures test reliability.
Running graphql-codegen before unit tests ensures that generated types are available, with a fallback retry if the first attempt fails (useful when the backend is unavailable). The conditional --cache-from flag follows the same pattern as the e2e target, maintaining consistency.
frontend/__tests__/e2e/pages/ProjectDetails.spec.ts (3)
3-10: Verify the broadened route pattern doesn't match unintended URLs.
The refactor to use mockDashboardCookies reduces boilerplate, which is good. However, the route pattern changed from '**/graphql/' to '**/graphql*', which is broader and could potentially match URLs like /graphql-something or /graphql.json. Confirm that:
- No other URLs in the application start with /graphql besides the GraphQL endpoint.
- The test still behaves as expected with the broader pattern.
If there's a risk of matching unintended URLs, consider using a more specific pattern like '**/graphql/**' or keeping the original '**/graphql/'.
12-12: LGTM! Network idle wait improves test reliability.
Adding waitForLoadState('networkidle') ensures the page has fully loaded before assertions, reducing flakiness in CI environments.
31-31: LGTM! Extended timeout accommodates slower environments.
The 10-second timeout for the visibility check helps prevent failures in slower CI environments or under heavy load.
frontend/graphql-codegen.ts (3)
frontend/graphql-codegen.ts (3)
9-26: LGTM! Robust backend detection and CSRF token handling.
The implementation properly detects backend availability via the status endpoint and conditionally fetches the CSRF token. The error handling gracefully degrades:
- Backend down → use fallback schema
- Backend up but CSRF fails → proceed without CSRF headers
- Backend up with CSRF → use full auth headers
169-172: LGTM! Conditional documents enable offline mode.
Setting documents to an empty array when the backend is down prevents codegen from scanning source files unnecessarily while still allowing type generation from the fallback schema.
184-187: LGTM! Consistent scalar mappings for DateTime and UUID.
Adding explicit scalar mappings for DateTime and UUID to string ensures type safety across the generated types. The mappings are correctly applied in both the operations config and the base types config.
Also applies to: 209-212
frontend/__tests__/e2e/pages/UserDetails.spec.ts (2)
11-12: LGTM: Navigation improvements.
Adding the leading slash makes the path explicit and absolute, and the networkidle wait ensures the page is fully loaded before assertions run. These are good practices for stable e2e tests.
15-20: LGTM: Extended timeouts for heading assertions.
Adding explicit timeouts to heading assertions improves test stability by allowing more time for elements to render. This is consistent with the networkidle wait added to the setup.
frontend/package.json (1)
84-106: Verify tests under Jest v30 with @swc/jest.
Jest upgraded from v29 to v30, ts-jest removed (no stale references), and @swc/jest set as the transformer in jest.config.ts. Run the full test suite to confirm compatibility and catch any breaking changes.
cron/production (1)
3-3: Scheduling looks consistent.
Runs after the existing 05:17 sync and logs alongside other OWASP jobs; no issues spotted.
backend/Makefile (1)
117-119: Nice defensive cache check.
Conditionally adding --cache-from avoids hard failures when the image is missing while still using the cache when available.
34-66: Bonus application reads clearly.
The inline weight and explicit compliant-only bonus make the penalty intent obvious; thanks for keeping it localized.
backend/apps/owasp/Makefile (1)
5-7: Target wiring matches the rest.
Command echo + exec pattern is consistent with existing OWASP targets.
backend/apps/owasp/models/project.py (1)
93-97: Field addition looks good.
Defaulting to True preserves existing data while documenting the new flag clearly.
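For orientation, a field along these lines would match the described behavior. This is a sketch only; the verbose name and Meta details are assumptions, not the actual diff:

```python
from django.db import models


class Project(models.Model):
    """Sketch of the Project model change; only the new flag is shown."""

    # Defaults to True so existing rows stay compliant until the
    # compliance check command flips the flag.
    is_level_compliant = models.BooleanField(
        verbose_name="Is level compliant",  # assumed label
        default=True,
    )

    class Meta:
        app_label = "owasp"  # assumed app label, needed to make the sketch importable
```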
backend/apps/owasp/migrations/0053_project_is_level_compliant.py (1)
12-20: Migration mirrors the model change.
Everything lines up with the field definition; no concerns.
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (5)
1-14: LGTM - Clear module setup.
The imports and constants are well-organized. The PROJECT_LEVELS_URL is appropriately defined as a module-level constant.
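For readers following along, a minimal sketch of what such a fetch helper can look like; the URL and the JSON shape are assumptions, not the command's actual code:

```python
import json
from urllib.request import urlopen

# Assumed location of the official source of truth; substitute the real constant.
PROJECT_LEVELS_URL = "https://owasp.org/assets/json/project_levels.json"


def fetch_official_project_levels(url: str = PROJECT_LEVELS_URL) -> dict[str, str]:
    """Return a mapping of project key to official level, or an empty dict on failure."""
    try:
        with urlopen(url, timeout=30) as response:
            payload = json.loads(response.read().decode("utf-8"))
    except (OSError, ValueError):
        # Network or parse error: the command aborts gracefully when this is empty.
        return {}
    # The exact JSON layout is an assumption; adapt the extraction accordingly.
    return {entry["name"]: str(entry["level"]) for entry in payload if "name" in entry}
```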
56-76: LGTM - Robust level normalization.
The normalization logic correctly handles plurals, case variations, and unknown levels. The fallback to "other" is appropriate.
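As an illustration of the rules described (lowercasing, singularizing plurals, falling back to "other"), a standalone re-implementation could look like this; the real method may differ in detail:

```python
KNOWN_LEVELS = {"incubator", "lab", "production", "flagship"}


def normalize_level(raw_level: str) -> str:
    """Lowercase the level, strip a plural 's', and fall back to 'other' if unknown."""
    level = (raw_level or "").strip().lower()
    if level.endswith("s") and level[:-1] in KNOWN_LEVELS:
        level = level[:-1]  # "labs" -> "lab", "incubators" -> "incubator"
    return level if level in KNOWN_LEVELS else "other"
```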
78-107: LGTM - Clear initialization and error handling.
The setup correctly handles the dry-run mode, provides user feedback, and aborts gracefully if official levels cannot be fetched.
108-165: LGTM - Correct compliance detection logic.
The compliance checking logic is well-structured:
- Projects not in the official list are marked non-compliant
- Projects with level mismatches are marked non-compliant
- Projects with matching levels are marked compliant
- Changes are tracked for database updates
- Counters correctly track the compliance status of all projects
The conditional increments of non_compliant_count ensure each project is counted exactly once, regardless of whether its status changed or remained the same.
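Put together, the loop described above can be sketched roughly as follows; attribute names such as project.key and project.level are assumptions for illustration:

```python
def check_compliance(projects, official_levels, normalize_level):
    """Classify projects against the official levels and collect rows to update."""
    compliant_count = non_compliant_count = not_in_official_list_count = 0
    projects_to_update = []
    for project in projects:
        official_level = official_levels.get(project.key)  # assumed lookup key
        if official_level is None:
            is_compliant = False
            not_in_official_list_count += 1
        else:
            is_compliant = normalize_level(project.level) == normalize_level(official_level)
        if is_compliant:
            compliant_count += 1
        else:
            non_compliant_count += 1  # counted once per project, changed or not
        if project.is_level_compliant != is_compliant:
            project.is_level_compliant = is_compliant
            projects_to_update.append(project)  # only changed rows are persisted
    return projects_to_update, compliant_count, non_compliant_count, not_in_official_list_count
```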
166-203: LGTM - Efficient bulk updates and comprehensive reporting.
The database update logic correctly:
- Uses bulk operations for efficiency
- Respects the dry-run flag
- Provides clear feedback about changes
- Generates a detailed summary with proper categorization of non-compliant projects (not in list vs. level mismatch)
The calculation mismatch_count = non_compliant_count - not_in_official_list_count correctly isolates projects with level mismatches from those missing entirely from the official list.
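A rough sketch of the persistence and summary step, assuming Project.bulk_save accepts the list of changed instances (the real signature and import path may differ):

```python
from apps.owasp.models.project import Project  # assumed import path


def apply_and_summarize(projects_to_update, compliant_count, non_compliant_count,
                        not_in_official_list_count, *, dry_run: bool) -> dict:
    """Persist changed rows (unless dry-run) and compute the summary counters."""
    if projects_to_update and not dry_run:
        Project.bulk_save(projects_to_update)  # assumed usage of the model helper
    return {
        "total": compliant_count + non_compliant_count,
        "compliant": compliant_count,
        "non_compliant": non_compliant_count,
        "not_in_official_list": not_in_official_list_count,
        # Non-compliant projects that do appear in the official list.
        "level_mismatch": non_compliant_count - not_in_official_list_count,
        "changes_applied": 0 if dry_run else len(projects_to_update),
    }
```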
- Add safeguard for empty official project list in compliance command
- Fix pytest fixture incompatibility in backend tests (use setUp instead)
- Remove duplicate GraphQL route handler in E2E test
Actionable comments posted: 1
♻️ Duplicate comments (1)
frontend/__tests__/e2e/pages/UserDetails.spec.ts (1)
3-3: LGTM: Critical issue resolved with helper usage.
The test now correctly uses the mockDashboardCookies helper, which eliminates the duplicate GraphQL route handler that was flagged in the previous review. This consolidates route mocking and cookie setup in one place, improving maintainability.
Also applies to: 7-7
🧹 Nitpick comments (1)
frontend/__tests__/e2e/pages/UserDetails.spec.ts (1)
12-12: Consider removing or standardizing the explicit timeouts.
The 10-second timeouts on heading assertions are inconsistent—only two assertions have them while others don't. If the networkidle wait on line 9 is functioning correctly, these explicit timeouts may be unnecessary. If they're needed, consider applying them to all assertions for consistency or investigate why these specific elements require extra wait time.
Also applies to: 17-17
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (1 hunks)
- backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py (1 hunks)
- frontend/__tests__/e2e/pages/UserDetails.spec.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
frontend/__tests__/e2e/pages/UserDetails.spec.ts (2)
frontend/__tests__/e2e/helpers/mockDashboardCookies.ts (1)
mockDashboardCookies (1-74)
frontend/__tests__/unit/data/mockUserDetails.ts (1)
mockUserDetailsData(1-66)
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (1)
backend/apps/owasp/models/project.py (2)
Project (31-378)
bulk_save (345-353)
backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py (3)
backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py (4)
Command (16-211)
fetch_official_project_levels (33-62)
normalize_level (64-84)
add_arguments (19-31)
backend/apps/owasp/models/enums/project.py (1)
ProjectLevel (37-44)
backend/apps/owasp/models/project.py (1)
Project(31-378)
🔇 Additional comments (1)
frontend/__tests__/e2e/pages/UserDetails.spec.ts (1)
8-9: LGTM: Navigation and synchronization improved.
The leading slash in goto('/members/test-user') follows absolute path conventions, and the networkidle wait ensures the page is fully loaded before assertions run.
@pytest.mark.parametrize(
    ("input_level", "expected_level"),
    [
        ("Incubator", "incubator"),
        ("LAB", "lab"),
        ("Production", "production"),
        ("flagship", "flagship"),
        ("Labs", "lab"),  # Plural form
        ("Incubators", "incubator"),  # Plural form
        ("Unknown", "other"),  # Unknown level
        ("", "other"),  # Empty string
    ],
)
def test_normalize_level(self, input_level, expected_level):
    """Test level normalization with various inputs."""
    result = self.command.normalize_level(input_level)
    assert result == expected_level
Fix incompatible pytest parametrization on TestCase.
pytest.mark.parametrize doesn’t work on django.test.TestCase methods—the unittest runner will still call the method with only self, raising TypeError: test_normalize_level() missing 2 required positional arguments. Convert this into a loop (e.g., using self.subTest) inside a no-argument test method instead of relying on parametrization.
- @pytest.mark.parametrize(
- ("input_level", "expected_level"),
- [
- ("Incubator", "incubator"),
- ("LAB", "lab"),
- ("Production", "production"),
- ("flagship", "flagship"),
- ("Labs", "lab"), # Plural form
- ("Incubators", "incubator"), # Plural form
- ("Unknown", "other"), # Unknown level
- ("", "other"), # Empty string
- ],
- )
- def test_normalize_level(self, input_level, expected_level):
- """Test level normalization with various inputs."""
- result = self.command.normalize_level(input_level)
- assert result == expected_level
+ def test_normalize_level(self):
+ """Test level normalization with various inputs."""
+ cases = [
+ ("Incubator", "incubator"),
+ ("LAB", "lab"),
+ ("Production", "production"),
+ ("flagship", "flagship"),
+ ("Labs", "lab"), # Plural form
+ ("Incubators", "incubator"), # Plural form
+ ("Unknown", "other"), # Unknown level
+ ("", "other"), # Empty string
+ ]
+ for input_level, expected_level in cases:
+ with self.subTest(level=input_level):
+ assert self.command.normalize_level(input_level) == expected_level
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def test_normalize_level(self):
    """Test level normalization with various inputs."""
    cases = [
        ("Incubator", "incubator"),
        ("LAB", "lab"),
        ("Production", "production"),
        ("flagship", "flagship"),
        ("Labs", "lab"),  # Plural form
        ("Incubators", "incubator"),  # Plural form
        ("Unknown", "other"),  # Unknown level
        ("", "other"),  # Empty string
    ]
    for input_level, expected_level in cases:
        with self.subTest(level=input_level):
            assert self.command.normalize_level(input_level) == expected_level
🤖 Prompt for AI Agents
In
backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py
around lines 82 to 99, the pytest.mark.parametrize decorator is incompatible
with django.test.TestCase methods and causes a TypeError; remove the decorator
and convert the parametrized cases into an explicit loop inside a no-argument
test method (use self.subTest for each (input_level, expected_level) pair), call
self.command.normalize_level(input_level) and assert equality against
expected_level for each subTest so the unittest runner can execute the test
correctly.
Hi @eshentials
please note that your PR has code conflicts.
Moreover, we have a specific workflow that defines how we handle issues/PRs. You need to be assigned for the related issue first (before starting your work on it).



Proposed change
Resolves #2039
This PR implements a periodic background job that fetches the latest project levels from the official OWASP source of truth and identifies non-compliant projects for score penalties.
Overview
Implements automated project level compliance checking against the official OWASP project_levels.json. Non-compliant projects are flagged and receive score penalties without modifying their locally stored levels.
Key Changes
1. Database Schema
- is_level_compliant boolean field added to the Project model (defaults to True)
- Migration: 0053_project_is_level_compliant.py

2. New Management Command: owasp_check_project_level_compliance
Features:
- --dry-run mode for safe testing

3. Score Calculation Enhancement
- level_compliance_weight = 8.0 constant

4. Comprehensive Test Suite
New test file: owasp_check_project_level_compliance_test.py
Updated test file: owasp_update_project_health_scores_test.py

5. Automation & Scheduling
- Log file: /var/log/nest/production/check-project-level-compliance.log
- Manual run: make owasp-check-project-level-compliance

Usage
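The cron entry covers the scheduled run; for ad-hoc runs, here is a sketch of invoking the command programmatically (it assumes DJANGO_SETTINGS_MODULE is configured), alongside the Makefile target mentioned above:

```python
import django

django.setup()  # requires DJANGO_SETTINGS_MODULE to be set

from django.core.management import call_command

# Preview: report compliance without writing anything to the database.
call_command("owasp_check_project_level_compliance", "--dry-run")

# Apply: flip is_level_compliant for projects whose status changed.
call_command("owasp_check_project_level_compliance")

# Equivalent manual run from the shell:
#   make owasp-check-project-level-compliance
```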
Example Output
Starting project level compliance check...
✓ Fetched 150 project levels from official source
Checking 200 active projects...
Project Alpha (www-project-alpha): Level mismatch - Local: production, Official: lab
Project Beta (www-project-beta): Not found in official list
COMPLIANCE SUMMARY:
Total Projects: 200
✓ Compliant: 185
Non-Compliant: 15
Not in official list: 5
Level mismatch: 10
Changes Applied: 15
✓ Project level compliance check completed.
Acceptance Criteria Met
- is_level_compliant field
📁 Files Changed (8 files, 605 insertions)
New Files:
- backend/apps/owasp/management/commands/owasp_check_project_level_compliance.py
- backend/apps/owasp/migrations/0053_project_is_level_compliant.py
- backend/tests/apps/owasp/management/commands/owasp_check_project_level_compliance_test.py

Modified Files:
- backend/apps/owasp/models/project.py
- backend/apps/owasp/management/commands/owasp_update_project_health_scores.py
- backend/tests/apps/owasp/management/commands/owasp_update_project_health_scores_test.py
- backend/apps/owasp/Makefile
- cron/production

Impact on Scoring
Maximum possible score remains 100 points.
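As a rough sketch of how the 8-point weight can enter the score, assuming the bonus is granted only to compliant projects (the actual weighting lives in owasp_update_project_health_scores):

```python
LEVEL_COMPLIANCE_WEIGHT = 8.0  # weight stated in this PR


def apply_level_compliance_bonus(base_score: float, is_level_compliant: bool) -> float:
    """Add the compliance bonus for compliant projects; non-compliant projects forgo it."""
    score = base_score + (LEVEL_COMPLIANCE_WEIGHT if is_level_compliant else 0.0)
    return min(score, 100.0)  # cap is an assumption to keep the stated 100-point maximum
```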
🔍 Technical Details
- urllib.request.urlopen for fetching official data

Checklist
- Ran make check-test locally; all checks and tests passed.