Fix CI test failures due to FluentAssertions errors and missing AI service handling #58
Conversation
Co-authored-by: kbeaugrand <[email protected]>
Pull Request Overview
This PR fixes critical CI pipeline failures by addressing FluentAssertions compilation errors and implementing robust handling for missing AI services in CI environments.
- Fixed FluentAssertions method name compilation errors that were breaking builds
- Improved configuration flexibility with fallbacks to environment variables and optional settings files
- Added intelligent AI service detection that gracefully skips integration tests when services are unavailable
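The configuration-fallback idea could look roughly like this. This is a minimal sketch using Microsoft.Extensions.Configuration, not the PR's actual code, and the `AzureOpenAI:Endpoint` key name is an illustrative assumption:

```csharp
using System;
using Microsoft.Extensions.Configuration;

// Sketch of the fallback chain: the settings file is optional, so CI
// machines that lack it fall through to environment variables instead
// of throwing at startup.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true) // missing file => no exception
    .AddEnvironmentVariables()                       // CI supplies values here
    .Build();

// Null when neither source defines the key; tests can then skip.
string? endpoint = config["AzureOpenAI:Endpoint"];
Console.WriteLine(endpoint ?? "(not configured - tests will skip)");
```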
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| appsettings.json | Updated deployment name and added CI guidance comments |
| README.md | Added comprehensive test configuration documentation |
| LMRankerPerformanceTests.cs | Enhanced configuration handling and AI service availability detection |
| LMRankerIntegrationTests.cs | Fixed FluentAssertions errors and improved service detection |
| LMRankerDebugTests.cs | Applied consistent configuration improvements |
```csharp
var testDocs = new[] { "test document" };
await foreach (var result in _ranker.ScoreAsync("test", CreateAsyncEnumerable(testDocs)))
{
    return result.Item2 > 0; // If we get a meaningful score, the service is working
```
Copilot
AI
Sep 27, 2025
The logic assumes any positive score indicates a working service, but legitimate scores can be 0 or negative. Consider checking for !double.IsNaN(result.Item2) or a more appropriate threshold based on your scoring model.
Suggested change:
```csharp
- return result.Item2 > 0; // If we get a meaningful score, the service is working
+ return !double.IsNaN(result.Item2); // If we get a valid score, the service is working
```
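The difference between the two predicates can be seen in a small standalone demo (the class and helper names are hypothetical; the helper simply mirrors the suggested check):

```csharp
using System;

class ScoreCheckDemo
{
    // Mirrors the suggested predicate: any numeric score, including zero
    // or a negative value, shows the service actually responded.
    static bool LooksLikeWorkingService(double score) => !double.IsNaN(score);

    static void Main()
    {
        Console.WriteLine(LooksLikeWorkingService(0.0));        // True: zero is a legitimate score
        Console.WriteLine(LooksLikeWorkingService(-0.3));       // True: so are negative scores
        Console.WriteLine(LooksLikeWorkingService(double.NaN)); // False: no meaningful response
        // The original `score > 0` check would wrongly report the first
        // two cases as "service unavailable".
    }
}
```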
This PR fixes critical CI pipeline failures in the unit test suite caused by compilation errors and improper handling of missing AI services in CI environments.
Issues Fixed
1. Compilation Errors
The build was failing with five FluentAssertions method-name errors.
2. Configuration Robustness
The test configuration was too rigid for CI environments.
3. AI Service Dependency Issues
Integration tests were failing in CI environments when no AI services were available, instead of gracefully skipping.
Key Improvements
Graceful CI Behavior: Tests now automatically skip when no AI service is available, rather than failing
Environment Flexibility: Configuration supports both local development (with AI services) and CI environments (without)
Comprehensive Error Handling: Added try-catch blocks around service initialization and proper timeout handling
Better Documentation: Updated appsettings.json with clear comments and created a detailed README for test configuration
Test Results
The changes ensure reliable CI pipeline execution while maintaining complete test coverage for environments where AI services are configured.
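The "skip instead of fail" behavior described above can be sketched as follows. This is an illustration only, not the PR's exact code: the real tests probe the ranker itself to detect availability, whereas this sketch checks an assumed environment variable, and the class and test names are hypothetical:

```csharp
using System;
using Xunit;

public class GracefulSkipSketch
{
    // Illustrative detection only: the PR probes the ranker with a test
    // document; here we simply check an assumed environment variable.
    private static bool AiServiceAvailable =>
        !string.IsNullOrEmpty(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT"));

    [Fact]
    public void ScoreAsync_ReturnsScores_WhenServiceConfigured()
    {
        if (!AiServiceAvailable)
        {
            // Graceful CI behavior: returning early makes the test a
            // no-op pass on machines without an AI service, instead of
            // a hard failure.
            return;
        }

        // ... real ranking assertions (FluentAssertions) would go here ...
    }
}
```

An early return keeps the sketch dependency-free; a package such as Xunit.SkippableFact could instead report the test as explicitly skipped rather than passed.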