feat: Automatic discovery of LLM files and sitemaps #444
Closed
Conversation
Implements GitHub issue #430 with a comprehensive file discovery system.

🎯 Core Features:
- FileDiscoveryService: discovers llms.txt, sitemaps, and metadata files
- Priority-based LLM file selection (llms-full.txt > llms-ctx.txt > llms.md > llms.txt)
- Database-driven configuration with fallback defaults
- Enhanced URL handler with LLM file detection
- Seamless crawling service integration

🔧 Discovery Logic:
- LLM files take highest priority and stop regular crawling
- Robots.txt sitemap extraction with fallback support
- Wildcard sitemap pattern support (sitemap-*.xml)
- Metadata file discovery (.well-known directory)
- Concurrent discovery operations with timeout handling

⚡ Performance Optimizations:
- Early return when LLM files are found (no redundant crawling)
- HEAD requests for file existence checks
- 10-second discovery timeout with graceful fallback
- Progress reporting integration

🛠️ Technical Implementation:
- Database settings: CRAWL_DISCOVERY_LLM_FILES, CRAWL_DISCOVERY_SITEMAP_FILES, CRAWL_DISCOVERY_METADATA_FILES
- Enhanced URLHandler.is_sitemap() and new is_llm_file() methods
- Comprehensive test suite with 28+ test cases
- Error handling with fallback to regular crawling

🎉 Result: LLM files now replace regular website crawling for optimal AI content consumption

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
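The enhanced `URLHandler.is_sitemap()` and new `is_llm_file()` methods mentioned above could look roughly like this minimal sketch. The filenames and wildcard patterns mirror the defaults listed in this PR; the standalone function shapes are illustrative, not the actual class API.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Default filenames/patterns as listed in this PR's configuration section.
LLM_FILENAMES = {"llms-full.txt", "llms-ctx.txt", "llms.md", "llms.txt"}
SITEMAP_PATTERNS = ("sitemap.xml", "sitemap_index.xml", "sitemap-*.xml")


def _filename(url: str) -> str:
    """Extract the last path segment of a URL (the candidate filename)."""
    return urlparse(url).path.rsplit("/", 1)[-1]


def is_llm_file(url: str) -> bool:
    """True if the URL points at a known LLM file such as llms.txt."""
    return _filename(url) in LLM_FILENAMES


def is_sitemap(url: str) -> bool:
    """True if the URL matches a sitemap name, including sitemap-*.xml wildcards."""
    name = _filename(url)
    return any(fnmatch(name, pattern) for pattern in SITEMAP_PATTERNS)
```

`fnmatch` gives the wildcard sitemap support (`sitemap-*.xml`) for free, without hand-rolled regexes.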
- Clear 📋 CRAWLING DECISION logs showing which content source is used
- 🚀 STARTING CRAWL logs showing exactly what URLs will be crawled
- Fallback logging for regular website crawling
- Makes it crystal clear in the logs which discovery method was chosen

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Pull Request
Summary
Implements automatic discovery and parsing of llms.txt, sitemap.xml, and related files to enhance crawling capabilities for AI-driven content consumption. Resolves GitHub issue #430 by adding a comprehensive file discovery system that prioritizes LLM files over regular website crawling for optimal AI content consumption.

Changes Made
- FileDiscoveryService with database-driven configuration and fallback defaults
- Priority-based LLM file selection (llms-full.txt > llms-ctx.txt > llms.md > llms.txt)
- URLHandler with is_llm_file() method and improved sitemap detection
- CrawlingService with early return logic to stop regular crawling

Type of Change
Affected Services
Testing
Test Evidence
Checklist
Breaking Changes
None. This feature is fully backwards compatible. Existing crawling behavior is preserved when discovery is disabled or fails.
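The backwards-compatible fallback path described here, combined with the 10-second discovery timeout mentioned earlier, can be sketched as follows. The function names (`discover_or_fallback` and the injected coroutines) are illustrative assumptions, not the actual CrawlingService interface.

```python
import asyncio
from typing import Awaitable, Callable, Optional


async def discover_or_fallback(
    discover: Callable[[], Awaitable[Optional[str]]],
    regular_crawl: Callable[[], Awaitable[str]],
    timeout: float = 10.0,  # the 10-second discovery timeout from this PR
) -> str:
    """Run discovery under a timeout; any failure or empty result falls back."""
    try:
        result = await asyncio.wait_for(discover(), timeout=timeout)
        if result is not None:
            # An LLM file (or sitemap) was found: early return, regular crawl skipped.
            return result
    except Exception:
        # Discovery errors and timeouts must never break crawling.
        pass
    # Existing crawling behavior is preserved unchanged.
    return await regular_crawl()
```

Because the regular crawl is reached whenever discovery is disabled, empty, slow, or throws, existing deployments see no behavioral change.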
Additional Notes
🎯 Discovery Priority Logic
📊 Performance Impact
🔧 Configuration
New database settings (configurable via admin UI):
```json
{
  "CRAWL_DISCOVERY_LLM_FILES": ["llms-full.txt", "llms-ctx.txt", "llms.md", "llms.txt"],
  "CRAWL_DISCOVERY_SITEMAP_FILES": ["sitemap.xml", "sitemap_index.xml", "sitemap-*.xml"],
  "CRAWL_DISCOVERY_METADATA_FILES": ["robots.txt", ".well-known/security.txt", "humans.txt"]
}
```

🐛 Fixes Applied
🧪 Test Coverage
🤖 Generated with Claude Code