No Overhead, Just Output
AI-Driven LLM Selection • Complexity Analysis • Cost-Efficient Compression • Multi-Provider Support
Nadir is an intelligent LLM selection framework that dynamically chooses the best AI model for a given prompt based on:
- 🚀 Complexity Analysis: Evaluates text structure, difficulty, and token usage.
- ⚡ Multi-Provider Support: Works with OpenAI, Anthropic, Gemini, and Hugging Face models.
- 💰 Cost & Speed Optimization: Balances model accuracy, response time, and pricing.
- 🔄 Adaptive Compression: Reduces token usage via truncation, keyword extraction, or AI-powered compression.
- Tailored Performance: The right LLM understands the nuances of a detailed prompt and returns responses that are precise and insightful.
- Empowered Creativity: Well-crafted prompts turn the LLM into an extension of your own thinking, helping you explore ideas and solve problems in new ways.
- Maximized Impact: Strategic model selection means each dollar spent goes toward genuinely better output rather than an overpowered (or underpowered) model.
- Guiding Detail: Complex prompts provide rich context and clear instructions, steering the LLM toward high-quality, context-aware responses.
- Enhanced Innovation: Detailed prompts let the LLM work through multi-step reasoning and intricate logic that simple prompts would miss.
- Precision and Insight: Investing in thoughtful, detailed prompts sets the stage for outputs that elevate your work.
- Invest Wisely: Advanced LLMs excel at complex prompts but cost more. The key is finding a balance that meets your needs without overspending.
- Optimize Your Approach: Start with simple prompts to gauge performance, then add complexity as needed; this iterative approach keeps the cost/quality trade-off under control.
- Maximize ROI: Matching the depth of your prompts to the right LLM gives you the full power of the model you pay for while keeping expenses in check.
- Dynamic Model Selection: Automatically choose the best LLM for any given task based on complexity and cost thresholds.
- Cost Optimization: Minimize token usage and costs with intelligent prompt compression.
- Multi-Provider Support: Seamless integration with OpenAI, Anthropic, Google Gemini, and Hugging Face.
- Extensible Design: Add your own complexity analyzers, compression strategies, or new providers effortlessly (see the sketch after this list).
- Rich Insights: Generate detailed metrics on token usage, costs, and model performance.
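To illustrate the extensibility point, here is a minimal sketch of a custom complexity analyzer. The interface is assumed from the examples later in this README (an analyzer exposes `get_complexity_details()` and is passed to `AutoSelector`); the class name, dictionary keys, and heuristic are hypothetical, so check the actual base class in the source before relying on this.

```python
# Hypothetical custom analyzer -- interface assumed from the examples below,
# not taken from Nadir's documentation.
from nadir.llm_selector.selector.auto import AutoSelector


class WordCountComplexityAnalyzer:
    """Toy analyzer that scores prompts purely by word count."""

    def get_complexity_details(self, prompt: str) -> dict:
        words = len(prompt.split())
        # Map word count onto a 0-100 score, mirroring Nadir's scoring range.
        score = min(100, words)
        return {"word_count": words, "complexity_score": score}


# Plug the custom analyzer into the selector, as shown with
# LLMComplexityAnalyzer in the examples below.
nadir = AutoSelector(complexity_analyzer=WordCountComplexityAnalyzer())
```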

Install Nadir using pip:

```bash
pip install nadir-llm
```
Create a `.env` file to store your API keys:

```bash
# .env file
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_google_ai_key
HUGGINGFACE_API_KEY=your_huggingface_api_key
```
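If your environment does not pick up the `.env` file automatically, one common convention (an assumption here, not something this README requires) is to load it explicitly with `python-dotenv` before initializing Nadir:

```python
# Optional: load the .env file into os.environ with python-dotenv.
# This is a general convention; the README does not state whether Nadir
# loads the file for you.
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
```

With the keys available in the environment, the quick start below picks a model automatically and generates a response: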
```python
from nadir.llm_selector.selector.auto import AutoSelector

nadir = AutoSelector()

# Generate a response with the automatically selected model
prompt = "Explain quantum entanglement in simple terms."
response = nadir.generate_response(prompt)
print(response)

# Inspect the complexity metrics Nadir computed for a prompt
complexity_details = nadir.get_complexity_details("What is the speed of light in vacuum?")
print(complexity_details)

# List the models Nadir can choose from
models = nadir.list_available_models()
print(models)
```
Analyze the complexity of a prompt (here, a code snippet) and let Nadir pick a model accordingly:

```python
from nadir.complexity.llm import LLMComplexityAnalyzer
from nadir.llm_selector.selector.auto import AutoSelector

# Initialize the LLM-based complexity analyzer
complexity_analyzer = LLMComplexityAnalyzer()

# Sample Python code
code_snippet = """
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
"""

# Get detailed complexity metrics
complexity_details = complexity_analyzer.get_complexity_details(code_snippet)
print("Complexity Details:", complexity_details)

# Initialize Nadir and dynamically select the best model
nadir = AutoSelector(complexity_analyzer=complexity_analyzer)
selected_model = nadir.select_model(code_snippet)
print("Selected Model:", selected_model.name)
```
Compress a long prompt before selecting a model:

```python
from nadir.compression import GeminiCompressor
from nadir.llm_selector.selector.auto import AutoSelector

# Initialize Gemini-based prompt compression
compressor = GeminiCompressor()

# A very long prompt
long_prompt = """
Machine learning models require extensive preprocessing and feature engineering.
However, feature selection techniques vary widely based on the type of data.
For example, in text-based datasets, TF-IDF, word embeddings, and transformers
play a significant role, whereas in tabular data, methods like PCA, correlation
analysis, and decision tree-based feature selection are preferred.
"""

# Compress the prompt
compressed_prompt = compressor.compress(long_prompt, method="auto", max_tokens=100)
print("Compressed Prompt:", compressed_prompt)

# Use Nadir to select the best model for the compressed prompt
nadir = AutoSelector()
selected_model = nadir.select_model(compressed_prompt)
print("Selected Model:", selected_model.name)
```
Combine compression, complexity analysis, and model selection in one flow:

```python
from nadir.compression import GeminiCompressor
from nadir.complexity.llm import LLMComplexityAnalyzer
from nadir.llm_selector.selector.auto import AutoSelector

# Initialize complexity analyzer and compressor
complexity_analyzer = LLMComplexityAnalyzer()
compressor = GeminiCompressor()

# A long, complex prompt
long_prompt = """
Deep learning models often suffer from overfitting when trained on small datasets.
To combat this, techniques such as dropout, batch normalization, and L2 regularization
are widely used. Furthermore, transfer learning from pre-trained models has become
a popular method for reducing the need for large labeled datasets.
"""

# Step 1: Compress the prompt
compressed_prompt = compressor.compress(long_prompt, method="auto", max_tokens=80)
print("Compressed Prompt:", compressed_prompt)

# Step 2: Analyze complexity
complexity_details = complexity_analyzer.get_complexity_details(compressed_prompt)
print("Complexity Details:", complexity_details)

# Step 3: Select the best model
nadir = AutoSelector(complexity_analyzer=complexity_analyzer)
selected_model = nadir.select_model(compressed_prompt, complexity_details)
print("Selected Model:", selected_model.name)
```
How Nadir handles a prompt:
1. Uses `LLMComplexityAnalyzer` to evaluate token usage, linguistic difficulty, and structural complexity, and assigns a complexity score (0-100).
2. Compares the complexity score against the pre-configured LLM models and chooses the best trade-off between cost, accuracy, and speed.
3. Compresses long prompts when necessary.
4. Calls the selected model and tracks token usage and cost.
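The steps above can be pictured with the rough sketch below. It is a paraphrase of the flow, not Nadir's actual internals: the model attributes (`max_complexity`, `cost_per_token`, `generate`) and the dictionary keys (`complexity_score`, `token_count`) are placeholders chosen for illustration.

```python
# Illustrative paraphrase of the selection pipeline -- NOT Nadir's real
# internals. Model attributes and dict keys are placeholders.
def select_and_run(prompt, analyzer, compressor, models, max_prompt_tokens=512):
    # 1. Score the prompt (0-100) with the complexity analyzer.
    details = analyzer.get_complexity_details(prompt)
    score = details["complexity_score"]  # assumed key name

    # 2. Compress overly long prompts before they reach a paid model.
    if details.get("token_count", 0) > max_prompt_tokens:
        prompt = compressor.compress(prompt, method="auto", max_tokens=max_prompt_tokens)

    # 3. Among models whose complexity ceiling covers the score, take the
    #    cheapest one (the cost/accuracy/speed trade-off).
    eligible = [m for m in models if m.max_complexity >= score] or [models[-1]]
    model = min(eligible, key=lambda m: m.cost_per_token)

    # 4. Call the selected model and report usage for cost tracking.
    response = model.generate(prompt)
    return response, {"model": model.name, "complexity_score": score}
```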
```bash
# Fork the repo, then clone your fork and create a feature branch
git clone https://github.com/your-username/Nadir.git
cd Nadir
git checkout -b feature-improvement

# Run the test suite
pytest tests/

# Commit and push your changes
git add .
git commit -m "Added a new complexity metric"
git push origin feature-improvement
```
Open a PR on GitHub 🚀
Join the conversation and get support in our Discord Community.
💬 Have questions or suggestions? Create an Issue or Start a Discussion on GitHub.
🔥 Happy coding with Nadir! 🚀