> [!NOTE]
> This toolkit supports Anthropic's newest Claude 3.5 Sonnet model (as of October 22, 2024).
The Claude AI Toolkit makes it easy to use Anthropic's latest Claude 3.5 'Sonnet' model, along with the Claude 3 'Opus', 'Sonnet', and 'Haiku' language models, for creating chatbots, generating text, and analyzing images. It's designed for everyone from beginners to experienced developers, letting you add AI features to a project with a few simple commands. While it offers simplicity and lightweight integration, it doesn't compromise on power: experienced developers can access the full suite of advanced options available via the API, ensuring robust customization and control. This toolkit is ideal for anyone looking to tap into advanced AI efficiently without getting bogged down in technical details, while still providing the depth needed for complex project requirements.
- Conversational AI: Create interactive, real-time chat experiences (chatbots) or AI assistants.
- Image Captioning: Generate detailed descriptions and insights or create captions from images.
- Text Generation: Produce coherent and contextually relevant text and answers from simple prompts.
- Highly Customizable: Tailor settings like streaming output, system prompts, sampling temperature and more to suit your specific requirements.
- Lightweight Integration: Efficiently designed with minimal dependencies, requiring only the `requests` package for core functionality.
- Python 3.x
- An API key from Anthropic

The following Python packages are required:

- `requests`: For making HTTP requests to the Claude API.

The following Python packages are optional:

- `python-dotenv`: For managing API keys and other environment variables.
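To show what that single required dependency is doing, here is a minimal sketch of a direct call to Anthropic's Messages API using only `requests`. The endpoint, headers, and request body follow Anthropic's public API documentation; the toolkit's own internals may differ.

```python
# Minimal sketch: a direct Messages API call using only `requests`.
# This illustrates the kind of HTTP request a lightweight wrapper makes;
# the toolkit's actual implementation may differ.
import os
import requests

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["CLAUDE_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello, Claude!"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["content"][0]["text"])
```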
To use the Claude AI Toolkit, clone the repository to your local machine and install the required Python packages.
1. Clone the repository:

   ```bash
   git clone https://github.com/RMNCLDYO/claude-ai-toolkit.git
   ```

2. Navigate to the repository folder:

   ```bash
   cd claude-ai-toolkit
   ```

3. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```
1. Obtain an API key from Anthropic.
2. You have three options for managing your API key:
1. Setting it as an environment variable on your device (recommended for everyday use):

   - Navigate to your terminal.
   - Add your API key like so:

     ```bash
     export CLAUDE_API_KEY=your_api_key
     ```

   This method allows the API key to be loaded automatically when using the wrapper or CLI.
2. Using an `.env` file (recommended for development):

   - Install `python-dotenv` if you haven't already:

     ```bash
     pip install python-dotenv
     ```

   - Create a `.env` file in the project's root directory.
   - Add your API key to the `.env` file like so:

     ```
     CLAUDE_API_KEY=your_api_key
     ```

   This method allows the API key to be loaded automatically when using the wrapper or CLI, assuming you have `python-dotenv` installed and set up correctly.
3. Direct input:

   If you prefer not to use a `.env` file, you can pass your API key directly as an argument to the CLI or the wrapper functions.

   **CLI**

   ```bash
   --api_key "your_api_key"
   ```

   **Wrapper**

   ```python
   api_key="your_api_key"
   ```

   This method requires supplying your API key manually each time you initiate an API call, which keeps it flexible across different deployment environments. A short sketch of how these options typically resolve at runtime follows this list.
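To make the three options above concrete, here is a short, hypothetical sketch of how an API key might be resolved at runtime. The lookup order is an assumption for illustration (this toolkit's actual logic may differ); the `os.environ` and `python-dotenv` calls themselves are standard.

```python
# Hypothetical key-resolution helper illustrating the three options above.
# The toolkit's actual lookup logic may differ.
import os

def resolve_api_key(explicit_key=None):
    # Option 3: a directly passed key takes precedence.
    if explicit_key:
        return explicit_key
    # Option 2: load a .env file if python-dotenv is available.
    try:
        from dotenv import load_dotenv
        load_dotenv()  # reads CLAUDE_API_KEY from .env into the environment
    except ImportError:
        pass  # python-dotenv is optional
    # Option 1: fall back to the environment variable.
    key = os.environ.get("CLAUDE_API_KEY")
    if not key:
        raise RuntimeError("No API key found; see the options above.")
    return key
```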
The Claude AI Toolkit can be used in three different modes: `Chat`, `Text`, and `Vision`. Each mode is designed for specific types of interactions with the suite of models.
Chat mode is intended for chatting with an AI model (similar to a chatbot) or building conversational applications.
**CLI**

```bash
python cli.py --chat
```

**Wrapper**

```python
from claude import Chat

Chat().run()
```
An executable version of this example can be found here. (You must move this file to the root folder before running the program.)
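Chat mode can also be combined with the configuration options listed in the table further below. As a hedged sketch (passing these options to `run()` is an assumption inferred from that table, not a documented signature), a streaming chat with a system prompt might look like:

```python
from claude import Chat

# Assumed parameter placement, based on the options table below;
# consult the wrapper itself for the authoritative signature.
Chat().run(
    system_prompt="You are an advanced AI assistant.",
    stream=True,
)
```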
Text mode is suitable for generating text content based on a provided prompt.
**CLI**

```bash
python cli.py --text --prompt "Write a haiku about robots."
```

**Wrapper**

```python
from claude import Text

Text().run(prompt="Write a haiku about robots.")
```
An executable version of this example can be found here. (You must move this file to the root folder before running the program.)
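Text mode accepts the same configuration options documented in the table further below. As a sketch (parameter placement on `run()` is an assumption inferred from the example above), you might pin the model and cap the output length like this:

```python
from claude import Text

# Assumed parameter placement, inferred from the example above and the
# options table below; check the wrapper's signature for the exact form.
Text().run(
    prompt="Write a haiku about robots.",
    model="claude-3-5-sonnet-20241022",
    max_tokens=256,
)
```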
Vision mode allows for generating text based on a combination of text prompts and images.
**CLI**

```bash
python cli.py --vision --prompt "Describe this image." --image "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
```

**Wrapper**

```python
from claude import Vision

Vision().run(prompt="Describe this image.", image="https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg")
```
An executable version of this example can be found here. (You must move this file to the root folder before running the program.)
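The options table below notes that `--image` accepts either a file path or a URL. Assuming the wrapper mirrors the CLI here, a local image could be passed like this (the path is a placeholder):

```python
from claude import Vision

# Assumed: the wrapper accepts a local file path as well as a URL, as the
# options table below indicates for --image. The path is a placeholder.
Vision().run(
    prompt="Describe this image.",
    image="path/to/local/image.jpg",
)
```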
| Description | CLI Flags | CLI Usage | Wrapper Usage |
|---|---|---|---|
| Enable chat mode | `-c`, `--chat` | `--chat` | See mode usage above. |
| Enable text mode | `-t`, `--text` | `--text` | See mode usage above. |
| Enable vision mode | `-v`, `--vision` | `--vision` | See mode usage above. |
| User prompt | `-p`, `--prompt` | `--prompt "Write a haiku about robots."` | `prompt="Write a haiku about robots."` |
| Image file path or URL | `-i`, `--image` | `--prompt "Describe this image." --image "image_path_or_url"` | `prompt="Describe this image.", image="image_path_or_url"` |
| API key for authentication | `-a`, `--api_key` | `--api_key "your_api_key"` | `api_key="your_api_key"` |
| Model name | `-m`, `--model` | `--model "claude-3-5-sonnet-20241022"` | `model="claude-3-5-sonnet-20241022"` |
| Enable streaming mode | `-s`, `--stream` | `--stream` | `stream=True` |
| System prompt (instructions) | `-sp`, `--system_prompt` | `--system_prompt "You are an advanced AI assistant."` | `system_prompt="You are an advanced AI assistant."` |
| Maximum tokens to generate | `-mt`, `--max_tokens` | `--max_tokens 1024` | `max_tokens=1024` |
| Sampling temperature | `-tm`, `--temperature` | `--temperature 0.7` | `temperature=0.7` |
| Nucleus sampling threshold | `-tp`, `--top_p` | `--top_p 0.9` | `top_p=0.9` |
| Top-k sampling threshold | `-tk`, `--top_k` | `--top_k 40` | `top_k=40` |
| Stop sequences for completion | `-ss`, `--stop_sequences` | `--stop_sequences ["\n", "."]` | `stop_sequences=["\n", "."]` |
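The sampling options in this table can plausibly be combined in a single wrapper call. The following sketch assumes they are all passed to `run()` alongside the prompt, which is an inference from the mode examples above rather than a documented signature:

```python
from claude import Text

# Hypothetical combined call mapping several table rows onto one request;
# passing these options to run() is an assumption, not a documented API.
Text().run(
    prompt="Write a haiku about robots.",
    temperature=0.7,
    top_p=0.9,
    top_k=40,
    stop_sequences=["\n\n"],
)
```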
To exit the program at any time, type `exit` or `quit`. This works the same whether you're interacting via the CLI or through the Python wrapper, so you can conclude your work with the Claude AI Toolkit cleanly without resorting to interrupt signals or forcibly closing the terminal or command prompt.
| Model | Max Tokens |
|---|---|
| `claude-3-5-sonnet-20241022` | 4096 |
| `claude-3-opus-20240229` | 4096 |
| `claude-3-haiku-20240307` | 4096 |
For optimal performance, Anthropic recommends resizing your images before uploading if they are likely to exceed size or token limits. If an image's long edge is more than 1568 pixels, or the image is more than ~1600 tokens, it will first be scaled down, preserving aspect ratio, until it is within the size limits. Resizing an oversized input increases time-to-first-token latency without giving you any additional model performance. Very small images, under 200 pixels on any given edge, may lead to degraded performance.
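If you want to pre-scale images yourself before passing them to vision mode, here is a small sketch that enforces the 1568-pixel long-edge guideline above. It uses Pillow, which is not a dependency of this toolkit (`pip install Pillow`); the file paths are placeholders.

```python
# Sketch: downscale an image so its long edge is at most 1568 px, per the
# guideline above. Requires Pillow (pip install Pillow), which is not a
# dependency of this toolkit.
from PIL import Image

def resize_for_claude(path, out_path, max_long_edge=1568):
    with Image.open(path) as img:
        longest = max(img.size)
        if longest > max_long_edge:
            scale = max_long_edge / longest
            new_size = (round(img.width * scale), round(img.height * scale))
            img = img.resize(new_size, Image.LANCZOS)
        img.save(out_path)

# Placeholder paths for illustration.
resize_for_claude("large_photo.jpg", "resized_photo.jpg")
```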
Contributions are welcome!
Please refer to CONTRIBUTING.md for detailed guidelines on how to contribute to this project.
Encountered a bug? We'd love to hear about it. Please follow these steps to report any issues:
- Check if the issue has already been reported.
- Use the Bug Report template to create a detailed report.
- Submit the report here.
Your report will help us make the project better for everyone.
Got an idea for a new feature? Feel free to suggest it. Here's how:
- Check if the feature has already been suggested or implemented.
- Use the Feature Request template to create a detailed request.
- Submit the request here.
Your suggestions for improvements are always welcome.
Stay up-to-date with the latest changes and improvements in each version:
- CHANGELOG.md provides detailed descriptions of each release.
Your security is important to us. If you discover a security vulnerability, please follow our responsible disclosure guidelines found in SECURITY.md, and refrain from disclosing any vulnerability publicly until it has been reported and addressed.
Licensed under the MIT License. See LICENSE for details.