-run: poetry run pytest tests/ -k 'not llm and not openai and not gemini and not anthropic and not cohere and not vertexai' && poetry run pytest tests/llm/test_cohere
+run: uv run pytest tests/ -k 'not llm and not openai and not gemini and not anthropic and not cohere and not vertexai'
README.md (+11 −2)
@@ -1,6 +1,6 @@
# Instructor, The Most Popular Library for Simple Structured Outputs
-Instructor is the most popular Python library for working with structured outputs from large language models (LLMs), boasting over 600,000 monthly downloads. Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows with the community's top choice!
+Instructor is the most popular Python library for working with structured outputs from large language models (LLMs), boasting over 1 million monthly downloads. Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows with the community's top choice!
1. A pre-execution hook that logs all kwargs passed to the function.
2. An exception hook that logs any exceptions that occur during execution.
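The two hooks above can be sketched generically. `completion:kwargs` and `completion:error` are real instructor event names, but the `Hooks` class and `create` function below are a simplified stand-in for illustration, not instructor's own implementation:

```python
# Minimal sketch of the hook pattern: a pre-execution hook that records the
# kwargs of every call, and an exception hook that records errors.
# The Hooks class and create() are hypothetical stand-ins for illustration.

class Hooks:
    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        # Register a handler for a named event
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args, **kwargs):
        # Invoke every handler registered for the event
        for handler in self._handlers.get(event, []):
            handler(*args, **kwargs)


hooks = Hooks()
seen_kwargs, seen_errors = [], []

hooks.on("completion:kwargs", lambda **kw: seen_kwargs.append(kw))
hooks.on("completion:error", lambda exc: seen_errors.append(exc))


def create(**kwargs):
    hooks.emit("completion:kwargs", **kwargs)  # 1. pre-execution hook
    try:
        raise RuntimeError("simulated failure")
    except Exception as exc:
        hooks.emit("completion:error", exc)    # 2. exception hook
        raise
```

The same registration shape (`client.on(event, handler)`) is what instructor exposes on its patched clients.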
@@ -513,6 +514,14 @@ We invite you to contribute to evals in `pytest` as a way to monitor the quality
If you want to help, check out some of the issues marked as `good-first-issue` or `help-wanted` found [here](https://github.com/jxnl/instructor/labels/good%20first%20issue). They could be anything from code improvements, a guest blog post, or a new cookbook.
+Here's a quick list of commands that you can run to get started. We're using `uv` to manage our dependencies, so make sure you have that installed.
+1. `uv sync --all-extras --group <dependency groups you'd like to install>`: This installs the project's dependencies using `uv`; pass only the dependency groups you actually need.
+2. `uv run pytest`: This runs the tests with `pytest`. If you're pushing up a new PR, make sure that you've written some tests and that they pass locally for you.
+We use `ruff` and `pyright` for linting and type checking, so make sure those are passing when you push up a PR. You can check `pyright` by running `uv run pyright` and `ruff` with `uv run ruff check` locally.
## CLI
We also provide some added CLI functionality for easy convenience:
docs/concepts/prompt_caching.md (+35 −27)
@@ -17,23 +17,16 @@ This optimization is especially useful for applications making multiple API call
Prompt Caching is enabled for the following models:
-* gpt-4o
-* gpt-4o-mini
-* o1-preview
-* o1-mini
+- gpt-4o
+- gpt-4o-mini
+- o1-preview
+- o1-mini
Caching is based on prefix matching, so if you're using a system prompt that contains a common set of instructions, you're likely to see a cache hit as long as you move all variable parts of the prompt to the end of the message when possible.
-
## Prompt Caching in Anthropic
-The `anthropic.beta.prompt_caching.messages.create` method enables you to:
-
-1. Cache specific prompt portions
-2. Reuse cached content in subsequent calls
-3. Reduce processed data per request
-
-By implementing prompt caching, you can potentially enhance efficiency and reduce costs, especially when dealing with large, shared contexts across multiple API interactions.
+Prompt Caching is now generally available for Anthropic. This enables you to cache specific prompt portions, reuse cached content in subsequent calls, and reduce processed data per request.
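A request that marks a large shared context for caching uses `cache_control` blocks in the `system` field, which is Anthropic's documented mechanism. The sketch below only builds the request payload and does not call the API; the document text, question, and model name are placeholders:

```python
# Sketch of an Anthropic request payload with a cache_control block.
# LARGE_SHARED_CONTEXT and build_request are hypothetical placeholders.

LARGE_SHARED_CONTEXT = "<full text of the shared document goes here>"


def build_request(question: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": "You answer questions about the provided text.",
            },
            {
                # Everything up to and including this block is marked
                # as cacheable and can be reused on subsequent calls.
                "type": "text",
                "text": LARGE_SHARED_CONTEXT,
                "cache_control": {"type": "ephemeral"},
            },
        ],
        "messages": [{"role": "user", "content": question}],
    }
```

Only the trailing user message varies between calls, so repeated questions against the same document reuse the cached system blocks.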
??? note "Source Text"
@@ -182,18 +175,11 @@ By implementing prompt caching, you can potentially enhance efficiency and reduc
1. Since the feature is still in beta, we need to manually pass in the function that we're looking to patch.
+print(completion)
+# Message(
+#     id='msg_01QcqjktYc1PXL8nk7y5hkMV',
+#     content=[
+#         ToolUseBlock(
+#             id='toolu_019wABRzQxtSbXeuuRwvJo15',
+#             input={
+#                 'name': 'Jane Austen',
+#                 'description': 'A renowned English novelist of the early 19th century, known for her wit, humor, and keen observations of human nature. She is the author of
+#                 several classic novels including "Pride and Prejudice," "Emma," "Sense and Sensibility," and "Mansfield Park." Austen\'s writing is characterized by its subtlety, delicate touch,
+#                 and ability to create memorable characters. Her work often involves social commentary and explores themes of love, marriage, and societal expectations in Regency-era England.'
+#         )