## Tuning the prompts (Basic)
You can tune your prompts using a larger model to help them run more efficiently and give you better results. This is done using an optimizer like `AxBootstrapFewShot` with examples from the popular `HotPotQA` dataset. The optimizer generates demonstrations (`demos`) that, when used with the prompt, help improve its efficiency.
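To make the role of these `demos` concrete, here is a minimal, library-free sketch of how a set of tuned demonstrations might be rendered ahead of a question, few-shot style. The `Demo` type, `buildPrompt` helper, and sample data below are illustrative assumptions, not the Ax API:

```typescript
// Illustrative only: shows the shape of tuned few-shot demos and how an
// optimizer's output might be prepended to a prompt. Not the Ax API.
type Demo = { question: string; answer: string };

// Pretend these were produced by an optimizer run over HotPotQA examples.
const demos: Demo[] = [
  { question: "Who wrote 'De Revolutionibus'?", answer: "Nicolaus Copernicus" },
  { question: "What is the capital of Australia?", answer: "Canberra" },
];

// Render the demos ahead of the real question, few-shot style.
function buildPrompt(demos: Demo[], question: string): string {
  const shots = demos
    .map((d) => `Question: ${d.question}\nAnswer: ${d.answer}`)
    .join("\n\n");
  return `${shots}\n\nQuestion: ${question}\nAnswer:`;
}

const prompt = buildPrompt(demos, "Which river flows through Paris?");
console.log(prompt.split("\n\n").length); // 3 blocks: two demos + the query
```

The point is only that tuning produces data (the demos), which the runtime then splices into every call; the prompt program itself stays unchanged.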
## Tuning the prompts (Advanced, MiPRO v2)
MiPRO v2 is an advanced prompt optimization framework that uses Bayesian optimization to automatically find the best instructions, demonstrations, and examples for your LLM programs. By systematically exploring different prompt configurations, MiPRO v2 helps maximize model performance without manual tuning.
### Key Features
- **Instruction optimization**: Automatically generates and tests multiple instruction candidates
- **Few-shot example selection**: Finds optimal demonstrations from your dataset
3. Selects labeled examples directly from your dataset
4. Uses Bayesian optimization to find the optimal combination
5. Applies the best configuration to your program
By exploring the space of possible prompt configurations and systematically measuring performance, MiPRO v2 delivers optimized prompts that maximize your model's effectiveness.
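The steps above can be pictured as a search loop over candidate configurations. The sketch below substitutes an exhaustive sweep and a mock scoring function for MiPRO v2's actual Bayesian optimization and validation-set metric; every name in it is illustrative, not the Ax API:

```typescript
// A toy stand-in for the MiPRO v2 search loop: score each combination of
// instruction candidate and demo set, keep the best. Real MiPRO v2 uses
// Bayesian optimization instead of this exhaustive sweep, and the metric
// would run the program against a validation set rather than a formula.
type Config = { instruction: string; demoSet: number };

const instructions = [
  "Answer concisely.",
  "Think step by step, then answer.",
  "Answer with a single word.",
];
const demoSets = [0, 1, 2]; // indices into bootstrapped demo collections

// Mock metric: a deterministic stand-in for "run program, measure accuracy".
function score(c: Config): number {
  return (c.instruction.length % 7) / 10 + c.demoSet * 0.05;
}

let best: { config: Config; score: number } | null = null;
for (const instruction of instructions) {
  for (const demoSet of demoSets) {
    const config = { instruction, demoSet };
    const s = score(config);
    if (!best || s > best.score) best = { config, score: s };
  }
}

console.log(best!.config); // the highest-scoring instruction/demo pair
```

Swapping the sweep for a Bayesian optimizer changes which configurations get scored (fewer, chosen adaptively), not the overall shape: propose a configuration, measure it, keep the best.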
## Built-in Functions
| Function | Name | Description |

| rag-docs.ts | Convert PDF to text and embed for rag search |
| react.ts | Use function calling and reasoning to answer questions |
| agent.ts | Agent framework, agents can use other agents, tools etc |
| streaming1.ts | Output fields validation while streaming |
| streaming2.ts | Per output field validation while streaming |
| streaming3.ts | End-to-end streaming example `streamingForward()` |
| simple-classify.ts | Use a simple classifier to classify stuff |
| mcp-client-memory.ts | Example of using an MCP server for memory with Ax |
| mcp-client-blender.ts | Example of using an MCP server for Blender with Ax |
| tune-bootstrap.ts | Use bootstrap optimizer to improve prompt efficiency |
| tune-mipro.ts | Use MiPRO v2 optimizer to improve prompt efficiency |
| tune-usage.ts | Use the optimized tuned prompts |