# Train of Thought

By using two agents to break down a prompt, generate a chain of thought, and verify subproblem outputs, we saw accuracy and safety improvements over a one-shot approach. This was inspired by OpenAI's recent o1 model as well as research in the field of multi-agent LLMs. Currently, the project uses two instances of Google's Gemini Flash 1.5 model, but the approach generalizes to any number and type of models.
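The loop above can be sketched as follows. This is a minimal illustration of the pattern, not the project's actual implementation: the `decompose`, `solve`, and `verify` callables are hypothetical stand-ins for what would, in the real project, each wrap a call to a Gemini Flash 1.5 instance.

```python
from typing import Callable

def train_of_thought(prompt: str,
                     decompose: Callable[[str], list[str]],
                     solve: Callable[[str], str],
                     verify: Callable[[str, str], bool],
                     max_retries: int = 2) -> list[str]:
    """Break the prompt into subproblems, solve each one, and keep only
    answers that pass verification, retrying a bounded number of times.
    In the two-agent setup, one agent handles decompose/verify and the
    other handles solve."""
    answers = []
    for sub in decompose(prompt):
        for _ in range(max_retries + 1):
            answer = solve(sub)
            if verify(sub, answer):
                answers.append(answer)
                break  # accept the first verified answer
    return answers

# Toy stand-ins so the sketch runs without an LLM backend.
if __name__ == "__main__":
    decompose = lambda p: [s.strip() + "?" for s in p.split("?") if s.strip()]
    solve = lambda s: f"answer to: {s}"
    verify = lambda s, a: s in a
    print(train_of_thought("What is 2+2? Why is the sky blue?",
                           decompose, solve, verify))
```

The verification step is what distinguishes this from a plain one-shot prompt: an answer that fails the check is regenerated rather than passed along, which is where the accuracy and safety gains come from.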

Note: Agent backtracking was removed due to token limits.

Learn more at our presentation.