- 🦜️🔗 LangChain-mini
- Disclaimer
- Setup
- How the Google Search API works
- The Calculator tool
- The prompt template
- The Reason-Action (ReAct) loop
- OpenAI Assistant API
- Tracing the Agent model "How many five year periods are in the current year? Be accurate!"
- Tracing "What was the highest temperature (in Celsius) in Santa Cruz de Tenerife yesterday?"
- Tracing a Chat about "the current president of Poland"
- Tracing a chat about Rubik's Cube
- References
This is a very simple re-implementation of LangChain, in ~100 lines of code. In essence, it is an LLM (GPT-3.5) powered chat application that is able to use tools (Google search and a calculator) in order to hold conversations and answer questions.
This is not intended to be a replacement for LangChain; instead, it was built for fun and educational purposes. If you're interested in how LangChain and similar tools work, this is a good starting point.
For more information about this project, read
- Re-implementing LangChain in 100 lines of code by Colin Eberhardt (May 2023). This repo is a fork of langchain-mini
- Prompt Engineering 101 tutorial. Ask the author for access.
This Course (the "Course") includes content generated by AI systems and is provided for educational purposes only.
The information and recommendations provided in the Course by AI systems are based on algorithms, data analysis, and/or large language models. ALL AI GENERATED CONTENT SHOULD BE REVIEWED CAREFULLY. It is the user's responsibility to independently verify the AI generated content and exercise their own discretion and critical thinking in regard to using such information. While efforts have been made to ensure the accuracy and reliability of the AI generated content, the organizers MAKE NO REPRESENTATIONS AS TO THE ACCURACY, COMPLETENESS, VALIDITY, OR SUITABILITY OF ANY AI GENERATED INFORMATION IN THIS COURSE AND WILL NOT BE LIABLE FOR ANY ERRORS, OMISSIONS, OR DELAYS IN THIS INFORMATION OR ANY INJURIES, LOSS, OR DAMAGES ARISING FROM ITS USE. All information is provided on an as-is basis. Nothing contained in the Course or the Course materials constitutes a solicitation, recommendation, or endorsement by the organizers of a particular AI model or AI generated content.
Many parts of this course use the ChatGPT API. If you exceed the free tier and want to continue using the API, one option is to provide a credit card to OpenAI and pay for your usage.
Run with node >= v18:

➜ langchain-mini git:(main) nvm use v20
Now using node v20.5.0 (npm v9.8.0)

Install dependencies:

% npm install
You'll need both an OpenAI and a SerpApi key. These can be supplied to the application via a .env file:
OPENAI_API_KEY="..."
SERPAPI_API_KEY="..."
See the notes at /serpapi/README.md for details.
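As a rough illustration of what loading the .env file involves, here is a minimal sketch of parsing that format. The parseEnv helper and the sample values are hypothetical; real projects typically rely on the dotenv package, which does this (and more) and copies the values into process.env:

```javascript
// Minimal sketch of .env parsing, for illustration only.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    // matches KEY=value or KEY="value"; non-matching lines are ignored
    const m = line.match(/^\s*(\w+)\s*=\s*"?([^"\n]*)"?\s*$/);
    if (m) vars[m[1]] = m[2];
  }
  return vars;
}

const sample = 'OPENAI_API_KEY="sk-..."\nSERPAPI_API_KEY="abc123"';
console.log(parseEnv(sample).SERPAPI_API_KEY); // → abc123
```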
The most important thing about the calculator is that it has to return an appropriate answer to the LLM when evaluation fails. If there are errors, the returned string asks the LLM to try again: Please reformulate the expression. The calculator tool has failed with error:\n'${e}':
import { Parser } from "expr-eval";

// blue() is assumed to be a console color helper (e.g. from a color library)
const calculator = (input) => {
  try {
    let answer = Parser.evaluate(input).toString();
    console.log(blue(`Calculator answer: ${answer}\n***********`));
    return answer;
  } catch (e) {
    console.log(blue(`Calculator got errors: ${e}\n***********`));
    return `Please reformulate the expression. The calculator tool has failed with error:\n'${e}'`;
  }
};
The assets/templates/prompt.txt file is a template from which we build the instructions for the LLM on each step of the chat. The LLM answers to the previous questions are appended to the template as Thoughts. The results of the calls to the tools are added to the template as Observations.
Answer the following questions as best you can. You have access to the following tools:
{{tools}}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{{toolnames}}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {{question}}
Thought:
Notice the Thought: field at the end. This is where the program will append the LLM response.
The template is first filled inside the answerQuestion function with the question and the available tools:
const promptTemplate = fs.readFileSync("assets/templates/prompt.txt", "utf8");
/* ... */
let prompt = render(promptTemplate, {
  question,
  tools: Object.keys(tools)
    .map(toolname => `${toolname}: ${tools[toolname].description}`)
    .join("\n"),
  toolnames: Object.keys(tools).join(",")
});
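The render helper itself is not shown above. A minimal stand-in, assuming a simple {{name}} placeholder syntax (the actual project may use a templating library, so treat this as an assumption), could look like this:

```javascript
// Hypothetical minimal render(): substitutes {{name}} placeholders with
// matching properties of the data object; unknown names become "".
const render = (template, data) =>
  template.replace(/{{(\w+)}}/g, (_, name) => data[name] ?? "");

console.log(render("Question: {{question}}", { question: "What is 2+2?" }));
// → Question: What is 2+2?
```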
Then we want to iteratively:
- ask the LLM to give us a thought, that is, to decide which tool to use (or no tool at all):

  const response = await completePrompt(prompt);
  prompt += response;

  Since the prompt ends with the Thought: field, the LLM response is added as a Thought. The LLM has also filled in the Action: and Action Input: entries. We then extract the values of the action and actionInput fields from the LLM response using regular expressions:

  const action = response.match(/Action: (.*)/)?.[1];
  if (action) {
    // execute the action specified by the LLM
    const actionInput = response.match(/Action Input: "?(.*)"?/)?.[1];
    ...
  }

- execute the corresponding tool (calculator or search) based on the given action, supplying it with the Action Input:

  const result = await tools[action.trim().toLowerCase()].execute(actionInput);

- append the result of the tool to the prompt as an Observation:

  prompt += `Observation: ${result}\n`;

- this process continues until the LLM orchestrator determines that it has enough information and returns a Final Answer:

  const action = response.match(/Action: (.*)/)?.[1];
  if (action) {
    ...
  } else {
    return response.match(/Final Answer: (.*)/)?.[1]; // the text after the colon
  }
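Putting these steps together, the whole loop can be sketched as below. Here completePrompt is replaced by a scripted mock LLM and the calculator by a canned result, so this is an illustration of the control flow rather than the project's actual implementation:

```javascript
// Sketch of the full ReAct loop: ask the LLM, run the chosen tool, append
// the Observation, and repeat until the response contains a Final Answer.
async function answerQuestion(prompt, completePrompt, tools) {
  while (true) {
    const response = await completePrompt(prompt);
    prompt += response;
    const action = response.match(/Action: (.*)/)?.[1];
    if (action) {
      const actionInput = response.match(/Action Input: "?(.*?)"?$/m)?.[1];
      const result = await tools[action.trim().toLowerCase()].execute(actionInput);
      prompt += `Observation: ${result}\n`;
    } else {
      return response.match(/Final Answer: (.*)/)?.[1];
    }
  }
}

// A scripted stand-in for the LLM: first it picks the calculator tool,
// then (after seeing the Observation) it produces the final answer.
function makeMockLLM() {
  const script = [
    'Thought: I should use the calculator.\nAction: calculator\nAction Input: "2 + 2"\n',
    "Thought: I now know the final answer\nFinal Answer: 4\n",
  ];
  let step = 0;
  return async () => script[step++];
}

const tools = { calculator: { execute: async () => "4" } }; // canned result

answerQuestion("Question: What is 2 + 2?\nThought:", makeMockLLM(), tools)
  .then(answer => console.log(answer)); // → 4
```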
In November 2023 OpenAI released an API called OpenAI Assistants that allows you to build assistants that can perform tasks in the real world. The Assistants API and LangChain are functionally similar. An advantage of the Assistants API is that memory and the context window are automatically managed, whereas in LangChain (at this time) you have to set those up explicitly.
- Introduction to OpenAI Assistants API with Node.js based on a video of Merwin Praison
- How to upload files using OpenAI Assistants API based on a video of Ralf Elving
See /docs/how-many-five-years.md
See /docs/highest-temperature.md
- Re-implementing LangChain in 100 lines of code by Colin Eberhardt (May 2023)
- The ReAct model was introduced by Google in ReAct: Synergizing Reasoning and Acting in Language Models (March 2023)
- ReAct: Synergizing Reasoning and Acting in Language Models
- Brex's Prompt Engineering Guide GitHub repo
- Xavier Amatriain's blog post "Prompt Engineering 201: Advanced methods and toolkits", which covers the ReAct model and knowledge-augmented language models
- Prompt Engineering with OpenAI's GPT-3 and other LLMs by James Briggs via YouTube