πŸ¦œοΈπŸ”— LangChain-mini

This is a very simple re-implementation of LangChain, in ~100 lines of code. In essence, it is an LLM (GPT-3.5) powered chat application that is able to use tools (Google search and a calculator) in order to hold conversations and answer questions.

This is not intended to be a replacement for LangChain; instead, it was built for fun and educational purposes. If you're interested in how LangChain and similar tools work, this is a good starting point.

For more information about this project, read

  1. Re-implementing LangChain in 100 lines of code by Colin Eberhardt (May 2023). This repo is a fork of his langchain-mini.
  2. Prompt Engineering 101 tutorial. Ask the author for access.

Disclaimer

This Course (the "Course") includes content generated by AI systems and is provided for educational purposes only.

The information and recommendations provided in the Course by AI systems are based on algorithms, data analysis, and/or large language models. ALL AI GENERATED CONTENT SHOULD BE REVIEWED CAREFULLY. It is the user's responsibility to independently verify the AI generated content and exercise their own discretion and critical thinking in regard to using such information. While efforts have been made to ensure the accuracy and reliability of the AI generated content, the organizers MAKE NO REPRESENTATIONS AS TO THE ACCURACY, COMPLETENESS, VALIDITY, OR SUITABILITY OF ANY AI GENERATED INFORMATION IN THIS COURSE AND WILL NOT BE LIABLE FOR ANY ERRORS, OMISSIONS, OR DELAYS IN THIS INFORMATION OR ANY INJURIES, LOSS, OR DAMAGES ARISING FROM ITS USE. All information is provided on an as-is basis. Nothing contained in the Course or the Course materials constitutes a solicitation, recommendation, or endorsement by the organizers of a particular AI model or AI generated content.

Many parts of this course use the OpenAI (ChatGPT) API. If you exceed the free tier and want to continue using the API, one option is to provide a credit card to OpenAI and pay for your usage.

Setup

This project requires Node.js >= v18. Select a suitable version, for example with nvm:

➜  langchain-mini git:(main) nvm use v20
Now using node v20.5.0 (npm v9.8.0)

Install dependencies:

% npm install

You'll need both an OpenAI API key and a SerpApi key. These can be supplied to the application via a .env file:

OPENAI_API_KEY="..."
SERPAPI_API_KEY="..."
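
These keys are then read from the environment at runtime. A minimal sketch, assuming the dotenv package is used to load the .env file (the exact loading code in this repo may differ):

import dotenv from "dotenv";
dotenv.config(); // copies OPENAI_API_KEY and SERPAPI_API_KEY from .env into process.env

// the keys are then available wherever the OpenAI and SerpApi calls are made
const openAIKey = process.env.OPENAI_API_KEY;
const serpApiKey = process.env.SERPAPI_API_KEY;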

How the Google Search API works

See the notes at /serpapi/README.md for details.
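
For orientation, a search tool in this style typically just calls SerpApi's JSON endpoint and returns the answer box. A minimal sketch, assuming the standard https://serpapi.com/search endpoint and its answer_box response field (see /serpapi/README.md for how this repo actually does it):

const googleSearch = async (question) =>
  await fetch(
    `https://serpapi.com/search?api_key=${process.env.SERPAPI_API_KEY}&q=${question}`
  )
    .then((res) => res.json())
    // prefer a direct answer; fall back to the answer-box snippet
    .then((res) => res.answer_box?.answer || res.answer_box?.snippet);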

The Calculator tool

The most important thing about the calculator is that it has to provide an appropriate answer to the LLM when there are errors. If evaluation fails, the returned string asks the LLM: Please reformulate the expression. The calculator tool has failed with error:\n'${e}'.

import { Parser } from "expr-eval";
// `blue` is a console colour helper imported elsewhere in the file (e.g. from a chalk-style library)
const calculator = (input) => {
  try {
    // evaluate the arithmetic expression the LLM supplied as Action Input
    let answer = Parser.evaluate(input).toString();
    console.log(blue(`Calculator answer: ${answer}\n***********`));
    return answer;
  } catch (e) {
    // feed the error back to the LLM so it can reformulate the expression
    console.log(blue(`Calculator got errors: ${e}\n***********`));
    return `Please reformulate the expression. The calculator tool has failed with error:\n'${e}'`;
  }
};
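
For context, each tool is exposed to the agent through a tools map pairing a description (shown to the LLM in the prompt) with an execute function, as the prompt-rendering and dispatch code below implies. A minimal sketch of that registration, with illustrative descriptions (the actual map in this repo may differ):

const tools = {
  search: {
    description:
      "a search engine. Useful for answering questions about current events. Input should be a search query.",
    execute: googleSearch,
  },
  calculator: {
    description:
      "useful for getting the result of a math expression. The input should be a valid mathematical expression.",
    execute: calculator,
  },
};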

The prompt template

The assets/templates/prompt.txt file is a template from which we build the instructions for the LLM at each step of the chat. The LLM's answers to the previous questions are appended to the template as Thoughts. The results of the tool calls are added to the template as Observations.

Answer the following questions as best you can. You have access to the following tools:

{{tools}}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{{toolnames}}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {{question}}
Thought:

Notice the Thought: field at the end. This is where the program will append the LLM response.

The template is first filled inside the answerQuestion function with the question and the available tools:

import fs from "fs";
// `render` fills the {{...}} placeholders in the template (e.g. a Mustache-style helper defined or imported elsewhere in the file)
const promptTemplate = fs.readFileSync("assets/templates/prompt.txt", "utf8");
/* ... */
let prompt = render(promptTemplate, {
  question,
  // one "name: description" line per tool, listed under "You have access to the following tools:"
  tools: Object.keys(tools)
    .map((toolname) => `${toolname}: ${tools[toolname].description}`)
    .join("\n"),
  // comma-separated tool names for the "Action: ... should be one of [...]" instruction
  toolnames: Object.keys(tools).join(","),
});
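
The render helper itself needs very little machinery. A minimal sketch of a {{placeholder}} substitution function is shown below (this is an assumption for illustration; the repo may use a template library instead):

// replace every {{key}} placeholder in the template with the matching value
const render = (template, values) =>
  template.replace(/{{(\w+)}}/g, (_, key) => values[key] ?? "");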

The Reason-Action (ReAct) loop

Then we want to iteratively:

  1. ask the LLM to give us a thought, that is, to decide which tool to use (or no tool at all)

     const response = await completePrompt(prompt);
     prompt += response;

     Since the prompt ends with the Thought: field, the LLM response is appended as a Thought. The LLM will also have filled in the Action: and Action Input: entries. We then extract the values of the action and actionInput fields from the LLM response using regular expressions:

     const action = response.match(/Action: (.*)/)?.[1];
     if (action) {
       // execute the action specified by the LLMs
       const actionInput = response.match(/Action Input: "?(.*)"?/)?.[1];
       ...
     }
  2. execute the corresponding tool (calculator or search) based on the given action, supplying it with the Action Input,

const result = await tools[action.trim().toLowerCase()].execute(actionInput);
  3. append the result of the tool call to the prompt as an Observation:

        prompt += `Observation: ${result}\n`;
  4. repeat this process until the LLM determines that it has enough information and returns a Final Answer (the complete loop is sketched after this list).

     const action = response.match(/Action: (.*)/)?.[1];
     if (action) {
       ...
     } else {
       return response.match(/Final Answer: (.*)/)?.[1]; // The text after the colon
     } 
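
Putting these steps together, the whole loop could look roughly like the sketch below. It is assembled from the snippets above; completePrompt, tools, render, and promptTemplate are the names used earlier, while the surrounding answerQuestion structure is an assumption rather than the repo's exact code:

const answerQuestion = async (question) => {
  // build the initial prompt from the template, the question and the tool list
  let prompt = render(promptTemplate, {
    question,
    tools: Object.keys(tools)
      .map((toolname) => `${toolname}: ${tools[toolname].description}`)
      .join("\n"),
    toolnames: Object.keys(tools).join(","),
  });

  // ReAct loop: Thought -> Action -> Observation, repeated until a Final Answer appears
  while (true) {
    const response = await completePrompt(prompt); // 1. ask the LLM for the next Thought
    prompt += response;

    const action = response.match(/Action: (.*)/)?.[1];
    if (action) {
      // 2. run the chosen tool with the Action Input the LLM provided
      const actionInput = response.match(/Action Input: "?(.*)"?/)?.[1];
      const result = await tools[action.trim().toLowerCase()].execute(actionInput);
      // 3. feed the tool result back to the LLM as an Observation
      prompt += `Observation: ${result}\n`;
    } else {
      // 4. no Action means the LLM has produced its Final Answer
      return response.match(/Final Answer: (.*)/)?.[1];
    }
  }
};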

OpenAI Assistant API

In November 2023 OpenAI released an API called OpenAI Assistants that allows you to build assistants that can perform tasks in the real world. The Assistants API and LangChain are functionally similar. An advantage of the Assistants API is that memory and the context window are managed automatically, whereas in LangChain - at this time - you have to set those things up explicitly.

Tracing the Agent model "How many five year periods are in the current year? Be accurate!"

See /docs/how-many-five-years.md

Tracing "What was the highest temperature (in Celsius) in Santa Cruz de Tenerife yesterday?"

See /docs/highest-temperature.md

Tracing a Chat about "the current president of Poland"

See /docs/president-chat.md

Tracing a chat about the Rubik's Cube

See /docs/rubiks-cube.md

References
