llama-index-express

Overview

This is a LlamaIndex project bootstrapped with create-llama and adapted to include OpenInference instrumentation for OpenAI calls.

This example exports span data simultaneously to the console and to arize-phoenix; however, you can run your code anywhere and use any exporter that OpenTelemetry supports.
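As an illustration of how dual export can be wired up, the sketch below registers two span processors (console plus OTLP to Phoenix) and the OpenInference OpenAI instrumentation. It is a minimal sketch, not this repo's exact setup: the file name, the OpenTelemetry 1.x `addSpanProcessor` API, and the Phoenix endpoint http://localhost:6006/v1/traces are assumptions.

```typescript
// instrumentation.ts — illustrative sketch only, not the project's actual file.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ConsoleSpanExporter, SimpleSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const provider = new NodeTracerProvider();

// 1. Print every span to the console for quick local debugging.
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

// 2. Also ship spans to a local Phoenix collector over OTLP/HTTP
//    (endpoint assumed; adjust to wherever Phoenix is listening).
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new OTLPTraceExporter({ url: "http://localhost:6006/v1/traces" })
  )
);

provider.register();

// Patch the openai client so each completion/chat call emits a span.
registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation()],
});
```

Because the exporters are just span processors on the provider, swapping Phoenix for another OTLP-compatible backend only means changing the exporter URL.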

Getting Started With Local Development

First, start up the backend as described in the backend README.

Second, run the frontend development server as described in the frontend README.

Open http://localhost:3000 with your browser to see the result.

Getting Started With Docker-Compose

Copy the .env.example file to .env and set your OPENAI_API_KEY.
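For reference, a minimal .env might look like the following; only the key used here is shown, and your .env.example may define additional variables:

```
OPENAI_API_KEY=<your OpenAI API key>
```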

Ensure that Docker is installed and running. Run the command docker compose up to spin up services for the frontend, backend, and Phoenix. Once those services are running, open http://localhost:3000 to use the chat interface and http://localhost:6006 to view the Phoenix UI. When you're finished, run docker compose down to spin down the services.

Learn More

To learn more about LlamaIndex, take a look at the LlamaIndexTS GitHub repository - your feedback and contributions are welcome!