
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.


LLM App


Pathway's LLM (Large Language Model) App Templates let you quickly put AI applications into production, offering high-accuracy RAG and enterprise AI search at scale using the most up-to-date knowledge available in your data sources. You can test them on your own machine and deploy them in the cloud (GCP, AWS, Azure, Render, ...) or on-premises.

The apps connect to and sync with data sources on your file system, Google Drive, Sharepoint, S3, Kafka, PostgreSQL, and real-time data APIs, picking up all new additions, deletions, and updates. They come with no infrastructure dependencies that would need a separate setup. They include built-in data indexing enabling vector search, hybrid search, and full-text search, all done in-memory, with cache.
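As a rough illustration of how a template wires up a live data source, here is a minimal sketch using Pathway's file-system connector. The exact connectors and arguments vary by template and Pathway version, so treat the parameters below as indicative and refer to each template's code for the tested configuration.

```python
import pathway as pw

# Watch a local folder; additions, edits, and deletions are picked up
# automatically and flow through the rest of the pipeline.
documents = pw.io.fs.read(
    "./data",
    format="binary",
    with_metadata=True,
)

# Other connectors (Google Drive, SharePoint, S3, Kafka, ...) follow the same
# pattern, for example (illustrative arguments, not a tested configuration):
# drive_documents = pw.io.gdrive.read(
#     object_id="<GOOGLE_DRIVE_FOLDER_ID>",
#     service_user_credentials_file="credentials.json",
# )
```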

Application Templates

The application templates provided in this repo scale up to millions of pages of documents. Some are optimized for simplicity, others for maximum accuracy. Pick the one that suits you best. You can use it out of the box, or change individual steps of the pipeline - for example, if you would like to add a new data source, or swap a Vector Index for a Hybrid Index, it's just a one-line change (see the sketch after the table below).

Question-Answering RAG App: Basic end-to-end RAG app. A question-answering pipeline that uses the GPT model of your choice to answer queries about your documents (PDF, DOCX, ...) on a live connected data source (files, Google Drive, Sharepoint, ...). You can also try out a demo REST endpoint.

Live Document Indexing (Vector Store / Retriever): A real-time document indexing pipeline for RAG that acts as a vector store service. It performs live indexing on your documents (PDF, DOCX, ...) from a connected data source (files, Google Drive, Sharepoint, ...). It can be used with any frontend, or integrated as a retriever backend for a LangChain or LlamaIndex application. You can also try out a demo REST endpoint.

Multimodal RAG pipeline with GPT-4o: Multimodal RAG using GPT-4o in the parsing stage to index PDFs and other documents from a connected data source (files, Google Drive, Sharepoint, ...). It is perfect for extracting information from unstructured financial documents in your folders (including charts and tables), updating results as documents change or new ones arrive.

Unstructured-to-SQL pipeline + SQL question-answering: A RAG example that connects to unstructured financial data sources (financial report PDFs), structures the data into SQL, and loads it into a PostgreSQL table. It also answers natural-language user queries about these financial documents by translating them into SQL with an LLM and executing the query on the PostgreSQL table.

Adaptive RAG App: A RAG application using Adaptive RAG, a technique developed by Pathway to reduce token cost in RAG by up to 4x while maintaining accuracy.

Private RAG App with Mistral and Ollama: A fully private (local) version of the demo-question-answering RAG pipeline using Pathway, Mistral, and Ollama.

Slides AI Search App: An indexing pipeline for retrieving slides. It performs multimodal parsing of PowerPoint and PDF files and maintains a live index of your slides.
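To make the "one-line change" mentioned above concrete, here is a hedged sketch of swapping a plain vector index for a hybrid (vector + BM25 full-text) one. The class and parameter names below follow recent versions of Pathway's LLM xpack and may differ in your version; treat them as assumptions and check the template code or the Pathway documentation for the exact API.

```python
import pathway as pw
from pathway.stdlib.indexing import (
    BruteForceKnnFactory,
    HybridIndexFactory,
    TantivyBM25Factory,
)
from pathway.xpacks.llm.document_store import DocumentStore
from pathway.xpacks.llm.embedders import OpenAIEmbedder

# Live-synced documents from a connected folder (see the connector sketch above).
documents = pw.io.fs.read("./data", format="binary", with_metadata=True)
embedder = OpenAIEmbedder()

# Vector-only retrieval:
retriever_factory = BruteForceKnnFactory(embedder=embedder)

# The one-line change: combine the vector index with a BM25 full-text index.
retriever_factory = HybridIndexFactory(
    [BruteForceKnnFactory(embedder=embedder), TantivyBM25Factory()]
)

store = DocumentStore(documents, retriever_factory=retriever_factory)
```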

How do these LLM Apps work?

The apps can be run as Docker containers and expose an HTTP API for connecting a frontend. To allow quick testing and demos, some app templates also include an optional Streamlit UI which connects to this API.
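For example, once a question-answering template is running, you can query it over HTTP from Python. The port, endpoint path, and payload below are illustrative assumptions (they differ between templates), so check the chosen template's README for the exact API.

```python
import requests

# Ask the running pipeline a question over its REST API.
response = requests.post(
    "http://localhost:8000/v1/pw_ai_answer",  # endpoint path is template-specific
    json={"prompt": "What are the key takeaways from the latest report?"},
    timeout=60,
)
print(response.json())
```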

The apps rely on the Pathway framework for data source synchronization and for serving API requests (Pathway is a standalone Python library with a Rust engine built into it). They provide simple and unified application logic for the backend, embedding, retrieval, and LLM tech stack. There is no need to integrate and maintain separate modules for your Gen AI app: Vector Database (e.g. Pinecone/Weaviate/Qdrant) + Cache (e.g. Redis) + API Framework (e.g. FastAPI). Pathway's default built-in vector index is based on the lightning-fast usearch library, and the hybrid full-text indexes make use of the Tantivy library. Everything works out of the box.
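As a rough sketch of what this single-process setup looks like in code, the snippet below builds a live vector index and serves it over HTTP using classes from Pathway's LLM xpack. The class names and arguments are based on recent Pathway releases and may differ in your version; the templates in this repo remain the reference.

```python
import pathway as pw
from pathway.xpacks.llm.embedders import OpenAIEmbedder
from pathway.xpacks.llm.splitters import TokenCountSplitter
from pathway.xpacks.llm.vector_store import VectorStoreServer

# Live-synced documents from a connected folder.
documents = pw.io.fs.read("./data", format="binary", with_metadata=True)

# Splitting, embedding, indexing, caching, and the HTTP API live in one pipeline.
server = VectorStoreServer(
    documents,
    embedder=OpenAIEmbedder(),
    splitter=TokenCountSplitter(),
)
server.run_server(host="0.0.0.0", port=8000)
```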

Getting started

Each of the App templates in this repo contains a README.md with instructions on how to run it.

You can also find more ready-to-run code templates on the Pathway website.

Some visual highlights

Effortlessly extract and organize table and chart data from PDFs, docs, and more with multimodal RAG - in real time:


(Check out the Multimodal RAG pipeline with GPT-4o to see the whole pipeline in action. You may also check out the Unstructured-to-SQL pipeline for a minimal example which also works with non-multimodal models.)

Automated real-time knowledge mining and alerting:


(Check out the Alerting when answers change on Google Drive app example.)

Do-it-Yourself Videos

▶️ An introduction to building LLM apps with Pathway - by Jan Chorowski

▶️ Let's build a real-world LLM app in 11 minutes - by Pau Labarta Bajo

Troubleshooting

To provide feedback or report a bug, please raise an issue on our issue tracker.

Contributing

Anyone who wishes to contribute to this project - whether it's documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so. If this is your first contribution to a GitHub project, here is a Get Started Guide.

If you'd like to make a contribution that needs some more work, just raise your hand on the Pathway Discord server (#get-help) and let us know what you are planning!

Supported and maintained by

Pathway

See Pathway's offering for AI applications
