## Table of Contents
- Overview
- Why Host Your Own LLM?
- Structure
- Getting Started
- Components
- Usage
- Community
## Overview

LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped environments. This project aims to bring sophisticated AI solutions to air-gapped, resource-constrained environments by enabling the hosting of all requisite components of an AI stack.

Our services include vector databases, model backends, an API, and a UI. These capabilities can be easily accessed and integrated with your existing infrastructure, ensuring the power of AI can be harnessed irrespective of your environment's limitations.
## Why Host Your Own LLM?

Large Language Models (LLMs) are a powerful resource for AI-driven decision making, content generation, and more. How can LeapfrogAI bring AI to your mission?
- **Data Independence:** Sending sensitive information to a third-party service may not be suitable or permissible for all types of data or organizations. By hosting your own LLM, you retain full control over your data.
- **Scalability:** Pay-as-you-go AI services can become expensive, especially when large volumes of data are involved and require constant connectivity. Running your own LLM can often be a more cost-effective solution for missions of all sizes.
- **Mission Integration:** By hosting your own LLM, you have the ability to customize the model's parameters, training data, and more, tailoring the AI to your specific needs.
## Structure

The LeapfrogAI repository follows a monorepo structure based around an API, with each of the components included in a dedicated `packages` directory. Each of these package directories contains the source code for a component as well as its deployment infrastructure. The UDS bundles that handle the development and latest deployments of LeapfrogAI are in the `uds-bundles` directory. The structure looks as follows:
```
leapfrogai/
├── src/
│   ├── leapfrogai_api/       # source code for the API
│   ├── leapfrogai_sdk/       # source code for the SDK
│   └── leapfrogai_ui/        # source code for the UI
├── packages/
│   ├── api/                  # deployment infrastructure for the API
│   ├── llama-cpp-python/     # source code & deployment infrastructure for the llama-cpp-python backend
│   ├── repeater/             # source code & deployment infrastructure for the repeater model backend
│   ├── supabase/             # deployment infrastructure for the Supabase backend and postgres database
│   ├── text-embeddings/      # source code & deployment infrastructure for the text-embeddings backend
│   ├── ui/                   # deployment infrastructure for the UI
│   ├── vllm/                 # source code & deployment infrastructure for the vllm backend
│   └── whisper/              # source code & deployment infrastructure for the whisper backend
├── uds-bundles/
│   ├── dev/                  # uds bundles for local uds dev deployments
│   └── latest/               # uds bundles for the most current uds deployments
├── Makefile
├── pyproject.toml
├── README.md
└── ...
```
## Getting Started

The preferred method for running LeapfrogAI is a local Kubernetes deployment using UDS. Refer to the Quick Start section of the LeapfrogAI documentation site for instructions on this type of deployment.
## Components

LeapfrogAI provides an API that closely matches OpenAI's. This allows tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend.
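For example, the official `openai` Python client can be pointed at a LeapfrogAI deployment by overriding its base URL. This is a minimal sketch: the endpoint, API key, and model name below are placeholders for whatever your deployment exposes.

```python
# A minimal sketch of using the OpenAI Python client against LeapfrogAI.
# The base_url, api_key, and model name are assumptions; substitute the
# values from your own deployment.
import openai

client = openai.OpenAI(
    base_url="https://leapfrogai-api.uds.dev/openai/v1",  # your LeapfrogAI API endpoint
    api_key="my-leapfrogai-api-key",                      # issued by your deployment, not OpenAI
)

response = client.chat.completions.create(
    model="vllm",  # whichever backend/model your deployment serves
    messages=[{"role": "user", "content": "Summarize the mission briefing."}],
)
print(response.choices[0].message.content)
```

Because the request/response shapes match OpenAI's, existing integrations usually only need these two configuration changes to switch over.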
LeapfrogAI provides several backends for a variety of use cases.
Available Backends:
| Backend | AMD64 Support | ARM64 Support | CUDA Support | Docker Ready | K8s Ready | Zarf Ready |
| --- | --- | --- | --- | --- | --- | --- |
| llama-cpp-python | ✅ | 🚧 | ✅ | ✅ | ✅ | ✅ |
| whisper | ✅ | 🚧 | ✅ | ✅ | ✅ | ✅ |
| text-embeddings | ✅ | 🚧 | ✅ | ✅ | ✅ | ✅ |
| vllm | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
The LeapfrogAI SDK provides a standard set of Protobuf and Python utilities for implementing backends over gRPC.
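As a rough illustration, a completion backend built on the SDK looks something like the sketch below. The servicer base class, message types, and `serve()` helper shown here are assumptions about the SDK's generated gRPC interfaces, not a verbatim copy of them; consult `src/leapfrogai_sdk` for the actual names and signatures.

```python
# A sketch of a minimal completion backend on the LeapfrogAI SDK. The names
# used here (CompletionServiceServicer, CompletionRequest, CompletionChoice,
# CompletionResponse, serve) are assumptions about the SDK's gRPC
# interfaces; check src/leapfrogai_sdk for the real ones.
import asyncio

import leapfrogai_sdk as lfai


class Echo(lfai.CompletionServiceServicer):
    async def Complete(
        self, request: lfai.CompletionRequest, context
    ) -> lfai.CompletionResponse:
        # Wrap the incoming prompt in a single completion choice.
        choice = lfai.CompletionChoice(text=request.prompt, index=0)
        return lfai.CompletionResponse(choices=[choice])


if __name__ == "__main__":
    # Assumed helper that registers the servicer and runs the gRPC server.
    asyncio.run(lfai.serve(Echo()))
```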
LeapfrogAI provides a User Interface with support for common use cases such as chat, summarization, and transcription.
The repeater "model" is a basic "backend" that parrots all inputs it receives back to the user. It is built out the same way all the actual backends are, and it is primarily used for testing the API.
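Because the repeater simply echoes its input, a test against it can assert that a completion round-trips unchanged. A hedged sketch, assuming an OpenAI-compatible deployment that serves the repeater under the model name `repeater`:

```python
# Hypothetical API smoke test using the repeater backend: the completion
# text should come back identical to the prompt. The endpoint, key, and
# model name are placeholders for your deployment.
import openai

client = openai.OpenAI(
    base_url="https://leapfrogai-api.uds.dev/openai/v1",
    api_key="my-leapfrogai-api-key",
)

prompt = "This prompt should come back unchanged."
response = client.completions.create(model="repeater", prompt=prompt)
assert response.choices[0].text == prompt
```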
LeapfrogAI leverages Chainguard's apko to harden base Python images, pinning the Python version to the latest version supported by the other components of the LeapfrogAI stack.
## Usage

LeapfrogAI can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. See the Quick Start for a list of prerequisite packages that must be installed first.
Prior to deploying any LeapfrogAI packages, a UDS Kubernetes cluster must be deployed using the most recent k3d bundle:
```shell
make create-uds-cpu-cluster
```
This type of deployment pulls the most recent package images and is the most stable way of running a local LeapfrogAI deployment. These instructions can be found on the LeapfrogAI Docs site.
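Once the bundle is up, one quick sanity check is to ask the API which models it is serving. This sketch assumes the API is reachable at the hypothetical URL below and that you have an API key from your deployment:

```python
# Hypothetical post-deployment check: list the models the LeapfrogAI API
# serves. The base URL and key are placeholders for your cluster's values.
import openai

client = openai.OpenAI(
    base_url="https://leapfrogai-api.uds.dev/openai/v1",
    api_key="my-leapfrogai-api-key",
)

for model in client.models.list():
    print(model.id)
```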
If you want to make some changes to LeapfrogAI before deploying via UDS (for example in a dev environment), follow the UDS Dev Instructions.
Each of the LFAI components can also be run individually outside of a Kubernetes environment via local development. This is useful when testing changes to a specific component, but will not assist in a full deployment of LeapfrogAI. Please refer to the above sections for deployment instructions.
Please refer to the linked READMEs for each individual package's local development instructions:
## Community

LeapfrogAI is supported by a community of users and contributors, including:
- Defense Unicorns
- Beast Code
- Chainguard
- Exovera
- Hypergiant
- Pulze
- SOSi
- United States Navy
- United States Air Force
- United States Space Force
Want to add your organization or logo to this list? Open a PR!