
English | 简体中文 | 日本語

Latest release: v0.5.0 · docker pull infiniflow/ragflow:v0.5.0

💡 What is RAGFlow?

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from data in various complex formats.

🌟 Key Features

🍭 "Quality in, quality out"

  • Knowledge extraction from unstructured data with complicated formats, based on deep document understanding.
  • Finds the "needle in a data haystack" across documents of virtually unlimited tokens.

🍱 Template-based chunking

  • Intelligent and explainable.
  • Plenty of template options to choose from.

🌱 Grounded citations with reduced hallucinations

  • Visualization of text chunking to allow human intervention.
  • Quick view of the key references and traceable citations to support grounded answers.

🍔 Compatibility with heterogeneous data sources

  • Supports Word documents, slides, Excel spreadsheets, TXT files, images, scanned copies, structured data, web pages, and more.

🛀 Automated and effortless RAG workflow

  • Streamlined RAG orchestration that caters to both personal use and large businesses.
  • Configurable LLMs as well as embedding models.
  • Multiple recall paths paired with fused re-ranking.
  • Intuitive APIs for seamless integration with business applications.

📌 Latest Features

  • 2024-05-08 Integrates LLM DeepSeek-V2.
  • 2024-04-26 Adds file management.
  • 2024-04-19 Supports conversation API (detail).
  • 2024-04-16 Integrates the embedding model 'bce-embedding-base_v1' from BCEmbedding, as well as FastEmbed, which is designed for light and fast embedding.
  • 2024-04-11 Supports Xinference for local LLM deployment.
  • 2024-04-10 Adds a new layout recognition model for analyzing legal documents.
  • 2024-04-08 Supports Ollama for local LLM deployment.
  • 2024-04-07 Supports Chinese UI.

🔎 System Architecture

🎬 Get Started

📝 Prerequisites

  • CPU >= 4 cores
  • RAM >= 16 GB
  • Disk >= 50 GB
  • Docker >= 24.0.0 & Docker Compose >= v2.26.1

    If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.

🚀 Start up the server

  1. Ensure vm.max_map_count >= 262144:

    To check the value of vm.max_map_count:

    $ sysctl vm.max_map_count

    If it is not, reset vm.max_map_count to a value of at least 262144.

    # In this case, we set it to 262144:
    $ sudo sysctl -w vm.max_map_count=262144

    This change will be reset after a system reboot. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:

    vm.max_map_count=262144
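
    One way to make the change persistent right away, assuming the entry is not already present in /etc/sysctl.conf (a minimal sketch):

    $ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
    $ sudo sysctl -p
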
  2. Clone the repo:

    $ git clone https://github.com/infiniflow/ragflow.git
  3. Pull the pre-built Docker images and start up the server:

    Running the following commands automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific release instead, update RAGFLOW_VERSION in docker/.env to the intended version, for example RAGFLOW_VERSION=v0.5.0, before running the commands below.
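
    For example, to pin that release before starting the containers (a minimal sketch assuming GNU sed and that you run it from the directory where you cloned the repo; editing docker/.env by hand works just as well):

    $ sed -i 's/^RAGFLOW_VERSION=.*/RAGFLOW_VERSION=v0.5.0/' ragflow/docker/.env
    $ grep RAGFLOW_VERSION ragflow/docker/.env   # verify the pinned version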

    $ cd ragflow/docker
    $ chmod +x ./entrypoint.sh
    $ docker compose up -d

    The core image is about 9 GB in size and may take a while to download.

  4. Check the server status after it is up and running:

    $ docker logs -f ragflow-server

    The following output confirms a successful launch of the system:

        ____                 ______ __
       / __ \ ____ _ ____ _ / ____// /____  _      __
      / /_/ // __ `// __ `// /_   / // __ \| | /| / /
     / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
    /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
                  /____/
    
     * Running on all addresses (0.0.0.0)
     * Running on http://127.0.0.1:9380
     * Running on http://x.x.x.x:9380
     INFO:werkzeug:Press CTRL+C to quit

    If you skip this confirmation step and log in to RAGFlow directly, your browser may report a network error because RAGFlow may not be fully initialized at that point.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

    With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE: the default HTTP serving port 80 can be omitted from the address.

  6. In service_conf.yaml, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

    See ./docs/llm_api_key_setup.md for more information.
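
    An illustrative sketch of what that section might look like (the factory name and key are placeholders, and the exact field names should follow what is already in your service_conf.yaml):

    user_default_llm:
      factory: 'OpenAI'   # the LLM factory you want as the default
      api_key: 'sk-xxxx'  # replace with your own API key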

    The show is now on!

🔧 Configurations

When it comes to system configurations, you will need to manage the following files:

  • docker/.env: keeps the environment settings for the Docker deployment.
  • service_conf.yaml: configures the back-end services.
  • docker-compose.yml: defines how the system starts up, including the exposed ports.

You must ensure that changes to the .env file are consistent with what is in the service_conf.yaml file.

The ./docker/README file provides a detailed description of the environment settings and service configurations. You are REQUIRED to keep every environment setting listed in ./docker/README aligned with the corresponding configuration in the service_conf.yaml file.

To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.
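
For example, to serve RAGFlow on port 8080 instead, the mapping in docker-compose.yml would become (an illustrative excerpt; the rest of the service definition stays unchanged):

ports:
  - 8080:80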

Updates to the system configurations take effect only after the Docker containers are restarted:

$ docker-compose up -d

🛠️ Build from source

To build the Docker images from source:

$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d

🛠️ Launch Service from Source

To launch the service from source, please follow these steps:

  1. Clone the repository
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
  2. Create a virtual environment (ensure Anaconda or Miniconda is installed)
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt

If your CUDA version is greater than 12.0, run the following additional commands:

$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
  3. Copy the entry script and configure environment variables
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh

Use the following commands to obtain the Python path and the ragflow project path:

$ which python
$ pwd

Set the output of which python as the value for PY and the output of pwd as the value for PYTHONPATH.

If LD_LIBRARY_PATH is already configured, it can be commented out.

# Adjust configurations according to your actual situation; the two export commands are newly added.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: Add Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com
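
For example, if which python prints /root/miniconda3/envs/ragflow/bin/python and pwd prints /root/ragflow (both paths are purely illustrative; substitute your own output), the two lines become:

PY=/root/miniconda3/envs/ragflow/bin/python
export PYTHONPATH=/root/ragflow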
  4. Start the base services
$ cd docker
$ docker compose -f docker-compose-base.yml up -d 
  5. Check the configuration files: ensure that the settings in docker/.env match those in conf/service_conf.yaml. The IP addresses and ports for related services in service_conf.yaml should be changed to the local machine's IP address and the ports exposed by the containers.
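
One way to cross-check the exposed ports (a sketch assuming you are still in the docker/ directory from the previous step):

$ docker compose -f docker-compose-base.yml ps   # lists the host ports each base service exposes
$ cat .env                                       # compare these values with ../conf/service_conf.yaml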

  6. Launch the service

$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
  7. Start the WebUI service
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Modify proxy.target to 127.0.0.1:9380
$ npm run dev 
  8. Deploy the WebUI service
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx

📚 Documentation

📜 Roadmap

See the RAGFlow Roadmap 2024

🏄 Community

🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part of it, please review our Contribution Guidelines first.
