Reorganized docs for docusaurus publish #860

Merged · 6 commits · May 21, 2024
14 changes: 7 additions & 7 deletions README.md
@@ -34,11 +34,11 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek-V2.
- 2024-04-26 Adds file management.
- 2024-04-19 Supports conversation API ([detail](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([detail](./docs/references/api.md)).
- 2024-04-16 Integrates an embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding), and [FastEmbed](https://github.com/qdrant/fastembed), which is designed specifically for light and speedy embedding.
- 2024-04-11 Supports [Xinference](./docs/xinference.md) for local LLM deployment.
- 2024-04-11 Supports [Xinference](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-10 Adds a new layout recognition model for analyzing legal documents.
- 2024-04-08 Supports [Ollama](./docs/ollama.md) for local LLM deployment.
- 2024-04-08 Supports [Ollama](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-07 Supports Chinese UI.

## 🌟 Key Features
@@ -87,7 +87,7 @@

### 🚀 Start up the server

1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/guides/max_map_count.md)):

> To check the value of `vm.max_map_count`:
>
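The commands elided from this hunk can be sketched as follows; this assumes a Linux host and is an illustration, not the exact README text:

```shell
# Read the current value; RAGFlow's Elasticsearch dependency needs >= 262144.
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge 262144 ]; then
    echo "vm.max_map_count=$current is sufficient"
else
    # sysctl -w takes effect immediately but resets on reboot;
    # persist the setting in /etc/sysctl.conf to survive restarts.
    echo "vm.max_map_count=$current is too low; run: sudo sysctl -w vm.max_map_count=262144"
fi
```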
@@ -154,7 +154,7 @@
> With the default configurations, you only need to enter `http://IP_OF_YOUR_MACHINE` (**sans** port number): the default HTTP serving port `80` can be omitted.
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for more information.
> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for more information.

_The show is now on!_

@@ -277,7 +277,7 @@ $ systemctl start nginx
## 📚 Documentation

- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)

## 📜 Roadmap

@@ -290,4 +290,4 @@ See the [RAGFlow Roadmap 2024](https://github.com/infiniflow/ragflow/issues/162)

## 🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
14 changes: 7 additions & 7 deletions README_ja.md
@@ -34,12 +34,12 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates LLM DeepSeek-V2.
- 2024-04-26 Adds the file management feature.
- 2024-04-19 Supports conversation API ([detail](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([detail](./docs/references/api.md)).
- 2024-04-16 Adds the embedding model 'bce-embedding-base_v1' from [BCEmbedding](https://github.com/netease-youdao/BCEmbedding).
- 2024-04-16 Integrates [FastEmbed](https://github.com/qdrant/fastembed), designed for light and speedy embedding.
- 2024-04-11 Supports [Xinference](./docs/xinference.md) for local LLM deployment.
- 2024-04-11 Supports [Xinference](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-10 Adds a new layout recognition model to the 'Laws' method.
- 2024-04-08 Supports [Ollama](./docs/ollama.md) for local LLM deployment.
- 2024-04-08 Supports [Ollama](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-07 Supports Chinese UI.


@@ -89,7 +89,7 @@

### 🚀 Start up the server

1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` >= 262144 ([more](./docs/guides/max_map_count.md)):

> To check the value of `vm.max_map_count`:
>
@@ -155,7 +155,7 @@
> With the default configurations, you only need to enter `http://IP_OF_YOUR_MACHINE` (port number omitted): the default HTTP serving port `80` can be left out.
6. In [service_conf.yaml](./docker/service_conf.yaml), select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for more information.
> 詳しくは [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) を参照してください。

_Initial setup complete. The show is now on!_

@@ -255,7 +255,7 @@ $ bash ./entrypoint.sh
## 📚 Documentation

- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)

## 📜 Roadmap

@@ -268,4 +268,4 @@ $ bash ./entrypoint.sh

## 🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to take part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to take part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.
14 changes: 7 additions & 7 deletions README_zh.md
@@ -34,11 +34,11 @@
- 2024-05-15 Integrates OpenAI GPT-4o.
- 2024-05-08 Integrates DeepSeek.
- 2024-04-26 Adds the file management feature.
- 2024-04-19 Supports conversation API ([more](./docs/conversation_api.md)).
- 2024-04-19 Supports conversation API ([more](./docs/references/api.md)).
- 2024-04-16 Integrates the embedding model [BCEmbedding](https://github.com/netease-youdao/BCEmbedding) and [FastEmbed](https://github.com/qdrant/fastembed), which is designed for light and speedy embedding.
- 2024-04-11 Supports [Xinference](./docs/xinference.md) for local LLM deployment.
- 2024-04-11 Supports [Xinference](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-10 Adds an underlying model for 'Laws' layout analysis.
- 2024-04-08 Supports [Ollama](./docs/ollama.md) for local LLM deployment.
- 2024-04-08 Supports [Ollama](./docs/guides/deploy_local_llm.md) for local LLM deployment.
- 2024-04-07 Supports Chinese UI.

## 🌟 Key Features
@@ -87,7 +87,7 @@

### 🚀 Start up the server

1. Ensure `vm.max_map_count` is no less than 262144 ([more](./docs/max_map_count.md)):
1. Ensure `vm.max_map_count` is no less than 262144 ([more](./docs/guides/max_map_count.md)):

> To check the value of `vm.max_map_count`:
>
@@ -153,7 +153,7 @@
> In the example above, you only need to enter http://IP_OF_YOUR_MACHINE: with unmodified configurations, the port can be omitted (the default HTTP serving port is 80).
6. In the `user_default_llm` section of [service_conf.yaml](./docker/service_conf.yaml), configure the LLM factory and fill in the `API_KEY` field with the API key for your chosen model.

> See [./docs/llm_api_key_setup.md](./docs/llm_api_key_setup.md) for details.
> See [./docs/guides/llm_api_key_setup.md](./docs/guides/llm_api_key_setup.md) for details.

_The show is now on!_

@@ -274,7 +274,7 @@ $ systemctl start nginx
## 📚 Documentation

- [Quickstart](./docs/quickstart.md)
- [FAQ](./docs/faq.md)
- [FAQ](./docs/references/faq.md)

## 📜 Roadmap

@@ -287,7 +287,7 @@ $ systemctl start nginx

## 🙌 Contributing

RAGFlow flourishes only through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to take part, review our [Contribution Guidelines](https://github.com/infiniflow/ragflow/blob/main/docs/CONTRIBUTING.md) first.
RAGFlow flourishes only through open-source collaboration. In this spirit, we welcome diverse contributions from the community. If you would like to take part, review our [Contribution Guidelines](./docs/references/CONTRIBUTING.md) first.

## 👥 Join the Community

8 changes: 8 additions & 0 deletions docs/_category_.json
@@ -0,0 +1,8 @@
{
"label": "Get Started",
"position": 1,
"link": {
"type": "generated-index",
"description": "RAGFlow Quick Start"
}
}
8 changes: 8 additions & 0 deletions docs/guides/_category_.json
@@ -0,0 +1,8 @@
{
"label": "User Guides",
"position": 2,
"link": {
"type": "generated-index",
"description": "RAGFlow User Guides"
}
}
@@ -1,3 +1,8 @@
---
sidebar_position: 1
slug: /configure_knowledge_base
---

# Configure a knowledge base

Knowledge base, hallucination-free chat, and file management are three pillars of RAGFlow. RAGFlow's AI chats are based on knowledge bases. Each of RAGFlow's knowledge bases serves as a knowledge source, *parsing* files uploaded from your local machine and file references generated in **File Management** into the real 'knowledge' for future AI chats. This guide demonstrates some basic usages of the knowledge base feature, covering the following topics:
@@ -118,7 +123,7 @@ RAGFlow uses multiple recall of both full-text search and vector search in its c

## Search for knowledge base

As of RAGFlow v0.5.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.
As of RAGFlow v0.6.0, the search feature is still in a rudimentary form, supporting only knowledge base search by name.

![search knowledge base](https://github.com/infiniflow/ragflow/assets/93570324/836ae94c-2438-42be-879e-c7ad2a59693e)

75 changes: 75 additions & 0 deletions docs/guides/deploy_local_llm.md
@@ -0,0 +1,75 @@
---
sidebar_position: 5
slug: /deploy_local_llm
---

# Deploy a local LLM

RAGFlow supports deploying LLMs locally using Ollama or Xinference.

## Ollama

[Ollama](https://github.com/ollama/ollama) provides one-click deployment of local LLMs.

### Install

- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)

### Launch Ollama

Decide which LLM you want to deploy ([here is a list of supported LLMs](https://ollama.com/library)), say, **mistral**:
```bash
$ ollama run mistral
```
Or,
```bash
$ docker exec -it ollama ollama run mistral
```
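Once the model is running, you can sanity-check the HTTP endpoint that RAGFlow will talk to. This is a sketch: the host and port (`11434` is Ollama's default) are assumptions to adjust for your setup.

```shell
# Minimal request body for Ollama's /api/generate endpoint.
payload='{"model": "mistral", "prompt": "Hello", "stream": false}'
echo "$payload"
# With Ollama running locally, the actual check would be:
# curl http://localhost:11434/api/generate -d "$payload"
```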

### Use Ollama in RAGFlow

- Go to 'Settings > Model Providers > Models to be added > Ollama'.

![](https://github.com/infiniflow/ragflow/assets/12318111/a9df198a-226d-4f30-b8d7-829f00256d46)

> Base URL: Enter the base URL where the Ollama service is accessible, e.g. `http://<your-ollama-endpoint-domain>:11434`.

- Use Ollama Models.

![](https://github.com/infiniflow/ragflow/assets/12318111/60ff384e-5013-41ff-a573-9a543d237fd3)

## Xinference

Xorbits Inference ([Xinference](https://github.com/xorbitsai/inference)) empowers you to unleash the full potential of cutting-edge AI models.

### Install

- [pip install "xinference[all]"](https://inference.readthedocs.io/en/latest/getting_started/installation.html)
- [Docker](https://inference.readthedocs.io/en/latest/getting_started/using_docker_image.html)

To start a local instance of Xinference, run the following command:
```bash
$ xinference-local --host 0.0.0.0 --port 9997
```
### Launch Xinference

Decide which LLM you want to deploy ([here is a list of supported LLMs](https://inference.readthedocs.io/en/latest/models/builtin/)), say, **mistral**.
Run the following command to launch the model, replacing `${quantization}` with your chosen quantization method:
```bash
$ xinference launch -u mistral --model-name mistral-v0.1 --size-in-billions 7 --model-format pytorch --quantization ${quantization}
```
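Before wiring Xinference into RAGFlow, it helps to confirm the server answers on its OpenAI-compatible route. The URL below assumes the default host and port from the `xinference-local` command above; adjust as needed:

```shell
# Base URL RAGFlow will use, plus the model-listing route for a quick check.
base_url="http://localhost:9997/v1"
echo "Model list endpoint: ${base_url}/models"
# With Xinference running, the actual check would be:
# curl "${base_url}/models"
```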

### Use Xinference in RAGFlow

- Go to 'Settings > Model Providers > Models to be added > Xinference'.

![](https://github.com/infiniflow/ragflow/assets/12318111/bcbf4d7a-ade6-44c7-ad5f-0a92c8a73789)

> Base URL: Enter the base URL where the Xinference service is accessible, e.g. `http://<your-xinference-endpoint-domain>:9997/v1`.

- Use Xinference Models.

![](https://github.com/infiniflow/ragflow/assets/12318111/b01fcb6f-47c9-4777-82e0-f1e947ed615a)
![](https://github.com/infiniflow/ragflow/assets/12318111/1763dcd1-044f-438d-badd-9729f5b3a144)
44 changes: 25 additions & 19 deletions docs/llm_api_key_setup.md → docs/guides/llm_api_key_setup.md
@@ -1,19 +1,25 @@

## Set Before Starting The System

In **user_default_llm** of [service_conf.yaml](./docker/service_conf.yaml), you need to specify LLM factory and your own _API_KEY_.
RagFlow supports the flowing LLM factory, and with more coming in the pipeline:

> [OpenAI](https://platform.openai.com/login?launch), [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
> [ZHIPU-AI](https://open.bigmodel.cn/), [Moonshot](https://platform.moonshot.cn/docs)

After sign in these LLM suppliers, create your own API-Key, they all have a certain amount of free quota.

## After Starting The System

You can also set API-Key in **User Setting** as following:

<div align="center" style="margin-top:20px;margin-bottom:20px;">
<img src="https://github.com/infiniflow/ragflow/assets/12318111/e4e4066c-e964-45ff-bd56-c3fc7fb18bd3" width="1000"/>
</div>

---
sidebar_position: 4
slug: /llm_api_key_setup
---

# Set your LLM API key

You have two ways to input your LLM API key.

## Before Starting The System

In **user_default_llm** of [service_conf.yaml](./docker/service_conf.yaml), specify the LLM factory and your own _API_KEY_.
RAGFlow supports the following LLM factories, with more in the pipeline:

> [OpenAI](https://platform.openai.com/login?launch), [Tongyi-Qianwen](https://dashscope.console.aliyun.com/model),
> [ZHIPU-AI](https://open.bigmodel.cn/), [Moonshot](https://platform.moonshot.cn/docs)

After signing up with these LLM suppliers, create your own API key; they all offer a certain amount of free quota.
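As a rough illustration only, the `user_default_llm` fragment might look like the sketch below; the field names and values here are assumptions, so check the `service_conf.yaml` shipped with your version rather than copying this verbatim:

```yaml
user_default_llm:
  factory: "OpenAI"      # one of the supported LLM factories listed above
  api_key: "sk-xxxxxxxx" # placeholder; use your own key
```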

## After Starting The System

You can also set your API key in **User Setting** as follows:

![](https://github.com/infiniflow/ragflow/assets/12318111/e4e4066c-e964-45ff-bd56-c3fc7fb18bd3)

11 changes: 8 additions & 3 deletions docs/manage_files.md → docs/guides/manage_files.md
@@ -1,3 +1,8 @@
---
sidebar_position: 3
slug: /manage_files
---

# Manage files

Knowledge base, hallucination-free chat, and file management are three pillars of RAGFlow. RAGFlow's file management allows you to upload files individually or in bulk. You can then link an uploaded file to multiple target knowledge bases. This guide showcases some basic usages of the file management feature.
@@ -40,11 +45,11 @@ You can link your file to one knowledge base or multiple knowledge bases at one

## Move file to specified folder

As of RAGFlow v0.5.0, this feature is *not* available.
As of RAGFlow v0.6.0, this feature is *not* available.

## Search files or folders

As of RAGFlow v0.5.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).
As of RAGFlow v0.6.0, the search feature is still in a rudimentary form, supporting only file and folder search in the current directory by name (files or folders in the child directory will not be retrieved).

![search file](https://github.com/infiniflow/ragflow/assets/93570324/77ffc2e5-bd80-4ed1-841f-068e664efffe)

@@ -76,4 +81,4 @@ RAGFlow's file management allows you to download an uploaded file:

![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)

> As of RAGFlow v0.5.0, bulk download is not supported, nor can you download an entire folder.
> As of RAGFlow v0.6.0, bulk download is not supported, nor can you download an entire folder.