
feat: doc update for ten-agent playground #415

Merged 8 commits on Dec 17, 2024
8 changes: 8 additions & 0 deletions docs/SUMMARY.md
@@ -9,6 +9,14 @@

* [Overview](ten_agent/overview.md)
* [Getting Started](ten_agent/getting_started.md)
* [Run Demo](ten_agent/demo/quickstart.md)
* [Run Playground](ten_agent/playground/quickstart.md)
* [Configure Modules](ten_agent/playground/configure_modules.md)
* [Configure Properties](ten_agent/playground/configure_properties.md)
* [Run Voice Assistant](ten_agent/playground/run_va.md)
* [Run Voice Assistant with Realtime API](ten_agent/playground/run_va_realtime.md)
* [Run Dify Chat Bot](ten_agent/playground/run_dify.md)
* [Run Coze Chat Bot](ten_agent/playground/run_coze.md)
* [Customize TEN Agent](ten_agent/customize_your_agent.md)
* [Create a Hello World Extension](ten_agent/create_a_hello_world_extension.md)
* [Setup VSCode Inside Container](ten_agent/setting_up_vscode_for_development_inside_container.md)
33 changes: 33 additions & 0 deletions docs/ten_agent/playground/run_coze.md
@@ -0,0 +1,33 @@
# How to make Coze Chat Bot Speak

In this tutorial, we will show you how to make the Coze Bot speak in the TEN-Agent playground.

## Prerequisites

- Make sure you have the TEN-Agent playground running. If not, follow the [Run Playground](https://doc.theten.ai/ten-agent/quickstart) guide to start the playground.
- You will need the following information prepared:
- Coze info:
- Coze Bot ID
- Coze Token
  - Coze Base URL (only needed if you access a non-global environment)
- STT info, any supported STT can be used. [Deepgram](https://deepgram.com/) is relatively easy to register and get started with.
- TTS info, any supported TTS can be used. [Fish.Audio](https://fish.audio/) is relatively easy to register and get started with.
- RTC info, currently only Agora RTC is supported. You can register your account at [Agora](https://www.agora.io/). We assume you have your App ID and App Certificate ready when you configure your `.env` file.
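
The Agora credentials mentioned above typically go into the playground's `.env` file. A minimal sketch is below; the exact variable names are an assumption, so confirm them against the `.env.example` shipped with your checkout:

```shell
# Hypothetical .env fragment -- verify variable names against .env.example
AGORA_APP_ID=your_agora_app_id
AGORA_APP_CERTIFICATE=your_agora_app_certificate
```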

## Steps

1. Open the playground at [localhost:3000](http://localhost:3000) to configure your agent.
2. Select the graph type `voice_assistant`.
3. Click on `Module Picker` to open the module selection.
4. If your preferred STT/TTS module is not selected by default, you can select it from the dropdown list. Note that you will need to configure the module with the correct information, such as an API key.
5. In the `LLM` module dropdown, select `Coze Chat Bot`.
6. Click on `Save Change` to apply the module to the graph.
7. Click on the button to the right of the graph selection to open the property configuration. You will see a list of properties that can be configured for the `Coze Chat Bot` module.
8. Configure the `Coze Bot ID`, `Coze Token`, and `Coze Base URL` properties with the information you prepared.
9. Click on `Save Change` to apply the properties to the `Coze Chat Bot` module.
10. If you see the success toast, the properties were successfully applied to the `Coze Chat Bot` module.
11. You are all set! Start speaking to the Coze Bot by clicking on the `Connect` button. Note that you will need to wait a few seconds for the agent to initialize itself.

## Using Azure STT

Azure STT is integrated within the RTC extension module. Therefore, to use Azure STT, you will need to select the `voice_assistant_integrated_stt` graph type.
42 changes: 42 additions & 0 deletions docs/ten_agent/playground/run_dify.md
@@ -0,0 +1,42 @@
# How to make Dify Chat Bot Speak

In this tutorial, we will show you how to make the Dify Bot speak in the TEN-Agent playground.

## Prerequisites

- Make sure you have the TEN-Agent playground running. If not, follow the [Run Playground](https://doc.theten.ai/ten-agent/quickstart) guide to start the playground.
- You will need the following information prepared:
- Dify info:
- Dify API Key
- STT info, any supported STT can be used. [Deepgram](https://deepgram.com/) is relatively easy to register and get started with.
- TTS info, any supported TTS can be used. [Fish.Audio](https://fish.audio/) is relatively easy to register and get started with.
- RTC info, currently only Agora RTC is supported. You can register your account at [Agora](https://www.agora.io/). We assume you have your App ID and App Certificate ready when you configure your `.env` file.

> You can use any Agent / Chat Assistant defined on the Dify platform. Each Agent / Chat Assistant has its own API Key.
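
Because every Dify Agent / Chat Assistant carries its own key, it can be worth sanity-checking a key outside the playground first. The sketch below builds (but does not send) a request for Dify's documented `chat-messages` endpoint; the base URL and field names follow Dify's public API docs and should be treated as assumptions — adjust the base URL for self-hosted deployments:

```python
import json
import urllib.request

DIFY_BASE_URL = "https://api.dify.ai/v1"  # adjust for self-hosted Dify


def build_chat_request(api_key: str, query: str, user: str = "tester"):
    """Build a chat-messages request for a Dify app.

    Field names follow Dify's public API docs; verify them
    against your Dify version before relying on them.
    """
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "user": user,
    }
    return urllib.request.Request(
        f"{DIFY_BASE_URL}/chat-messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Construct the request only; send it yourself to verify the key.
    req = build_chat_request("your-dify-api-key", "Hello!")
    print(req.full_url)
```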

## Steps

1. Open the playground at [localhost:3000](http://localhost:3000) to configure your agent.
2. Select the graph type `voice_assistant`.
3. Click on `Module Picker` to open the module selection.
4. If your preferred STT/TTS module is not selected by default, you can select it from the dropdown list. Note that you will need to configure the module with the correct information, such as an API key.
5. In the `LLM` module dropdown, select `Dify Chat Bot`.
6. Click on `Save Change` to apply the module to the graph.
7. Click on the button to the right of the graph selection to open the property configuration. You will see a list of properties that can be configured for the `Dify Chat Bot` module.
8. Configure the `Dify API Key` property with the information you prepared.
9. Click on `Save Change` to apply the property to the `Dify Chat Bot` module.
10. If you see the success toast, the property was successfully applied to the `Dify Chat Bot` module.
11. You are all set! Start speaking to the Dify Bot by clicking on the `Connect` button. Note that you will need to wait a few seconds for the agent to initialize itself.

## Using Azure STT

Azure STT is integrated within the RTC extension module. Therefore, to use Azure STT, you will need to select the `voice_assistant_integrated_stt` graph type.

## Troubleshooting

If you encounter any issues, please check the following:

- Make sure you have the correct API Key for Dify Chat Bot.
- Make sure your Dify Chat Assistant has a valid model key configured.
- Make sure you have the correct STT and TTS modules selected and configured.
- Make sure you have the correct graph type selected.
42 changes: 42 additions & 0 deletions docs/ten_agent/playground/run_va.md
@@ -0,0 +1,42 @@
# Run Voice Assistant with Large Language Model

This guide will help you run the Voice Assistant with a Large Language Model in the TEN-Agent Playground.

## Prerequisites

- Make sure you have the TEN-Agent playground running. If not, follow the [Run Playground](https://doc.theten.ai/ten-agent/quickstart) guide to start the playground.
- You will need the following information prepared:
- STT info, any supported STT can be used. [Deepgram](https://deepgram.com/) is relatively easy to register and get started with.
- TTS info, any supported TTS can be used. [Fish.Audio](https://fish.audio/) is relatively easy to register and get started with.
- LLM info, any supported LLM can be used. It's recommended to use [OpenAI](https://openai.com).
- RTC info, currently only Agora RTC is supported. You can register your account at [Agora](https://www.agora.io/). We assume you have your App ID and App Certificate ready when you configure your `.env` file.

## Steps

1. Open the playground at [localhost:3000](http://localhost:3000) to configure your agent.
2. Select the graph type `voice_assistant`.
3. Click on `Module Picker` to open the module selection.
4. If your preferred STT/TTS module is not selected by default, you can select it from the dropdown list. Note that you will need to configure the module with the correct information, such as an API key.
5. In the `LLM` module dropdown, select your preferred Large Language Model.
6. Click on `Save Change` to apply the module to the graph.
7. Click on the button to the right of the graph selection to open the property configuration. You will see a list of properties that can be configured for the selected Large Language Model.
8. Configure the properties with the information you prepared.
9. Click on `Save Change` to apply the properties to the Large Language Model.
10. If you see the success toast, the properties were successfully applied to the Large Language Model.
11. You are all set! Start speaking to the Voice Assistant by clicking on the `Connect` button. Note that you will need to wait a few seconds for the agent to initialize itself.

## Using Azure STT

Azure STT is integrated within the RTC extension module. Therefore, to use Azure STT, you will need to select the `voice_assistant_integrated_stt` graph type.

## Bind Weather Tool to your LLM

You can bind the weather tool to your LLM module in the TEN-Agent Playground.
It is recommended to use the OpenAI LLM for the steps below.

1. With your agent running, open the Module Picker.
2. Click on the button to the right of the LLM module to open the tool selection.
3. Select `Weather Tool` from the popover list.
4. Click on `Save Change` to apply the tool to the LLM module.
5. If you see the success toast, the tool is successfully applied to the LLM module.
6. You are all set! Now you can ask the agent about the weather by speaking to it.
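
Under the hood, "binding" the weather tool amounts to registering a function schema with the LLM so the model can emit tool calls the agent then executes. A minimal sketch in OpenAI's function-calling format — the actual schema TEN-Agent registers may differ, and `get_current_weather` with its parameters is purely illustrative:

```python
# Illustrative OpenAI-style tool schema; TEN-Agent's real weather tool
# definition may use different names and fields.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. 'Tokyo'",
                },
            },
            "required": ["location"],
        },
    },
}

# The playground passes schemas like this to the LLM; when the model
# decides the user asked about weather, it returns a tool call whose
# arguments match the declared parameters.
```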
34 changes: 34 additions & 0 deletions docs/ten_agent/playground/run_va_realtime.md
@@ -0,0 +1,34 @@
# Run Voice Assistant with Voice to Voice Realtime API

This guide will help you run the Voice Assistant with the Voice-to-Voice Realtime API in the TEN-Agent Playground.

## Prerequisites

- Make sure you have the TEN-Agent playground running. If not, follow the [Run Playground](https://doc.theten.ai/ten-agent/quickstart) guide to start the playground.
- You will need the following information prepared:
- Realtime API Key
- RTC info, currently only Agora RTC is supported. You can register your account at [Agora](https://www.agora.io/). We assume you have your App ID and App Certificate ready when you configure your `.env` file.

## Steps

1. Open the playground at [localhost:3000](http://localhost:3000) to configure your agent.
2. Select the graph type `voice_assistant_realtime`.
3. Click on `Module Picker` to open the module selection.
4. Select your preferred V2V module from the dropdown list.
5. Click on `Save Change` to apply the module to the graph.
6. Click on the button to the right of the graph selection to open the property configuration. You will see a list of properties that can be configured for the selected V2V module.
7. Configure the `Realtime API Key` property with the information you prepared.
8. Click on `Save Change` to apply the property to the V2V module.
9. If you see the success toast, the property is successfully applied to the V2V module.
10. You are all set! Start speaking to the Voice Assistant by clicking on the `Connect` button. Note that you will need to wait a few seconds for the agent to initialize itself.

## Bind Weather Tool to your V2V

You can bind the weather tool to your V2V module in the TEN-Agent Playground.

1. With your agent running, open the Module Picker.
2. Click on the button to the right of the V2V module to open the tool selection.
3. Select `Weather Tool` from the popover list.
4. Click on `Save Change` to apply the tool to the V2V module.
5. If you see the success toast, the tool is successfully applied to the V2V module.
6. You are all set! Now you can ask the agent about the weather by speaking to it.