9. FAQ
1. The elasticsearch setup script fails during the deployment process?

This error is typically caused by CRLF line endings in `docker/volumes/elasticsearch/setup_es.sh` (common when the repository is checked out on Windows). Open the file in a code editor (such as VS Code), click the CRLF/LF indicator in the bottom-right corner of the editor, select LF, and save the file. Then restart with `docker compose --profile '*' up -d`. See also the related issue.
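If you prefer to fix the line endings from the command line instead of an editor, a `sed` one-liner works as well (a minimal sketch for Linux/WSL; `dos2unix` does the same job if it is installed, and macOS `sed` needs `sed -i ''`):

```bash
# Strip carriage returns (CRLF -> LF) from the setup script in place
sed -i 's/\r$//' docker/volumes/elasticsearch/setup_es.sh

# Then restart the services
docker compose --profile '*' up -d
```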
2. "Error response from daemon: Ports are not available: exposing port TCP http://0.0.0.0:2379 -> http://127.0.0.1:0" when deploying locally on Windows?
The port is usually not occupied by another process; instead, the Windows NAT service (winnat) has reserved it. First check whether anything is actually listening on the port, then restart winnat to release the reservation (run these in an Administrator prompt):

```
# Check port usage
netstat -ano | findstr :2379

# Restart the Windows NAT service to release reserved ports
net stop winnat
net start winnat
```
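If the error persists, the port may fall inside one of Windows' excluded port ranges, which you can inspect with `netsh` (an extra diagnostic step, not part of the original instructions):

```
# List TCP port ranges reserved by Windows/Hyper-V
netsh interface ipv4 show excludedportrange protocol=tcp
```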
3. How can I query specific error logs?

You can query the specific error logs with the following command:

```bash
docker logs coze-server | grep -i 'node execute failed'
```
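To narrow the search window, `docker logs` also accepts a `--since` filter (standard Docker CLI flags; the grep pattern is just an example):

```bash
# Only search log lines from the last 10 minutes
docker logs --since 10m coze-server 2>&1 | grep -i 'node execute failed'
```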
4. A text/table/image knowledge base is created successfully, but an uploaded file remains stuck at 'Processing' for a long time?
Please follow the steps below to check the base component configurations in the `docker/.env` file. For configuration details, refer to the documentation: 5. Base Component Configuration.
- Check the embedding configuration in the `.env` file (see the example snippet and the dimension check after this list):

  **`EMBEDDING_TYPE="openai"`**
  1. `OPENAI_EMBEDDING_BASE_URL`: coze-server typically uses SDKs provided by the various platform protocols to call APIs, so the `/embeddings` suffix is not required here. Please configure it according to the specific API provider's documentation.
  2. `OPENAI_EMBEDDING_BY_AZURE`: For non-Microsoft-Azure APIs, please set this to `false`.
  3. `OPENAI_EMBEDDING_DIMS` / `OPENAI_EMBEDDING_REQUEST_DIMS`:
     1. Set both keys to the same numerical value, based on the vector dimensions the model supports.
     2. Some model APIs report an error if vector dimensions are passed; in that case, comment out `OPENAI_EMBEDDING_REQUEST_DIMS`.
     3. Some models do not support setting vector dimensions at all; configure according to the model's documentation.

  **`EMBEDDING_TYPE="ark"`**
  1. `ARK_EMBEDDING_MODEL`:
     1. When using a model name, include the specific version, for example `doubao-embedding-large-text-250515`. When using an endpoint (like `ep-xxxx`), copy it completely.
     2. Currently (as of 0728), image vectorization models (`doubao-embedding-vision`) are not supported; please use text vectorization models. Support will be added soon.
  2. `ARK_EMBEDDING_DIMS`: Please refer to the official documentation for the embedding dimensions the model supports.

  **`EMBEDDING_TYPE="http"`**
  1. The http protocol currently has its own independent input/output format. You can refer to https://github.com/coze-dev/coze-studio/blob/main/backend/infra/impl/embedding/http/http.go for the encapsulation.
  2. Ollama is not currently supported. Please stay tuned.
- Check the OCR configuration in the `.env` file. Check whether `VE_OCR_AK` and `VE_OCR_SK` are filled in correctly. If OCR is not configured, please disable the OCR function during document processing.
- Check the model (`BUILTIN_CM_`) configuration in the `.env` file. Features like Agent query-to-SQL, query rewriting, and smart tagging for image knowledge bases depend on this configuration. If you don't need these features, you can leave it unconfigured. Check it by referring to the content mentioned in the [Model Configuration] section.
- After confirming that all the above configurations are correct, restart the service and try again:
  1. Restart the service: `docker compose --profile "*" up -d`
  2. Create a new knowledge base and upload a document.
- If it still fails, obtain the following two pieces of information and ask the developers for help in the user group / issues:
  1. Run the following to get the document processing log:
     ```bash
     container_id=$(docker ps -aqf "name=coze-server") && docker logs "$container_id" 2>&1 | grep 'HandleMessage'
     ```
     - If an error like `the num_rows (300) of field (dense_text_content) is not equal to passed num_rows (100): invalid parameter[expected=100][actual=300]` appears, it indicates a model dimension issue. Check whether the configured model supports dimension configuration and whether the output dimension from a manual API call matches the expectation (see the curl check below). No need to ask for help in this case.
     - If you see http timeout / connection refused errors, check your container network and env configuration.
  2. Run the following to enter the container:
     ```bash
     container_id=$(docker ps -aqf "name=coze-server") && docker exec -it $container_id /bin/sh
     ```
     Then, inside the container, run `cat .env` to get the configuration information. Remember to redact sensitive keys.
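As referenced in the embedding checklist above, here is a minimal sketch of an OpenAI-protocol embedding configuration. The URL, dimensions, and comments are illustrative placeholders, and your `.env` may contain additional embedding keys; only the keys discussed above are shown:

```bash
# docker/.env -- embedding section (illustrative values only)
EMBEDDING_TYPE="openai"
OPENAI_EMBEDDING_BASE_URL="https://api.example.com/v1"   # no /embeddings suffix
OPENAI_EMBEDDING_BY_AZURE=false                          # true only for Microsoft Azure
OPENAI_EMBEDDING_DIMS=1024                               # must match the model's output
OPENAI_EMBEDDING_REQUEST_DIMS=1024                       # comment out if the API rejects a dims parameter
```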
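To verify the dimension a model actually returns, you can call an OpenAI-compatible `/embeddings` endpoint by hand and count the vector length. This sketch assumes the endpoint follows the standard OpenAI embeddings response shape and that `jq` is installed; the URL and model name are placeholders:

```bash
# Count the dimensions of the returned embedding vector
curl -s https://api.example.com/v1/embeddings \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-embedding-model", "input": "hello"}' \
  | jq '.data[0].embedding | length'
```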
5. Why does the model report an error when debugging after uploading an image/file in an Agent conversation or a workflow's large model node?
The image/file links accessed by the large model must be publicly accessible URLs. The image storage component needs to be deployed on the public network. For details, refer to: Upload Component Configuration
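A quick way to check whether an uploaded file's URL is publicly reachable is to request its headers from a machine outside your deployment (plain `curl`; replace the URL with one copied from the conversation):

```bash
# A 200 response means the model will also be able to fetch the file
curl -I "https://your-storage-host/path/to/uploaded-image.png"
```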
6. Is it normal for the setup-related containers to exit after deployment?

Yes, this is normal. The `setup`-related containers are responsible for some script initialization tasks and will exit automatically after completion.
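You can confirm they finished cleanly by listing stopped containers as well; an `Exited (0)` status means the init script succeeded (standard Docker CLI, filtering on the assumption that the container names contain "setup"):

```bash
# Show setup containers, including stopped ones
docker ps -a --filter "name=setup"
```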
7. Why can't users select or switch models in the model list?

- Problem phenomenon: Users are unable to select or switch models in the model list when building Agents, workflows, or Coze applications.
- Problem cause: When configuring models, the developer assigned the same ID to different models in different model files, causing a model ID conflict.
- Solution:
  1. Open all model configuration YAML files under `backend/conf/model`.
  2. Find the duplicate IDs and assign a new, unique, non-zero integer ID to each model.
  3. Execute the command to restart the service:
     ```bash
     docker compose --profile "*" up -d
     ```
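To locate duplicate IDs quickly, you can grep them out of the model files (a sketch that assumes each file declares a top-level `id:` field and uses the `.yaml` extension; adjust the glob if yours differ):

```bash
# Print id values that appear in more than one model file
grep -h "^id:" backend/conf/model/*.yaml | sort | uniq -d
```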
8. Why does the model report "Internal server error" during conversations or workflow runs?

When conversing with or debugging an Agent, or running a large model node in a workflow, if the model returns an "Internal server error" or similar messages, or if you see similar information in the system logs, it means there is an error in the model configuration file. Common error messages and corresponding log information for this scenario are as follows:
- Conversation error `Something error:Internal server error`, with a corresponding log message like `status code: 404, status: 404 Not Found, message: Invalid URL (POST xxxx/chat/completions)` or `can't fetch endpoint sts token without endpoint`.
- Error `connection refused` in the conversation or logs.
The above error messages indicate that the model configuration file is incorrect. It is recommended to check the YAML file configurations under `backend/conf/model` by following these steps. For complete model configuration details, refer to the documentation 3. Model Configuration.
- Check the general model configurations:
  - `id`: The `id` must be unique across all files, with no duplicates.
  - `base_url`: coze-server typically uses SDKs provided by the various model vendors to call models. Please configure this according to the specific model vendor's documentation. Note that Coze Studio does not require `base_url` to include a suffix like `/chat/completions`.
  - `api_key` and `model`: `api_key` and `model` must be correctly configured; otherwise, you cannot use them to call the model for conversations.
  - YAML file: Check that the YAML syntax is valid. If it is invalid, the system will prevent `coze-server` from starting. (A quick syntax check is sketched at the end of this answer.)
- Check the specific configurations for each model:

  Here is a checklist of model configurations that usually require special attention. For complete configuration methods, please refer to the documentation 3. Model Configuration.

  **OpenAI**
  - Check the `by_azure` field configuration. If the model service is provided by Microsoft Azure, this parameter should be set to `true`.

  **Ollama**
  1. Check `base_url`:
     1. If the container network mode is `bridge`, `localhost` inside the `coze-server` container is not the host's `localhost`. You need to change it to the IP of the machine where Ollama is deployed, or `http://host.docker.internal:11434` (see the connectivity check at the end of this answer).
     2. Do not add a path like `/v1` after the `ip:port`.
  2. Check `api_key`: If no API key is set, leave this parameter empty.
  3. Confirm that the firewall on the host machine where Ollama is deployed has opened port 11434.
  4. Confirm that Ollama's network is configured for external exposure.

  **Alibaba Cloud Bailian Platform**
  The Alibaba Cloud Bailian platform is compatible with the OpenAI protocol for model calls. In this case:
  - Set `protocol` to `openai`.
  - Set `base_url` to `https://dashscope.aliyuncs.com/compatible-mode/v1`.
  - Configure `api_key` and `model` with the values provided by Alibaba Cloud Bailian.
- After completing all checks, follow these steps to restart the service and try again:
  1. Execute the command to restart the service:
     ```bash
     docker compose --profile "*" up -d
     ```
  2. Create a new Agent and start a conversation.
  3. If the model still reports an error, get the logs with the command below and ask the developers for help in the user group / issues. For detailed instructions on getting logs, refer to Service Logs.
     ```bash
     docker logs coze-server | grep -i 'node execute failed'
     ```
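As mentioned in the general checks above, invalid YAML prevents `coze-server` from starting. A quick local syntax check, assuming Python with PyYAML is available on your machine (any YAML linter works equally well; the file name is a placeholder):

```bash
# Exits silently on valid YAML, prints a parse error otherwise
python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" backend/conf/model/your-model.yaml
```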
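For the Ollama case above, you can verify that the `base_url` is reachable from inside the container before editing the YAML. This sketch assumes `curl` is available in the coze-server image; otherwise run the same request from the host, substituting the host's view of the address:

```bash
# Ollama's /api/tags endpoint lists installed models; any JSON response means the URL is reachable
docker exec coze-server curl -s http://host.docker.internal:11434/api/tags
```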
9. How do I add third-party Python dependencies for the code node?

In the `coze-studio` project, the code node comes with two third-party dependency libraries by default: `httpx` and `numpy`. Coze Studio also allows developers to add other third-party Python libraries on their own. The detailed steps are as follows:
- Modify configuration files. In the `./scripts/setup/python.sh` script and the `./backend/Dockerfile` file, you can find the `third-party libraries` comment. Simply add the corresponding `pip install` command for the dependency directly below that comment in both files. For example, to add version 2.0.0 of `torch`:

  ```bash
  # If you want to use other third-party libraries, you can install them here.
  pip install torch==2.0.0
  ```
- Add the package names of the third-party modules in `./backend/conf/workflow/config.yaml`. For example, to add `torch`:

  ```yaml
  NodeOfCodeConfig:
    SupportThirdPartModules:
      - httpx
      - numpy
      - torch
  ```
- Run the following command to restart the service:

  ```bash
  docker compose --profile "*" up -d
  ```
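Since one of the edits lives in `./backend/Dockerfile`, the image may need to be rebuilt for the new dependency to take effect. If a plain restart doesn't pick it up, passing `--build` forces the rebuild (standard Docker Compose behavior, not specific to Coze Studio):

```bash
# Rebuild images whose Dockerfile changed, then restart
docker compose --profile "*" up -d --build
```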