
1. "coze-elasticsearch" container fails to start during deployment?

If the coze-elasticsearch container fails with an error during deployment, the cause is usually Windows-style (CRLF) line endings in the Elasticsearch setup script.

Open docker/volumes/elasticsearch/setup_es.sh in a code editor (such as VS Code). In the bottom-right corner of the editor you will see a CRLF or LF indicator; click it and select LF. Save the file and restart the services with docker compose --profile '*' up -d. See also the related issue.
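If you prefer the command line and have a Unix-like shell available (Git Bash or WSL on Windows), the carriage returns can be stripped directly. A minimal sketch, assuming GNU sed:

# Convert CRLF line endings to LF in place
sed -i 's/\r$//' docker/volumes/elasticsearch/setup_es.sh

Then restart with docker compose --profile '*' up -d as above.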

2. "Error response from daemon: Ports are not available: exposing port TCP http://0.0.0.0:2379 -> http://127.0.0.1:0" when deploying locally on Windows?

This usually means the port falls inside a range reserved by the Windows NAT (winnat) service rather than being held by another process. First check whether anything is actually listening on the port:

# Check port usage
netstat -ano | findstr :2379

If nothing else occupies it, restart the winnat service to release the reserved port ranges:

net stop winnat
net start winnat
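You can also list the port ranges Windows has reserved; if 2379 falls inside one of them, the winnat restart above is the usual remedy:

netsh interface ipv4 show excludedportrange protocol=tcp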

3. "Something error: Internal server error" during Agent conversation debugging?

You can query the specific error logs with the following command:

docker logs coze-server | grep -i 'node execute failed'

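If the grep turns up nothing, you can also follow the server log live while reproducing the error:

docker logs -f --tail 200 coze-server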

4. Why does the knowledge base show "Processing 0%" after uploading a document?

A text/table/image knowledge base is created successfully, but an uploaded file remains stuck at 'Processing' for a long time.


Please follow the steps below to check the base component configurations in the docker/.env file. For configuration details, refer to the documentation: 5. Base Component Configuration.

  1. Check the embedding configuration in the .env file. (A sample .env sketch and a quick dimension check appear at the end of this question.)

    EMBEDDING_TYPE="openai":
      1. OPENAI_EMBEDDING_BASE_URL: coze-server typically calls embedding APIs through the SDKs provided by the respective platforms, so the /embeddings suffix is usually not required. Configure it according to the specific API provider's documentation.
      2. OPENAI_EMBEDDING_BY_AZURE: for APIs not provided by Microsoft Azure, set this to false.
      3. OPENAI_EMBEDDING_DIMS / OPENAI_EMBEDDING_REQUEST_DIMS:
        1. Set both keys to the same numerical value, based on the vector dimensions the model supports.
        2. Some model APIs report an error when vector dimensions are passed; in that case, comment out OPENAI_EMBEDDING_REQUEST_DIMS.
        3. Some models do not support setting vector dimensions at all. Configure according to the model's documentation.

    EMBEDDING_TYPE="ark":
      1. ARK_EMBEDDING_MODEL:
        1. When using a model name, include the specific version, for example doubao-embedding-large-text-250515. When using an endpoint (like ep-xxxx), copy it in full.
        2. Currently (as of 0728), image vectorization models (doubao-embedding-vision) are not supported; please use text vectorization models. Support will be added soon.
      2. ARK_EMBEDDING_DIMS: refer to the official documentation for the embedding dimensions the model supports.

    EMBEDDING_TYPE="http":
      1. The http protocol currently has its own independent input/output format. See https://github.com/coze-dev/coze-studio/blob/main/backend/infra/impl/embedding/http/http.go for the encapsulation.
      2. Ollama is not currently supported. Please stay tuned.
  2. Check the OCR configuration in the .env file: verify that VE_OCR_AK and VE_OCR_SK are filled in correctly. If OCR is not configured, disable the OCR function during document processing.

  3. Check the model (BUILTIN_CM_) configuration in the .env file. Features such as Agent query-to-SQL, query rewriting, and smart tagging for image knowledge bases depend on this configuration. If you don't need these features, you can leave it unconfigured. Check it against the content in the [Model Configuration] section.

  4. After confirming that all the above configurations are correct, restart the service and try again.

    1. Restart the service: docker compose --profile "*" up -d
    2. Create a new knowledge base and upload a document.
    3. If it still fails, obtain the following two pieces of information and ask for help from developers in the user group / issues.
      1. container_id=$(docker ps -aqf "name=coze-server") && docker logs "$container_id" 2>&1 | grep 'HandleMessage' to get the document processing log.
        1. If an error like the num_rows (300) of field (dense_text_content) is not equal to passed num_rows (100): invalid parameter[expected=100][actual=300] appears, it indicates a model dimension issue. Please check if the configured model supports dimension configuration and if the output dimension from a manual API call matches the expectation. No need to ask for help in this case.
        2. If you see http timeout / connection refused errors, please check your container network and env configuration.
      2. container_id=$(docker ps -aqf "name=coze-server") && docker exec -it $container_id /bin/sh, then inside the container, run cat .env to get the configuration information. Remember to redact sensitive keys.
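For reference, here is a minimal .env sketch for the "openai" embedding type, using only the keys discussed in step 1. The values are illustrative placeholders, not defaults:

    EMBEDDING_TYPE="openai"
    # Base URL per your provider's docs; the /embeddings suffix is usually not needed
    OPENAI_EMBEDDING_BASE_URL="https://your-provider.example.com/v1"
    # Set to true only if the service is hosted on Microsoft Azure
    OPENAI_EMBEDDING_BY_AZURE=false
    # Both dimension keys set to the same value the model supports
    OPENAI_EMBEDDING_DIMS=1024
    OPENAI_EMBEDDING_REQUEST_DIMS=1024   # comment out if the API rejects explicit dimensions

To verify the vector dimension a model actually returns (useful for the num_rows mismatch described above), you can call the embedding API by hand. A sketch assuming an OpenAI-compatible /embeddings endpoint and jq installed; substitute your own base URL, key, and model name:

    curl -s "https://your-provider.example.com/v1/embeddings" \
      -H "Authorization: Bearer $YOUR_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "your-embedding-model", "input": "dimension check"}' \
      | jq '.data[0].embedding | length'

The printed number should match OPENAI_EMBEDDING_DIMS; if it does not, adjust the configuration rather than asking for help.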

5. Why does the model report an error when debugging after uploading an image/file in an Agent conversation or a workflow's large model node?

The image/file links passed to the large model must be publicly accessible URLs, which means the image storage component needs to be deployed on the public network. For details, refer to: Upload Component Configuration.
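A quick way to confirm that a stored file really is public is to request its URL from a machine outside your deployment network (the URL below is a placeholder):

curl -I "https://your-storage.example.com/path/to/uploaded-image.png"

An HTTP 200 response means the model vendor's servers should be able to fetch the file as well; a timeout or 4xx response means the upload component is not publicly exposed yet.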

6. Is it normal for "setup"-related containers to be in an exited state after the service starts?

Yes, this is normal. The setup-related containers are responsible for some script initialization tasks and will exit automatically after completion.

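To confirm which containers these are and that they exited cleanly (exit code 0), you can list them; this sketch assumes the setup containers carry "setup" in their names, as the question suggests:

docker ps -a --filter "name=setup" --format "table {{.Names}}\t{{.Status}}"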

7. What to do if I can't switch models in the model list?

  • Symptom: Users are unable to select or switch models in the model list when building Agents, workflows, or Coze applications.
  • Cause: When configuring models, the developer assigned the same ID to different models in different model files, causing a model ID conflict.
  • Solution:
    1. Open all model configuration YAML files under backend/conf/model.
    2. Find the duplicate IDs and assign a new, unique, non-zero integer ID to each model (a helper one-liner follows below).
    3. Execute the command to restart the service: docker compose --profile "*" up -d
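To locate duplicate IDs quickly, a shell one-liner sketch, assuming each model file declares its ID on a line starting with id::

# Print any id: lines that appear more than once across the model files
grep -rhE '^id:' backend/conf/model | sort | uniq -d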

8. What to do if the model reports an error (e.g., "Internal server error")?

When conversing with or debugging an Agent, or running a large model node in a workflow, the model may return "Internal server error" or a similar message, or you may see similar information in the system logs. This usually means there is an error in the model configuration file. Common error messages and their corresponding log entries are:

  • Conversation error Something error: Internal server error, with a corresponding log message like status code: 404, status: 404 Not Found, message: Invalid URL (POST xxxx/chat/completions) or can't fetch endpoint sts token without endpoint.
  • Error connection refused in the conversation or logs.

The above error messages indicate that the model configuration file is incorrect. It is recommended to check the YAML file configurations under backend/conf/model by following these steps. For complete model configuration details, refer to the documentation 3. Model Configuration.

  1. Check the general model configurations:

    • id: The id must be unique across all files, with no duplicates.
    • base_url: coze-server typically uses SDKs provided by various model vendors to call models. Please configure this according to the specific model vendor's documentation. Note that Coze Studio does not require base_url to include a suffix like /chat/completions.
    • api_key and model: api_key and model must be correctly configured; otherwise, you cannot use them to call the model for conversations.
    • YAML file: Check if the YAML syntax is valid. If it's invalid, the system will prevent coze-server from starting.
  2. Check the specific configurations for each model:

    Here is a checklist of model configurations that usually require special attention. For complete configuration methods, please refer to the documentation 3. Model Configuration.

    OpenAI:
      Check the by_azure field. If the model service is provided by Microsoft Azure, this parameter should be set to true.

    Ollama:
      1. Check base_url:
        1. If the container network mode is bridge, localhost inside the coze-server container is not the host's localhost. Change it to the IP of the machine where Ollama is deployed, or to http://host.docker.internal:11434.
        2. Do not append a path like /v1 after the ip:port.
      2. Check api_key: if no API key is set, leave this parameter empty.
      3. Confirm that the firewall on the host machine where Ollama is deployed allows port 11434.
      4. Confirm that Ollama's network is configured for external exposure.

    Alibaba Cloud Bailian Platform:
      The Alibaba Cloud Bailian platform is compatible with the OpenAI protocol for model calls. In this case:
      * Set protocol to openai.
      * Set base_url to https://dashscope.aliyuncs.com/compatible-mode/v1.
      * Configure api_key and model with the values provided by Alibaba Cloud Bailian.
      (A minimal YAML sketch for this case appears at the end of this question.)
  3. After completing all checks, follow these steps to restart the service and try again.

    1. Execute the command to restart the service: docker compose --profile "*" up -d
    2. Create a new Agent and start a conversation. If the model still reports an error, follow the steps below to get the logs and ask for help from developers in the user group / issues. For detailed instructions on getting logs, refer to Service Logs.
      docker logs coze-server | grep -i 'node execute failed'
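As a concrete reference for the Alibaba Cloud Bailian case above, a minimal YAML sketch with the fields from this checklist. The surrounding file structure (and any other required fields) should be taken from the templates shipped under backend/conf/model; the id and model values here are hypothetical:

    id: 10001                 # must be a unique non-zero integer across all files
    protocol: openai          # Bailian is OpenAI-compatible
    base_url: https://dashscope.aliyuncs.com/compatible-mode/v1   # no /chat/completions suffix
    api_key: sk-xxxx          # value provided by Alibaba Cloud Bailian
    model: qwen-plus          # hypothetical model name; use the one you enabled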

9. How to add Python third-party libraries to a workflow code node?

In the coze-studio project, the code node comes with two third-party dependency libraries by default: httpx and numpy. Coze Studio also allows developers to add other third-party Python libraries on their own. The detailed steps are as follows:

  1. Modify the configuration files. In the ./scripts/setup/python.sh script and the ./backend/Dockerfile file, locate the third-party libraries comment and add the corresponding pip install command for the dependency directly below it in both files. For example, to add version 2.0.0 of torch:

    # If you want to use other third-party libraries, you can install them here.
    pip install torch==2.0.0
  2. Add the package names of third-party modules in ./backend/conf/workflow/config.yaml. For example, to add torch:

    NodeOfCodeConfig:
        SupportThirdPartModules:
            - httpx
            - numpy
            - torch
  3. Run the following command to restart the service.

    docker compose --profile "*" up -d
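To confirm the package actually made it into the runtime, you can check inside the server container after the restart. This assumes the code node's Python environment lives in the coze-server container, which the Dockerfile change above implies:

    docker exec coze-server pip show torch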