
Thread runs sometimes throws HttpException, Graph 'xxx' not found #3120


Closed
Jackoder opened this issue Jan 21, 2025 · 8 comments


@Jackoder

Checked other resources

  • This is a bug, not a usage question. For questions, please use GitHub Discussions.
  • I added a clear and detailed title that summarizes the issue.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.

Example Code

import requests
import time
import logging

# Configure logging
logging.basicConfig(
    filename='thread_process.log',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)

console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)

def log_request_response(response):
    logging.info(f"Request URL: {response.request.url}")
    logging.info(f"Request Method: {response.request.method}")
    logging.info(f"Request Headers: {response.request.headers}")
    if response.request.body:
        logging.info(f"Request Body: {response.request.body}")
    logging.info(f"Response Status Code: {response.status_code}")
    logging.info(f"Response Headers: {response.headers}")
    logging.info(f"Response Body: {response.text}")

import uuid

def create_thread():
    # thread_id = str(uuid.uuid4())  # Generate a UUID for thread_id
    url = "http://127.0.0.1:8123/threads"
    payload = {
    }
    response = requests.post(url, json=payload)
    log_request_response(response)
    response.raise_for_status()
    thread_id = response.json()["thread_id"]
    logging.info(f"Thread created with ID: {thread_id}")
    return thread_id

def run_thread(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}/runs/wait"
    payload = {
        "assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d",
        "input": {
            "knowledge": "Camera",
            "node": "Rule"
        },
        "after_seconds": 1
    }
    response = requests.post(url, json=payload)
    log_request_response(response)
    response.raise_for_status()
    run_id = response.json()["run_id"]
    logging.info(f"Thread run initiated with Run ID: {run_id}")
    return run_id

def get_thread_status(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}/runs"
    response = requests.get(url)
    log_request_response(response)
    response.raise_for_status()
    status = response.json()[0]["status"]
    logging.info(f"Current status: {status}")
    return status

def get_thread_result(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}"
    response = requests.get(url)
    log_request_response(response)
    response.raise_for_status()
    result = response.json()
    logging.info(f"Thread result: {result}")
    return result

def execute_test():
    try:
        thread_id = create_thread()
        retry_count = 0
        run_id = run_thread(thread_id)
        status = get_thread_status(thread_id)
        while status != "success":
            if status == "error":
                retry_count += 1
                if retry_count > 3:
                    logging.error("Thread run failed after 3 retries.")
                    return False
                logging.warning(f"Thread run encountered an error. Retrying {retry_count}/3...")
                run_id = run_thread(thread_id)  # Retry running the thread
            # time.sleep(2)
            status = get_thread_status(thread_id)
        result = get_thread_result(thread_id)
        return True
    except Exception as e:
        logging.error(f"Exception occurred: {e}")
        return False

def main():
    success_count = 0
    total_runs = 10
    for i in range(total_runs):
        logging.info(f"Starting test run {i+1}/{total_runs}")
        if execute_test():
            success_count += 1
        logging.info(f"Test run {i+1}/{total_runs} completed")

    success_rate = (success_count / total_runs) * 100
    logging.info(f"Success rate: {success_rate}%")

if __name__ == "__main__":
    main()

Error Message and Stack Trace (if applicable)

2025-01-21 12:33:33,813 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-01-21 12:33:33,813 - INFO - Request Method: POST
2025-01-21 12:33:33,813 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-01-21 12:33:33,813 - INFO - Request Body: b'{}'
2025-01-21 12:33:33,813 - INFO - Response Status Code: 200
2025-01-21 12:33:33,813 - INFO - Response Headers: {'date': 'Tue, 21 Jan 2025 04:33:33 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-01-21 12:33:33,813 - INFO - Response Body: {"thread_id":"2a6608cc-ec49-41f6-8926-a249dda7ff97","created_at":"2025-01-21T12:33:34.919186+08:00","updated_at":"2025-01-21T12:33:34.919186+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-01-21 12:33:33,813 - INFO - Thread created with ID: 2a6608cc-ec49-41f6-8926-a249dda7ff97
2025-01-21 12:33:35,551 - INFO - Request URL: http://127.0.0.1:8123/threads/2a6608cc-ec49-41f6-8926-a249dda7ff97/runs/wait
2025-01-21 12:33:35,551 - INFO - Request Method: POST
2025-01-21 12:33:35,552 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-01-21 12:33:35,552 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-01-21 12:33:35,552 - INFO - Response Status Code: 200
2025-01-21 12:33:35,552 - INFO - Response Headers: {'date': 'Tue, 21 Jan 2025 04:33:33 GMT', 'server': 'uvicorn', 'location': '/threads/2a6608cc-ec49-41f6-8926-a249dda7ff97/runs/1efd7b0d-fa56-665c-a80c-a655ff544eea/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-01-21 12:33:35,553 - INFO - Response Body: {"__error__":{"error":"HTTPException","message":"404: Graph 'rule' not found"}}
2025-01-21 12:33:35,553 - ERROR - Exception occurred: 'run_id'

Description

After deploying with Docker, the Runs API sometimes throws a Graph 'rule' not found error, with an error rate of about 20%. Is there a problem somewhere, and how can it be resolved?

https://langchain-ai.github.io/langgraph/how-tos/deploy-self-hosted/#using-docker

System Info

System Information

OS: Linux
OS Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Python Version: 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 9.4.0]

Package Information

langchain_core: 0.3.30
langchain: 0.3.12
langchain_community: 0.3.10
langsmith: 0.1.147
langchain_anthropic: 0.3.0
langchain_fireworks: 0.2.5
langchain_openai: 0.3.0
langchain_text_splitters: 0.3.3
langchainhub: 0.1.21
langgraph_api: 0.0.15
langgraph_cli: 0.1.65
langgraph_license: Installed. No version info available.
langgraph_sdk: 0.1.48
langgraph_storage: Installed. No version info available.
langserve: 0.3.0

Other Dependencies

aiohttp: 3.11.10
anthropic: 0.40.0
async-timeout: Installed. No version info available.
click: 8.1.7
cryptography: 43.0.3
dataclasses-json: 0.6.7
defusedxml: 0.7.1
fastapi: 0.115.6
fireworks-ai: 0.15.10
httpx: 0.28.1
httpx-sse: 0.4.0
jsonpatch: 1.33
jsonschema-rs: 0.25.1
langgraph: 0.2.63
langgraph-checkpoint: 2.0.10
langsmith-pyo3: Installed. No version info available.
numpy: 1.26.4
openai: 1.59.7
orjson: 3.10.12
packaging: 24.2
pydantic: 2.10.3
pydantic-settings: 2.6.1
pyjwt: 2.10.1
python-dotenv: 1.0.1
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.36
sse-starlette: 2.1.3
starlette: 0.41.3
structlog: 24.4.0
tenacity: 8.5.0
tiktoken: 0.8.0
types-requests: 2.32.0.20241016
typing-extensions: 4.12.2
uvicorn: 0.32.1
watchfiles: 1.0.0

@andrewnguonly
Contributor

@Jackoder, can you share the LangGraph Server logs (from Docker)?

@Jackoder
Author

Jackoder commented Feb 5, 2025

@andrewnguonly Thanks, here are the logs:

  • Server launch logs
2025-02-05 10:56:50 2025-02-05T02:56:50.949162Z [info     ] Using auth of type=noop        [langgraph_api.auth.middleware] api_revision=6148670 api_variant=local
2025-02-05 10:56:50 2025-02-05T02:56:50.950547Z [info     ] Started server process [1]     [uvicorn.error] api_revision=6148670 api_variant=local color_message='Started server process [\x1b[36m%d\x1b[0m]'
2025-02-05 10:56:50 2025-02-05T02:56:50.950816Z [info     ] Waiting for application startup. [uvicorn.error] api_revision=6148670 api_variant=local
2025-02-05 10:56:50 2025-02-05T02:56:50.951128Z [warning  ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=6148670 api_variant=local
2025-02-05 10:56:54 2025-02-05T02:56:54.902691Z [info     ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:56:55 2025-02-05T02:56:55.093882Z [info     ] No LANGGRAPH_STORE configuration found, using default configuration [langgraph_storage.database] api_revision=6148670 api_variant=local
2025-02-05 10:56:55 2025-02-05T02:56:55.094619Z [info     ] Postgres pool stats            [langgraph_storage.database] api_revision=6148670 api_variant=local connections_ms=122 connections_num=1 pool_available=1 pool_max=150 pool_min=1 pool_size=1 requests_num=1 requests_waiting=0 usage_ms=21
2025-02-05 10:56:55 2025-02-05T02:56:55.196788Z [info     ] Redis pool stats               [langgraph_storage.redis] api_revision=6148670 api_variant=local idle_connections=1 in_use_connections=0 max_connections=500
2025-02-05 10:57:14 2025-02-05T02:57:14.410602Z [warning  ] Failed to get info from https://api.smith.langchain.com: LangSmithConnectionError('Connection error caused failure to GET /info in LangSmith API. Please confirm your internet connection. SSLError(MaxRetryError("HTTPSConnectionPool(host=\'api.smith.langchain.com\', port=443): Max retries exceeded with url: /info (Caused by SSLError(SSLEOFError(8, \'[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1018)\')))"))\nContent-Length: None\nAPI Key: lsv2_********************************************22') [langsmith.client] api_revision=6148670 api_variant=local
2025-02-05 10:57:29 2025-02-05T02:57:29.652929Z [info     ] Registering graph with id 'song' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=song
2025-02-05 10:57:39 2025-02-05T02:57:39.037043Z [info     ] Registering graph with id 'react_agent' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=react_agent
2025-02-05 10:57:39 2025-02-05T02:57:39.060521Z [info     ] Registering graph with id 'story' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=story
2025-02-05 10:57:39 2025-02-05T02:57:39.084546Z [info     ] Registering graph with id 'storyboard' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=storyboard
2025-02-05 10:57:39 2025-02-05T02:57:39.121216Z [info     ] Registering graph with id 'intention' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=intention
2025-02-05 10:57:39 2025-02-05T02:57:39.151798Z [info     ] Registering graph with id 'character' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=character
2025-02-05 10:57:39 2025-02-05T02:57:39.179637Z [info     ] Registering graph with id 'scene' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=scene
2025-02-05 10:57:39 2025-02-05T02:57:39.208682Z [info     ] Registering graph with id 'camera' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=camera
2025-02-05 10:57:39 2025-02-05T02:57:39.265169Z [info     ] Registering graph with id 'rule' [langgraph_api.graph] api_revision=6148670 api_variant=local graph_id=rule
2025-02-05 10:57:39 2025-02-05T02:57:39.286541Z [info     ] Starting metadata loop         [langgraph_api.metadata] api_revision=6148670 api_variant=local
2025-02-05 10:57:39 2025-02-05T02:57:39.287240Z [info     ] Application startup complete.  [uvicorn.error] api_revision=6148670 api_variant=local
2025-02-05 10:57:39 2025-02-05T02:57:39.287314Z [info     ] Starting 10 background workers [langgraph_api.queue] api_revision=6148670 api_variant=local
2025-02-05 10:57:39 2025-02-05T02:57:39.288067Z [info     ] Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) [uvicorn.error] api_revision=6148670 api_variant=local color_message='Uvicorn running on \x1b[1m%s://%s:%d\x1b[0m (Press CTRL+C to quit)'
2025-02-05 10:57:39 2025-02-05T02:57:39.288448Z [info     ] Worker stats                   [langgraph_api.queue] active=0 api_revision=6148670 api_variant=local available=10 max=10
2025-02-05 10:57:39 2025-02-05T02:57:39.330227Z [info     ] Queue stats                    [langgraph_api.queue] api_revision=6148670 api_variant=local med_age_secs=None min_age_secs=None n_pending=0
2025-02-05 10:57:39 2025-02-05T02:57:39.767395Z [info     ] HTTP Request: POST https://api.smith.langchain.com/v1/metadata/submit "HTTP/1.1 204 No Content" [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:57:42 2025-02-05T02:57:42.059771Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:57:47 2025-02-05T02:57:47.237808Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:57:52 2025-02-05T02:57:52.424989Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:57:55 2025-02-05T02:57:55.092408Z [info     ] Postgres pool stats            [langgraph_storage.database] api_revision=6148670 api_variant=local pool_available=0 pool_max=150 pool_min=1 pool_size=1 requests_num=38 requests_waiting=0 usage_ms=1986
2025-02-05 10:57:55 2025-02-05T02:57:55.195386Z [info     ] Redis pool stats               [langgraph_storage.redis] api_revision=6148670 api_variant=local idle_connections=1 in_use_connections=0 max_connections=500
2025-02-05 10:57:57 2025-02-05T02:57:57.606542Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
  • Test script logs

It threw the error Response Body: {"__error__":{"error":"HTTPException","message":"404: Graph 'rule' not found"}} twice in 5 runs.

2025-02-05 10:47:03,175 - INFO - Starting test run 1/5
2025-02-05 10:47:03,225 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-02-05 10:47:03,226 - INFO - Request Method: POST
2025-02-05 10:47:03,226 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-02-05 10:47:03,226 - INFO - Request Body: b'{}'
2025-02-05 10:47:03,226 - INFO - Response Status Code: 200
2025-02-05 10:47:03,226 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:02 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-02-05 10:47:03,226 - INFO - Response Body: {"thread_id":"bd8084f7-953f-4e91-a280-ec28bad06acb","created_at":"2025-02-05T10:47:04.004565+08:00","updated_at":"2025-02-05T10:47:04.004565+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-02-05 10:47:03,226 - INFO - Thread created with ID: bd8084f7-953f-4e91-a280-ec28bad06acb
2025-02-05 10:47:08,416 - INFO - Request URL: http://127.0.0.1:8123/threads/bd8084f7-953f-4e91-a280-ec28bad06acb/runs/wait
2025-02-05 10:47:08,416 - INFO - Request Method: POST
2025-02-05 10:47:08,416 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-02-05 10:47:08,416 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-02-05 10:47:08,416 - INFO - Response Status Code: 200
2025-02-05 10:47:08,416 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:02 GMT', 'server': 'uvicorn', 'location': '/threads/bd8084f7-953f-4e91-a280-ec28bad06acb/runs/1efe36b7-ac17-6434-a83b-3573a638bcd1/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-02-05 10:47:08,416 - INFO - Response Body: 
{"messages":[{"content":"检索知识:content=None, knowledge=Camera。\n检索节点规则:subgraph=None, node=Rule。","additional_kwargs":{},"response_metadata":{},"type":"human","name":null,"id":"747deb57-818b-4390-b785-b341458fe395","example":false},{"content":"","additional_kwargs":{"tool_calls":[{"id":"call_Q1viGajN9qjYKeRwhdg6ohQW","function":{"arguments":"{\"content\": null, \"knowledge\": \"Camera\"}","name":"Retrieval-Knowledge"},"type":"function"},{"id":"call_lamPKTuEGnXsYrncFmjgnMxy","function":{"arguments":"{\"subgraph\": null, \"node\": \"Rule\"}","name":"Query-Rules"},"type":"function"}],"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":58,"prompt_tokens":103,"total_tokens":161,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"tool_calls","logprobs":null},"type":"ai","name":null,"id":"run-588e7f58-4b7f-40b3-bd92-5ac6bfd05b81-0","example":false,"tool_calls":[{"name":"Retrieval-Knowledge","args":{"content":null,"knowledge":"Camera"},"id":"call_Q1viGajN9qjYKeRwhdg6ohQW","type":"tool_call"},{"name":"Query-Rules","args":{"subgraph":null,"node":"Rule"},"id":"call_lamPKTuEGnXsYrncFmjgnMxy","type":"tool_call"}],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":103,"output_tokens":58,"total_tokens":161,"input_token_details":{},"output_token_details":{}}},{"content":"完成节点知识检索:None","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Retrieval-Knowledge","id":"93ab3db3-8907-4c93-a4f1-1a1266002a24","tool_call_id":"call_Q1viGajN9qjYKeRwhdg6ohQW","artifact":null,"status":"success"},{"content":"完成节点(None/Rule)规则查询","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Query-Rules","id":"fcda4b9c-9b5d-4720-842a-dda9dab97bef","tool_call_id":"call_lamPKTuEGnXsYrncFmjgnMxy","artifact":null,"status":"success"},{"content":"检索结果:\n\n1. 知识检索(Camera):完成节点知识检索,结果为 None。\n2. 节点规则查询(Rule):完成节点 (None/Rule) 规则查询。","additional_kwargs":{"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":46,"prompt_tokens":243,"total_tokens":289,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"stop","logprobs":null},"type":"ai","name":null,"id":"run-e6015717-4c73-48dd-a8e9-62ddcc2270e3-0","example":false,"tool_calls":[],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":243,"output_tokens":46,"total_tokens":289,"input_token_details":{},"output_token_details":{}}}],"knowledge":"Camera","node":"Rule","chunks":[],"rules":[]}
2025-02-05 10:47:08,416 - ERROR - Exception occurred: 'run_id'
2025-02-05 10:47:08,416 - INFO - Test run 1/5 completed
2025-02-05 10:47:08,416 - INFO - Starting test run 2/5
2025-02-05 10:47:08,463 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-02-05 10:47:08,463 - INFO - Request Method: POST
2025-02-05 10:47:08,463 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-02-05 10:47:08,464 - INFO - Request Body: b'{}'
2025-02-05 10:47:08,464 - INFO - Response Status Code: 200
2025-02-05 10:47:08,464 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:07 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-02-05 10:47:08,464 - INFO - Response Body: {"thread_id":"50ac398a-8533-4e57-818f-a30f2bc38aa3","created_at":"2025-02-05T10:47:09.244095+08:00","updated_at":"2025-02-05T10:47:09.244095+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-02-05 10:47:08,464 - INFO - Thread created with ID: 50ac398a-8533-4e57-818f-a30f2bc38aa3
2025-02-05 10:47:12,093 - INFO - Request URL: http://127.0.0.1:8123/threads/50ac398a-8533-4e57-818f-a30f2bc38aa3/runs/wait
2025-02-05 10:47:12,093 - INFO - Request Method: POST
2025-02-05 10:47:12,093 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-02-05 10:47:12,093 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-02-05 10:47:12,093 - INFO - Response Status Code: 200
2025-02-05 10:47:12,093 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:08 GMT', 'server': 'uvicorn', 'location': '/threads/50ac398a-8533-4e57-818f-a30f2bc38aa3/runs/1efe36b7-de0a-6183-8fcf-b11fe2f9ba63/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-02-05 10:47:12,093 - INFO - Response Body: {"messages":[{"content":"检索知识:content=None, knowledge=Camera。\n检索节点规则:subgraph=None, node=Rule。","additional_kwargs":{},"response_metadata":{},"type":"human","name":null,"id":"d763c104-3940-46ab-b706-dafbfb1f2045","example":false},{"content":"","additional_kwargs":{"tool_calls":[{"id":"call_ISUXKQYiuXeEXij6113IfGJX","function":{"arguments":"{\"content\": null, \"knowledge\": \"Camera\"}","name":"Retrieval-Knowledge"},"type":"function"},{"id":"call_tLmkFagliQlTU8PcvIiYBUOK","function":{"arguments":"{\"subgraph\": null, \"node\": \"Rule\"}","name":"Query-Rules"},"type":"function"}],"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":58,"prompt_tokens":103,"total_tokens":161,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"tool_calls","logprobs":null},"type":"ai","name":null,"id":"run-650770dc-2e8c-428b-9d06-f7453dbfe152-0","example":false,"tool_calls":[{"name":"Retrieval-Knowledge","args":{"content":null,"knowledge":"Camera"},"id":"call_ISUXKQYiuXeEXij6113IfGJX","type":"tool_call"},{"name":"Query-Rules","args":{"subgraph":null,"node":"Rule"},"id":"call_tLmkFagliQlTU8PcvIiYBUOK","type":"tool_call"}],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":103,"output_tokens":58,"total_tokens":161,"input_token_details":{},"output_token_details":{}}},{"content":"完成节点知识检索:None","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Retrieval-Knowledge","id":"4c434f37-4716-4e1a-90c8-b8fd22752f55","tool_call_id":"call_ISUXKQYiuXeEXij6113IfGJX","artifact":null,"status":"success"},{"content":"完成节点(None/Rule)规则查询","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Query-Rules","id":"e32c3d9a-1d0d-4755-bcf2-d4c2743ad184","tool_call_id":"call_tLmkFagliQlTU8PcvIiYBUOK","artifact":null,"status":"success"},{"content":"检索知识和节点规则的查询已完成:\n\n1. 知识检索结果:未能找到与“Camera”相关的知识。\n2. 节点规则查询结果:已完成对节点(None/Rule)的规则查询。","additional_kwargs":{"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":53,"prompt_tokens":243,"total_tokens":296,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"stop","logprobs":null},"type":"ai","name":null,"id":"run-c49a7ee9-fcc6-42f8-9535-ef254d67550c-0","example":false,"tool_calls":[],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":243,"output_tokens":53,"total_tokens":296,"input_token_details":{},"output_token_details":{}}}],"knowledge":"Camera","node":"Rule","chunks":[],"rules":[]}
2025-02-05 10:47:12,093 - ERROR - Exception occurred: 'run_id'
2025-02-05 10:47:12,093 - INFO - Test run 2/5 completed
2025-02-05 10:47:12,093 - INFO - Starting test run 3/5
2025-02-05 10:47:12,118 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-02-05 10:47:12,118 - INFO - Request Method: POST
2025-02-05 10:47:12,118 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-02-05 10:47:12,118 - INFO - Request Body: b'{}'
2025-02-05 10:47:12,118 - INFO - Response Status Code: 200
2025-02-05 10:47:12,118 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:11 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-02-05 10:47:12,118 - INFO - Response Body: {"thread_id":"23ea25c6-3fe8-4440-9c56-15064bc28756","created_at":"2025-02-05T10:47:12.898690+08:00","updated_at":"2025-02-05T10:47:12.898690+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-02-05 10:47:12,118 - INFO - Thread created with ID: 23ea25c6-3fe8-4440-9c56-15064bc28756
2025-02-05 10:47:16,309 - INFO - Request URL: http://127.0.0.1:8123/threads/23ea25c6-3fe8-4440-9c56-15064bc28756/runs/wait
2025-02-05 10:47:16,309 - INFO - Request Method: POST
2025-02-05 10:47:16,309 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-02-05 10:47:16,309 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-02-05 10:47:16,309 - INFO - Response Status Code: 200
2025-02-05 10:47:16,309 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:11 GMT', 'server': 'uvicorn', 'location': '/threads/23ea25c6-3fe8-4440-9c56-15064bc28756/runs/1efe36b8-00e4-6dc7-b79b-749ce95dec81/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-02-05 10:47:16,309 - INFO - Response Body: {"messages":[{"content":"检索知识:content=None, knowledge=Camera。\n检索节点规则:subgraph=None, node=Rule。","additional_kwargs":{},"response_metadata":{},"type":"human","name":null,"id":"73831e6d-bfac-40c3-b110-303b23b366fd","example":false},{"content":"","additional_kwargs":{"tool_calls":[{"id":"call_CXNHiA2ej1UGzpOaEP02mq1V","function":{"arguments":"{\"content\": null, \"knowledge\": \"Camera\"}","name":"Retrieval-Knowledge"},"type":"function"},{"id":"call_VfMaT2pqdJdjxrQBiuvPa6oI","function":{"arguments":"{\"subgraph\": null, \"node\": \"Rule\"}","name":"Query-Rules"},"type":"function"}],"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":58,"prompt_tokens":103,"total_tokens":161,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"tool_calls","logprobs":null},"type":"ai","name":null,"id":"run-6da1b47b-05b0-4a05-a8bd-f37a29562d9c-0","example":false,"tool_calls":[{"name":"Retrieval-Knowledge","args":{"content":null,"knowledge":"Camera"},"id":"call_CXNHiA2ej1UGzpOaEP02mq1V","type":"tool_call"},{"name":"Query-Rules","args":{"subgraph":null,"node":"Rule"},"id":"call_VfMaT2pqdJdjxrQBiuvPa6oI","type":"tool_call"}],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":103,"output_tokens":58,"total_tokens":161,"input_token_details":{},"output_token_details":{}}},{"content":"完成节点知识检索:None","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Retrieval-Knowledge","id":"500daae0-4d3f-4500-9545-af53120d1484","tool_call_id":"call_CXNHiA2ej1UGzpOaEP02mq1V","artifact":null,"status":"success"},{"content":"完成节点(None/Rule)规则查询","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"Query-Rules","id":"9217ba4b-a752-45df-98eb-63954828fe19","tool_call_id":"call_VfMaT2pqdJdjxrQBiuvPa6oI","artifact":null,"status":"success"},{"content":"检索结果:\n\n1. 知识检索(Camera):完成节点知识检索,结果为 None。\n2. 节点规则查询(Rule):完成节点 (None/Rule) 规则查询。","additional_kwargs":{"refusal":null},"response_metadata":{"token_usage":{"completion_tokens":46,"prompt_tokens":243,"total_tokens":289,"completion_tokens_details":null,"prompt_tokens_details":null},"model_name":"gpt-4o-2024-08-06","system_fingerprint":"fp_f3927aa00d","finish_reason":"stop","logprobs":null},"type":"ai","name":null,"id":"run-d4eb4d19-78a7-4c5f-a062-ad32d4377c7b-0","example":false,"tool_calls":[],"invalid_tool_calls":[],"usage_metadata":{"input_tokens":243,"output_tokens":46,"total_tokens":289,"input_token_details":{},"output_token_details":{}}}],"knowledge":"Camera","node":"Rule","chunks":[],"rules":[]}
2025-02-05 10:47:16,309 - ERROR - Exception occurred: 'run_id'
2025-02-05 10:47:16,310 - INFO - Test run 3/5 completed
2025-02-05 10:47:16,310 - INFO - Starting test run 4/5
2025-02-05 10:47:16,354 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-02-05 10:47:16,354 - INFO - Request Method: POST
2025-02-05 10:47:16,354 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-02-05 10:47:16,354 - INFO - Request Body: b'{}'
2025-02-05 10:47:16,354 - INFO - Response Status Code: 200
2025-02-05 10:47:16,354 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:15 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-02-05 10:47:16,354 - INFO - Response Body: {"thread_id":"a98e1723-9a98-4e3b-8095-dc79a88943ea","created_at":"2025-02-05T10:47:17.135094+08:00","updated_at":"2025-02-05T10:47:17.135094+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-02-05 10:47:16,354 - INFO - Thread created with ID: a98e1723-9a98-4e3b-8095-dc79a88943ea
2025-02-05 10:47:17,724 - INFO - Request URL: http://127.0.0.1:8123/threads/a98e1723-9a98-4e3b-8095-dc79a88943ea/runs/wait
2025-02-05 10:47:17,724 - INFO - Request Method: POST
2025-02-05 10:47:17,724 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-02-05 10:47:17,724 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-02-05 10:47:17,724 - INFO - Response Status Code: 200
2025-02-05 10:47:17,724 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:15 GMT', 'server': 'uvicorn', 'location': '/threads/a98e1723-9a98-4e3b-8095-dc79a88943ea/runs/1efe36b8-2949-6b8e-b238-7f583356ec8c/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-02-05 10:47:17,724 - INFO - Response Body: {"__error__":{"error":"HTTPException","message":"404: Graph 'rule' not found"}}
2025-02-05 10:47:17,724 - ERROR - Exception occurred: 'run_id'
2025-02-05 10:47:17,725 - INFO - Test run 4/5 completed
2025-02-05 10:47:17,725 - INFO - Starting test run 5/5
2025-02-05 10:47:17,749 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-02-05 10:47:17,749 - INFO - Request Method: POST
2025-02-05 10:47:17,749 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-02-05 10:47:17,749 - INFO - Request Body: b'{}'
2025-02-05 10:47:17,749 - INFO - Response Status Code: 200
2025-02-05 10:47:17,749 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:17 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-02-05 10:47:17,750 - INFO - Response Body: {"thread_id":"053ee47f-3bc3-4f21-a343-9ae8096ed387","created_at":"2025-02-05T10:47:18.530148+08:00","updated_at":"2025-02-05T10:47:18.530148+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-02-05 10:47:17,750 - INFO - Thread created with ID: 053ee47f-3bc3-4f21-a343-9ae8096ed387
2025-02-05 10:47:19,100 - INFO - Request URL: http://127.0.0.1:8123/threads/053ee47f-3bc3-4f21-a343-9ae8096ed387/runs/wait
2025-02-05 10:47:19,100 - INFO - Request Method: POST
2025-02-05 10:47:19,100 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-02-05 10:47:19,100 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-02-05 10:47:19,100 - INFO - Response Status Code: 200
2025-02-05 10:47:19,100 - INFO - Response Headers: {'date': 'Wed, 05 Feb 2025 02:47:17 GMT', 'server': 'uvicorn', 'location': '/threads/053ee47f-3bc3-4f21-a343-9ae8096ed387/runs/1efe36b8-3698-6ff4-b36b-93809f3a801b/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-02-05 10:47:19,100 - INFO - Response Body: {"__error__":{"error":"HTTPException","message":"404: Graph 'rule' not found"}}
2025-02-05 10:47:19,101 - ERROR - Exception occurred: 'run_id'
2025-02-05 10:47:19,101 - INFO - Test run 5/5 completed
  • Docker logs
2025-02-05 10:47:01 2025-02-05T02:47:01.283169Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:47:03 2025-02-05T02:47:03.225826Z [info     ] POST /threads 200 48ms         [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=48 method=POST path=/threads path_params={} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '2', 'content-type': 'application/json'} res_header={'content-length': '220', 'content-type': 'application/json'} route=/threads status=200
2025-02-05 10:47:03 2025-02-05T02:47:03.283180Z [info     ] Created run                    [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b7-ac17-6434-a83b-3573a638bcd1 thread_id=bd8084f7-953f-4e91-a280-ec28bad06acb
2025-02-05 10:47:03 2025-02-05T02:47:03.283691Z [info     ] Joined run stream              [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b7-ac17-6434-a83b-3573a638bcd1 thread_id=bd8084f7-953f-4e91-a280-ec28bad06acb
2025-02-05 10:47:04 2025-02-05T02:47:04.418441Z [info     ] Starting background run        [langgraph_api.queue] api_revision=6148670 api_variant=local run_attempt=1 run_created_at=2025-02-05T10:47:05.058415+08:00 run_id=1efe36b7-ac17-6434-a83b-3573a638bcd1 run_queue_ms=-640 run_started_at=2025-02-05T02:47:04.418007+00:00
2025-02-05 10:47:06 2025-02-05T02:47:06.460961Z [info     ] GET /ok 200 1ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=1 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:47:06 2025-02-05T02:47:06.809050Z [info     ] HTTP Request: POST https://***/openai/v1/chat/completions "HTTP/1.1 200 " [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:47:06 None/Camera 查询知识失败: {"success":false,"err_code":"E0001","err_msg":"body.messages.str:Input should be a valid string;body.messages.list[str]:Input should be a valid list;","data":null}
2025-02-05 10:47:08 2025-02-05T02:47:08.221998Z [info     ] HTTP Request: POST https://***/openai/v1/chat/completions "HTTP/1.1 200 " [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:47:08 2025-02-05T02:47:08.269256Z [info     ] Background run succeeded       [langgraph_api.queue] api_revision=6148670 api_variant=local run_attempt=1 run_created_at=2025-02-05T10:47:05.058415+08:00 run_ended_at=2025-02-05T02:47:08.269096+00:00 run_exec_ms=3851 run_id=1efe36b7-ac17-6434-a83b-3573a638bcd1 run_started_at=2025-02-05T02:47:04.418007+00:00
2025-02-05 10:47:08 2025-02-05T02:47:08.416388Z [info     ] POST /threads/bd8084f7-953f-4e91-a280-ec28bad06acb/runs/wait 200 5188ms [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=5188 method=POST path=/threads/bd8084f7-953f-4e91-a280-ec28bad06acb/runs/wait path_params={'thread_id': 'bd8084f7-953f-4e91-a280-ec28bad06acb'} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '126', 'content-type': 'application/json'} res_header={'location': '/threads/bd8084f7-953f-4e91-a280-ec28bad06acb/runs/1efe36b7-ac17-6434-a83b-3573a638bcd1/join', 'content-type': 'application/json'} route=/threads/{thread_id}/runs/wait status=200
2025-02-05 10:47:08 2025-02-05T02:47:08.463689Z [info     ] POST /threads 200 45ms         [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=45 method=POST path=/threads path_params={} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '2', 'content-type': 'application/json'} res_header={'content-length': '220', 'content-type': 'application/json'} route=/threads status=200
2025-02-05 10:47:08 2025-02-05T02:47:08.508933Z [info     ] Created run                    [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b7-de0a-6183-8fcf-b11fe2f9ba63 thread_id=50ac398a-8533-4e57-818f-a30f2bc38aa3
2025-02-05 10:47:08 2025-02-05T02:47:08.543490Z [info     ] Joined run stream              [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b7-de0a-6183-8fcf-b11fe2f9ba63 thread_id=50ac398a-8533-4e57-818f-a30f2bc38aa3
2025-02-05 10:47:09 2025-02-05T02:47:09.653082Z [info     ] Starting background run        [langgraph_api.queue] api_revision=6148670 api_variant=local run_attempt=1 run_created_at=2025-02-05T10:47:10.289520+08:00 run_id=1efe36b7-de0a-6183-8fcf-b11fe2f9ba63 run_queue_ms=-636 run_started_at=2025-02-05T02:47:09.652809+00:00
2025-02-05 10:47:10 2025-02-05T02:47:10.677751Z [info     ] HTTP Request: POST https://***/openai/v1/chat/completions "HTTP/1.1 200 " [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:47:10 None/Camera 查询知识失败: {"success":false,"err_code":"E0001","err_msg":"body.messages.str:Input should be a valid string;body.messages.list[str]:Input should be a valid list;","data":null}
2025-02-05 10:47:11 2025-02-05T02:47:11.639207Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:47:11 2025-02-05T02:47:11.920301Z [info     ] HTTP Request: POST https://***/openai/v1/chat/completions "HTTP/1.1 200 " [httpx] api_revision=6148670 api_variant=local
2025-02-05 10:47:11 2025-02-05T02:47:11.964615Z [info     ] Background run succeeded       [langgraph_api.queue] api_revision=6148670 api_variant=local run_attempt=1 run_created_at=2025-02-05T10:47:10.289520+08:00 run_ended_at=2025-02-05T02:47:11.964498+00:00 run_exec_ms=2311 run_id=1efe36b7-de0a-6183-8fcf-b11fe2f9ba63 run_started_at=2025-02-05T02:47:09.652809+00:00
2025-02-05 10:47:12 2025-02-05T02:47:12.093410Z [info     ] POST /threads/50ac398a-8533-4e57-818f-a30f2bc38aa3/runs/wait 200 3628ms [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=3628 method=POST path=/threads/50ac398a-8533-4e57-818f-a30f2bc38aa3/runs/wait path_params={'thread_id': '50ac398a-8533-4e57-818f-a30f2bc38aa3'} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '126', 'content-type': 'application/json'} res_header={'location': '/threads/50ac398a-8533-4e57-818f-a30f2bc38aa3/runs/1efe36b7-de0a-6183-8fcf-b11fe2f9ba63/join', 'content-type': 'application/json'} route=/threads/{thread_id}/runs/wait status=200
2025-02-05 10:47:12 2025-02-05T02:47:12.118420Z [info     ] POST /threads 200 23ms         [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=23 method=POST path=/threads path_params={} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '2', 'content-type': 'application/json'} res_header={'content-length': '220', 'content-type': 'application/json'} route=/threads status=200
2025-02-05 10:47:12 2025-02-05T02:47:12.141756Z [info     ] Created run                    [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-00e4-6dc7-b79b-749ce95dec81 thread_id=23ea25c6-3fe8-4440-9c56-15064bc28756
2025-02-05 10:47:12 2025-02-05T02:47:12.195934Z [info     ] Joined run stream              [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-00e4-6dc7-b79b-749ce95dec81 thread_id=23ea25c6-3fe8-4440-9c56-15064bc28756
2025-02-05 10:47:16 2025-02-05T02:47:16.309472Z [info     ] POST /threads/23ea25c6-3fe8-4440-9c56-15064bc28756/runs/wait 200 4188ms [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=4188 method=POST path=/threads/23ea25c6-3fe8-4440-9c56-15064bc28756/runs/wait path_params={'thread_id': '23ea25c6-3fe8-4440-9c56-15064bc28756'} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '126', 'content-type': 'application/json'} res_header={'location': '/threads/23ea25c6-3fe8-4440-9c56-15064bc28756/runs/1efe36b8-00e4-6dc7-b79b-749ce95dec81/join', 'content-type': 'application/json'} route=/threads/{thread_id}/runs/wait status=200
2025-02-05 10:47:16 2025-02-05T02:47:16.353816Z [info     ] POST /threads 200 41ms         [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=41 method=POST path=/threads path_params={} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '2', 'content-type': 'application/json'} res_header={'content-length': '220', 'content-type': 'application/json'} route=/threads status=200
2025-02-05 10:47:16 2025-02-05T02:47:16.379633Z [info     ] Created run                    [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-2949-6b8e-b238-7f583356ec8c thread_id=a98e1723-9a98-4e3b-8095-dc79a88943ea
2025-02-05 10:47:16 2025-02-05T02:47:16.430461Z [info     ] Joined run stream              [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-2949-6b8e-b238-7f583356ec8c thread_id=a98e1723-9a98-4e3b-8095-dc79a88943ea
2025-02-05 10:47:16 2025-02-05T02:47:16.816024Z [info     ] GET /ok 200 1ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=1 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:47:17 2025-02-05T02:47:17.724556Z [info     ] POST /threads/a98e1723-9a98-4e3b-8095-dc79a88943ea/runs/wait 200 1369ms [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=1369 method=POST path=/threads/a98e1723-9a98-4e3b-8095-dc79a88943ea/runs/wait path_params={'thread_id': 'a98e1723-9a98-4e3b-8095-dc79a88943ea'} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '126', 'content-type': 'application/json'} res_header={'location': '/threads/a98e1723-9a98-4e3b-8095-dc79a88943ea/runs/1efe36b8-2949-6b8e-b238-7f583356ec8c/join', 'content-type': 'application/json'} route=/threads/{thread_id}/runs/wait status=200
2025-02-05 10:47:17 2025-02-05T02:47:17.749700Z [info     ] POST /threads 200 23ms         [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=23 method=POST path=/threads path_params={} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '2', 'content-type': 'application/json'} res_header={'content-length': '220', 'content-type': 'application/json'} route=/threads status=200
2025-02-05 10:47:17 2025-02-05T02:47:17.773051Z [info     ] Created run                    [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-3698-6ff4-b36b-93809f3a801b thread_id=053ee47f-3bc3-4f21-a343-9ae8096ed387
2025-02-05 10:47:17 2025-02-05T02:47:17.829103Z [info     ] Joined run stream              [langgraph_storage.ops] api_revision=6148670 api_variant=local run_id=1efe36b8-3698-6ff4-b36b-93809f3a801b thread_id=053ee47f-3bc3-4f21-a343-9ae8096ed387
2025-02-05 10:47:19 2025-02-05T02:47:19.100664Z [info     ] POST /threads/053ee47f-3bc3-4f21-a343-9ae8096ed387/runs/wait 200 1349ms [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=1349 method=POST path=/threads/053ee47f-3bc3-4f21-a343-9ae8096ed387/runs/wait path_params={'thread_id': '053ee47f-3bc3-4f21-a343-9ae8096ed387'} proto=1.1 query_string= req_header={'host': '127.0.0.1:8123', 'user-agent': 'python-requests/2.32.3', 'accept-encoding': 'gzip, deflate', 'accept': '*/*', 'connection': 'keep-alive', 'content-length': '126', 'content-type': 'application/json'} res_header={'location': '/threads/053ee47f-3bc3-4f21-a343-9ae8096ed387/runs/1efe36b8-3698-6ff4-b36b-93809f3a801b/join', 'content-type': 'application/json'} route=/threads/{thread_id}/runs/wait status=200
2025-02-05 10:47:20 2025-02-05T02:47:20.418082Z [info     ] Worker stats                   [langgraph_api.queue] active=0 api_revision=6148670 api_variant=local available=10 max=10
2025-02-05 10:47:20 2025-02-05T02:47:20.463676Z [info     ] Queue stats                    [langgraph_api.queue] api_revision=6148670 api_variant=local med_age_secs=None min_age_secs=None n_pending=0
2025-02-05 10:47:21 2025-02-05T02:47:21.991804Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-05 10:47:27 2025-02-05T02:47:27.185765Z [info     ] GET /ok 200 0ms                [langgraph_api.server] api_revision=6148670 api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.13', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200

@vbarda
Collaborator

vbarda commented Feb 5, 2025

could you also provide the contents of your langgraph.json file?

@Jackoder
Author

Jackoder commented Feb 7, 2025

@vbarda OK

{
  "dependencies": [
    ".",
    "./src/song/",
    "./src/react_agent/",
    "./src/subgraph/camera/",
    "./src/subgraph/scene/",
    "./src/subgraph/intention/",
    "./src/subgraph/character/",
    "./src/subgraph/story/",
    "./src/subgraph/storyboard/",
    "./src/tools/rag/"
  ],
  "graphs": {
    "song": "./src/song/graph.py:graph",
    "react_agent": "./src/react_agent/graph.py:graph",
    "storyboard": "./src/subgraph/storyboard/graph.py:graph",
    "story": "./src/subgraph/story/graph.py:graph",
    "intention": "./src/subgraph/intention/graph.py:graph",
    "character": "./src/subgraph/character/graph.py:graph",
    "scene": "./src/subgraph/scene/graph.py:graph",
    "camera": "./src/subgraph/camera/graph.py:graph",
    "rule": "./src/subgraph/rule/graph.py:graph",
    "resource": "./src/subgraph/resource/graph.py:graph",
    "self-rag": "./src/tools/rag/self-rag/graph.py:graph"
  },
  "env": "./.env"
}

@andrewnguonly
Contributor

I'm not sure exactly what's going on, but we can try a few things to debug the issue further.

  1. Double checking: While the test script is running, the LangGraph server is not restarting, correct? If you're running the server via LangGraph Studio and changes are being made to files in the root directory of the project, LangGraph Studio may automatically reload the server.
  2. How did you get the assistant ID 0676914a-25a7-595a-a130-6f9e1ad87f7d?
  3. Is there any specific reason for not including ./src/subgraph/rule/ in the dependencies in langgraph.json?
  4. The log line "Thread run initiated with Run ID: {run_id}" is not being logged in run_thread(), which means an exception is being raised before that point. It looks like the exception is being logged as Exception occurred: 'run_id'. Can you update the exception handling to re-raise the exception (i.e. don't catch it)? Maybe the error stack trace will reveal more details.
  5. Speculating: The POST /threads/{thread_id}/runs/wait endpoint returns SSE stream events, not a typical JSON response body (e.g. application/json). This might be causing issues with the implementation of the test script. For example, all the response status codes are 200, but an exception is always raised (according to logs). I recommend trying the LangGraph Python SDK and comparing results.
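For what it's worth, here is a rough sketch of the comparison suggested in point 5, using the LangGraph Python SDK (langgraph_sdk) instead of raw HTTP calls. The URL and assistant ID are copied from the report above; treat this as an illustrative outline rather than a verified reproduction.

import asyncio

from langgraph_sdk import get_client  # pip install langgraph-sdk

async def main():
    client = get_client(url="http://127.0.0.1:8123")

    # Check which graphs/assistants the server has registered.
    assistants = await client.assistants.search()
    print([a["graph_id"] for a in assistants])

    # Create a thread and wait for the run to complete in one call.
    thread = await client.threads.create()
    result = await client.runs.wait(
        thread["thread_id"],
        "0676914a-25a7-595a-a130-6f9e1ad87f7d",  # assistant ID from the report
        input={"knowledge": "Camera", "node": "Rule"},
    )
    print(result)

asyncio.run(main())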

@Jackoder
Author

Jackoder commented Feb 8, 2025

  1. I built the image and ran it with Docker, and I have checked that the server didn't reload automatically.
  2. The assistant ID was looked up from the database, but it doesn't make any difference even if I change it to rule.
  3. I have added it to the dependencies, but that didn't fix it.
  4. I changed the script; maybe it's not relevant to the issue (a sketch of that change follows below).
  5. Our client has to call the server API via plain HTTP requests.

I wonder if there are any issues with PostgreSQL or Redis, such as caching.
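For reference, a minimal sketch of what the change in point 4 could look like: the same execute_test flow as in the example script, with the try/except removed so that any failure (e.g. the KeyError on 'run_id') propagates with a full stack trace. It reuses the helper functions defined above and is only illustrative.

def execute_test():
    # Same flow as the original script, minus the try/except, so any
    # exception surfaces with a full traceback instead of being logged
    # and swallowed.
    thread_id = create_thread()
    retry_count = 0
    run_thread(thread_id)
    status = get_thread_status(thread_id)
    while status != "success":
        if status == "error":
            retry_count += 1
            if retry_count > 3:
                raise RuntimeError("Thread run failed after 3 retries.")
            logging.warning(f"Thread run encountered an error. Retrying {retry_count}/3...")
            run_thread(thread_id)
        time.sleep(2)  # avoid busy-polling the status endpoint
        status = get_thread_status(thread_id)
    return get_thread_result(thread_id)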

@Lzzzii10

I hit the same error; it went away after I restarted the service, but it hasn't been completely fixed.

{
  "__error__": {
    "error": "HTTPException",
    "message": "404: Graph 'chatbot' not found"
  }
}

@Jackoder
Author

After deploying PostgreSQL in the same network environment as the server, the problem no longer occurred.
