
🤗 HuggingFace ｜ 🤖 ModelScope
We present Tongyi DeepResearch, an agentic large language model with 30.5 billion total parameters, of which only 3.3 billion are activated per token. Developed by Tongyi Lab, the model is specifically designed for long-horizon, deep information-seeking tasks. Tongyi DeepResearch demonstrates state-of-the-art performance across a range of agentic search benchmarks, including Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, xbench-DeepSearch, FRAMES and SimpleQA.
Tongyi DeepResearch builds upon our previous work on the WebAgent project.
More details can be found in our 📰 Tech Blog.
- Fully automated synthetic data generation pipeline: We design a highly scalable, fully automated data synthesis pipeline that powers agentic pre-training, supervised fine-tuning, and reinforcement learning.
- Large-scale continual pre-training on agentic data: We leverage diverse, high-quality agentic interaction data to extend model capabilities, maintain freshness, and strengthen reasoning performance.
- End-to-end reinforcement learning: We employ a strictly on-policy RL approach based on a customized Group Relative Policy Optimization framework, with token-level policy gradients, leave-one-out advantage estimation, and selective filtering of negative samples to stabilize training in a non-stationary environment (see the sketch after this list).
- Agent inference paradigm compatibility: At inference time, Tongyi DeepResearch supports two paradigms: ReAct, for rigorously evaluating the model's core intrinsic abilities, and an IterResearch-based "Heavy" mode, which uses a test-time scaling strategy to unlock the model's maximum performance ceiling.
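To make the leave-one-out advantage estimation concrete, here is a minimal sketch of the idea, assuming a group of rollouts sampled for the same query, each with one scalar reward. The function name and shapes are illustrative only, not the released training code:

```python
# Minimal sketch of leave-one-out advantage estimation; illustrative only,
# not the released training code.
import numpy as np

def leave_one_out_advantages(group_rewards: np.ndarray) -> np.ndarray:
    """For G rollouts of the same query, baseline each reward against the
    mean of the other G - 1 rewards: A_i = r_i - mean_{j != i}(r_j)."""
    g = group_rewards.shape[0]
    baselines = (group_rewards.sum() - group_rewards) / (g - 1)
    return group_rewards - baselines

rewards = np.array([1.0, 0.0, 1.0, 0.0])  # e.g., per-rollout task success
print(leave_one_out_advantages(rewards))  # approx. [ 0.667 -0.667  0.667 -0.667]
```

In the token-level policy-gradient update described above, every token of rollout i would then share the scalar advantage A_i.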
You can directly download the model by following the links below.
| Model | Download Links | Model Size | Context Length |
|---|---|---|---|
| Tongyi-DeepResearch-30B-A3B | 🤗 HuggingFace ｜ 🤖 ModelScope | 30B-A3B | 128K |
[2025/09/17] 🔥 We have released Tongyi-DeepResearch-30B-A3B.
This guide provides instructions for setting up the environment and running the inference scripts located in the `inference` folder.
- Recommended Python version: 3.10.0 (using other versions may cause dependency issues).
- It is strongly advised to create an isolated environment using `conda` or `virtualenv`.
```bash
# Example with Conda
conda create -n react_infer_env python=3.10.0
conda activate react_infer_env
```
Install the required dependencies:
```bash
pip install -r requirements.txt
```
Configure your API keys and settings by copying the example environment file:
```bash
# Copy the example environment file
cp .env.example .env
```
Edit the `.env` file and provide your actual API keys and configuration values:
- `SERPER_KEY_ID`: Get your key from Serper.dev for web search and Google Scholar.
- `JINA_API_KEYS`: Get your key from Jina.ai for web page reading.
- `API_KEY` / `API_BASE`: Credentials for an OpenAI-compatible API, used for page summarization.
- `DASHSCOPE_API_KEY`: Get your key from DashScope for file parsing.
- `SANDBOX_FUSION_ENDPOINT`: Python interpreter sandbox endpoints (see SandboxFusion).
- `MODEL_PATH`: Path to your model weights.
- `DATASET`: Name of your evaluation dataset.
- `OUTPUT_PATH`: Directory for saving results.
Note: The `.env` file is gitignored, so your secrets will not be committed to the repository.
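For reference, a filled-in `.env` might look like the minimal sketch below; the variable names come from the list above, and every value is a placeholder to replace with your own:

```bash
# Placeholder values only; substitute your own keys and paths.
SERPER_KEY_ID=your-serper-key
JINA_API_KEYS=your-jina-key
API_KEY=your-openai-compatible-key
API_BASE=https://your-api-base/v1
DASHSCOPE_API_KEY=your-dashscope-key
SANDBOX_FUSION_ENDPOINT=http://your-sandbox-host:8080
MODEL_PATH=/path/to/Tongyi-DeepResearch-30B-A3B
DATASET=my_questions
OUTPUT_PATH=./outputs
```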
The system supports two input file formats: JSON and JSONL.
Option 1: JSONL Format (recommended)

- Create your data file with a `.jsonl` extension (e.g., `my_questions.jsonl`).
- Each line must be a valid JSON object with `question` and `answer` keys:

```
{"question": "What is the capital of France?", "answer": "Paris"}
{"question": "Explain quantum computing", "answer": ""}
```
Option 2: JSON Format

- Create your data file with a `.json` extension (e.g., `my_questions.json`).
- File must contain a JSON array of objects, each with `question` and `answer` keys:

```json
[
  {"question": "What is the capital of France?", "answer": "Paris"},
  {"question": "Explain quantum computing", "answer": ""}
]
```
Important Note: The `answer` field contains the ground-truth/reference answer used for evaluation. The system generates its own responses to the questions, and these reference answers are used to automatically judge the quality of the generated responses during benchmark evaluation.
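If you want to sanity-check a data file before launching a run, a small validator along these lines handles both formats (a sketch; `load_eval_data` is a hypothetical helper, not part of this repository):

```python
# Sketch of a validator for both input formats; load_eval_data is a
# hypothetical helper, not part of this repository.
import json
from pathlib import Path

def load_eval_data(path: str) -> list[dict]:
    """Load a .json or .jsonl eval file and check the required keys."""
    p = Path(path)
    if p.suffix == ".jsonl":
        records = [json.loads(line) for line in p.read_text().splitlines() if line.strip()]
    elif p.suffix == ".json":
        records = json.loads(p.read_text())
    else:
        raise ValueError(f"Unsupported extension: {p.suffix}")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict) or not {"question", "answer"} <= rec.keys():
            raise ValueError(f"Record {i} is missing 'question' or 'answer'")
    return records

print(len(load_eval_data("eval_data/my_questions.jsonl")), "records OK")
```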
- If using the file parser tool, prepend the filename to the `question` field.
- Place referenced files in the `eval_data/file_corpus/` directory.
- Example:

```json
{"question": "report.pdf What are the key findings?", "answer": "..."}
```
```
project_root/
├── eval_data/
│   ├── my_questions.jsonl   # Your evaluation data
│   └── file_corpus/         # Referenced documents
│       ├── report.pdf
│       └── data.xlsx
```
- Open `run_react_infer.sh` and modify the following variables as instructed in the comments:
  - `MODEL_PATH`: path to the local or remote model weights.
  - `DATASET`: full path to your evaluation file, e.g. `eval_data/my_questions.jsonl` or `/path/to/my_questions.json`.
  - `OUTPUT_PATH`: path for saving the prediction results, e.g. `./outputs`.
- Depending on the tools you enable (retrieval, calculator, web search, etc.), provide the required `API_KEY`, `BASE_URL`, or other credentials. Each key is explained inline in the bash script. A sketch of how these variables might look after editing follows this list.
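For instance, the edited variables might look like this hypothetical excerpt, where all paths are placeholders:

```bash
# Hypothetical excerpt of run_react_infer.sh after editing; paths are placeholders.
MODEL_PATH=/path/to/Tongyi-DeepResearch-30B-A3B
DATASET=eval_data/my_questions.jsonl
OUTPUT_PATH=./outputs
```

Then launch the run: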
```bash
bash run_react_infer.sh
```
With these steps, you can fully prepare the environment, configure the dataset, and run the model. For more details, consult the inline comments in each script or open an issue.
Tongyi-DeepResearch-30B-A3B is now available on OpenRouter, so you can run inference without any GPUs.
You need to modify the following in the file `inference/react_agent.py`:

- In the `call_server` function, set the API key and URL to your OpenRouter account's API key and endpoint URL.
- Change the model name to `alibaba/tongyi-deepresearch-30b-a3b`.
- Adjust the content concatenation as described in the comments on lines 88–90 (see the sketch after this list).
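For orientation, a minimal OpenAI-compatible call to OpenRouter follows the pattern below; this is an illustrative sketch, not the repository's `call_server` implementation:

```python
# Illustrative sketch of an OpenAI-compatible chat call via OpenRouter;
# not the repository's call_server implementation.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",  # placeholder
)

response = client.chat.completions.create(
    model="alibaba/tongyi-deepresearch-30b-a3b",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```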
We provide benchmark evaluation scripts for various datasets. Please refer to the evaluation scripts directory for more details.
Tongyi DeepResearch also has an extensive deep research agent family. You can find more information in the following papers:
[1] WebWalker: Benchmarking LLMs in Web Traversal (ACL 2025)
[2] WebDancer: Towards Autonomous Information Seeking Agency (NeurIPS 2025)
[3] WebSailor: Navigating Super-human Reasoning for Web Agent
[4] WebShaper: Agentically Data Synthesizing via Information-Seeking Formalization
[5] WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent
[6] WebResearcher: Unleashing Unbounded Reasoning Capability in Long-Horizon Agents
[7] ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization
[8] WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research
[9] WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning
[10] Scaling Agents via Continual Pre-training
[11] Towards General Agentic Intelligence via Environment Scaling
🔥🔥🔥 We are hiring! Research intern positions are open (based in Hangzhou, Beijing, or Shanghai).
Research Areas: Web Agent, Search Agent, Agent RL, Multi-Agent RL, Agentic RAG
Contact: [email protected]
For communications, please contact Yong Jiang ([email protected]).
```bibtex
@misc{tongyidr,
  author={Tongyi DeepResearch Team},
  title={Tongyi-DeepResearch},
  year={2025},
  howpublished={\url{https://github.com/Alibaba-NLP/DeepResearch}}
}
```