diff --git a/README.md b/README.md
index ce0a08591..3bd8c6020 100644
--- a/README.md
+++ b/README.md
@@ -11,11 +11,11 @@
**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides over 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research such as AI-for-Systems work on compilers.
-## News
+## 📣 News
- [2025-10-14] ✨ Our technical report is out: a detailed study of dataset construction and compiler benchmarking, introducing the novel performance metrics Speedup Score S(t) and Error-aware Speedup Score ES(t). [📘 GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research](./GraphNet_technical_report.pdf)
- [2025-08-20] 🚀 The second round of [open contribution tasks](https://github.com/PaddlePaddle/Paddle/issues/74773) was released. (completed ✅)
- [2025-07-30] 🚀 The first round of [open contribution tasks](https://github.com/PaddlePaddle/GraphNet/issues/44) was released. (completed ✅)
-## Benchmark Results
+## 📊 Benchmark Results
We evaluate two representative tensor compiler backends, CINN (PaddlePaddle) and TorchInductor (PyTorch), on GraphNet's NLP and CV subsets. The evaluation adopts two quantitative metrics proposed in the [Technical Report](./GraphNet_technical_report.pdf):
- **Speedup Score** S(t) — evaluates compiler performance under varying numerical tolerance levels.
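The exact definitions of S(t) and ES(t) live in the technical report; as a purely hypothetical sketch of the underlying idea (credit a graph's speedup only when its numerical error stays within tolerance `t`), one might write:

```python
import math

def speedup_score(baseline_ms, compiled_ms, max_err, t):
    """Toy tolerance-aware speedup: geometric mean over graphs, where a
    graph's speedup counts only if its numerical error is within t.
    This is an illustration, NOT the S(t) defined in the report."""
    logs = []
    for base, comp, err in zip(baseline_ms, compiled_ms, max_err):
        ratio = base / comp if err <= t else 1.0  # tolerance failed: no credit
        logs.append(math.log(ratio))
    return math.exp(sum(logs) / len(logs))
```

Tightening `t` can only lower such a score, which is the intuition behind reporting compiler performance across tolerance levels.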
@@ -28,7 +28,7 @@ We evaluate two representative tensor compiler backends, CINN (PaddlePaddle) and
-## Quick Start
+## ⚡ Quick Start
This section shows how to evaluate tensor compilers and reproduce benchmark results (for compiler users and developers),
as well as how to contribute new computation graphs (for GraphNet contributors).
@@ -97,12 +97,12 @@ python -m graph_net.plot_violin \
The scripts expect a file structure of the form `/benchmark_path/category_name/`; items on the x-axis are identified by the names of the sub-directories. After execution, summary plots of the results, grouped by category (model task, library, ...), are exported to `$GRAPH_NET_BENCHMARK_PATH`.
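As a minimal sketch of that directory convention (the `/benchmark_path/category_name/` layout comes from the text above; the function and its behavior are illustrative, not graph_net's actual plotting code):

```python
from collections import defaultdict
from pathlib import Path

def group_results_by_category(benchmark_path):
    """Collect result files under /benchmark_path/category_name/,
    keyed by the sub-directory name that becomes the x-axis item.
    Illustrative sketch only."""
    groups = defaultdict(list)
    for item in Path(benchmark_path).glob("*/*"):
        groups[item.parent.name].append(item.name)
    return {category: sorted(names) for category, names in groups.items()}
```

Any sub-directory you add under the benchmark path would then show up as a new category on the x-axis.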
-## 🧱 Construction & Contribution Guide
+### 🧱 Construction & Contribution Guide
Want to understand how GraphNet is built or contribute new samples?
Check out the [Construction Guide](./docs/README_contribute.md) for details on the extraction and validation workflow.
-## Future Roadmap
+## 🚀 Future Roadmap
1. Scale GraphNet to 10K+ graphs.
2. Further annotate GraphNet samples into more granular sub-categories
@@ -136,7 +136,7 @@ GraphNet is released under the [MIT License](./LICENSE).
If you find this project helpful, please cite:
```bibtex
-@article{li2025graphnet,
+@misc{li2025graphnet,
title = {GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research},
author = {Xinqi Li and Yiqun Liu and Shan Jiang and Enrong Zheng and Huaijin Zheng and Wenhao Dai and Haodong Deng and Dianhai Yu and Yanjun Ma},
year = {2025},
diff --git a/CONTRIBUTE_TUTORIAL.md b/docs/CONTRIBUTE_TUTORIAL.md
similarity index 100%
rename from CONTRIBUTE_TUTORIAL.md
rename to docs/CONTRIBUTE_TUTORIAL.md
diff --git a/CONTRIBUTE_TUTORIAL_cn.md b/docs/CONTRIBUTE_TUTORIAL_cn.md
similarity index 100%
rename from CONTRIBUTE_TUTORIAL_cn.md
rename to docs/CONTRIBUTE_TUTORIAL_cn.md
diff --git a/docs/README_contribute.md b/docs/README_contribute.md
index 1caaa4812..a3496309d 100644
--- a/docs/README_contribute.md
+++ b/docs/README_contribute.md
@@ -65,4 +65,23 @@ python -m graph_net.torch.validate \
--model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name
```
-All the **construction constraints** will be examined automatically. After passing validation, a unique `graph_hash.txt` will be generated and later checked in CI procedure to avoid redundant.
\ No newline at end of file
+All the **construction constraints** are examined automatically. After passing validation, a unique `graph_hash.txt` is generated and later checked in the CI pipeline to avoid redundant samples.
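The dedup check can be pictured roughly as follows (a sketch under assumptions: the actual hashing scheme behind `graph_hash.txt` and the CI logic may differ):

```python
import hashlib

def graph_hash(graph_source: str) -> str:
    """Hash normalized graph source so textually identical graphs
    produce identical hashes. Illustrative; not GraphNet's actual scheme."""
    normalized = "\n".join(line.rstrip() for line in graph_source.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def is_duplicate(candidate: str, known_hashes: set) -> bool:
    """CI-style check: reject a sample whose hash is already registered."""
    return graph_hash(candidate) in known_hashes
```

A stable hash like this lets CI compare a new sample against all previously merged ones without diffing full graph sources.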
+
+## 📁 Repository Structure
+This repository is organized as follows:
+
+| Directory | Description |
+|------------|--------------|
+| **graph_net/** | Core module for graph extraction, validation, and benchmarking |
+| **paddle_samples/** | Computation graph samples extracted from PaddlePaddle |
+| **samples/** | Computation graph samples extracted from PyTorch |
+| **docs/** | Technical documents and contributor guides |
+
+Below is the structure of the **graph_net/** directory:
+```text
+graph_net/
+ ├─ config/ # Config files, params
+ ├─ paddle/ # PaddlePaddle graph extraction & validation
+ ├─ torch/ # PyTorch graph extraction & validation
+ ├─ test/ # Unit tests and example scripts
+ └─ *.py # Benchmark & analysis scripts
+```