This repository provides scripts and instructions for replicating the experiments from our paper, *m4: A Learned Flow-level Network Simulator*. It includes everything needed to reproduce the results reported in Sections 5.2 and 5.3 of the paper.
- Quick Reproduction
- Setup and Installation
- Running Experiments from Scratch
- Training Your Own Model
- Repository Structure
- Citation
- Acknowledgments
- Contact
To quickly reproduce the results in the paper, follow these steps:
- Clone the repository and initialize submodules:

  ```bash
  git clone https://github.com/netiken/m4.git
  cd m4
  git submodule update --init --recursive
  ```

- Run the evaluation notebook to replicate the results from Sections 5.2 and 5.3:

  ```bash
  jupyter notebook plot_eval.ipynb
  ```
Ensure you have the following installed:
- Python 3
- Rust & Cargo
- gcc-9
- Set up the Python environment:

  ```bash
  conda env create -f environment.yml
  conda activate m4
  ```

- Install Rust and Cargo:

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  rustup install nightly
  rustup default nightly
  ```

- Install gcc-9:

  ```bash
  sudo apt-get install gcc-9 g++-9
  ```

- Set up ns-3 for data generation:

  ```bash
  cd High-Precision-Congestion-Control/ns-3.39
  ./configure
  ```
The pre-trained checkpoints for the full m4 pipeline are available in the XXX
directory. You can use them directly or train your own model (see Training Your Own Model).
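If you want to sanity-check a downloaded checkpoint before plugging it into the pipeline, a minimal sketch is below. It assumes the checkpoints are ordinary PyTorch files; the path is a placeholder, so substitute the actual file name from the checkpoint directory:

```python
import torch

# Placeholder path -- point this at one of the released checkpoint files.
ckpt = torch.load("checkpoints/m4_pretrained.pt", map_location="cpu")

# Checkpoints are commonly either a raw state dict or a dict containing one.
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt

# List the parameter tensors and their shapes to confirm the file loaded.
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```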
To run the fig_7 experiments, generate both the ns-3 results and the m4 estimates (the `mlsys` backend) for each workload mix:

```bash
cd parsimon-eval/expts/fig_7
cargo run --release -- --root=./data --mix spec/0.mix.json ns3
cargo run --release -- --root=./data --mix spec/1.mix.json ns3
cargo run --release -- --root=./data --mix spec/2.mix.json ns3
cargo run --release -- --root=./data --mix spec/0.mix.json mlsys
cargo run --release -- --root=./data --mix spec/1.mix.json mlsys
cargo run --release -- --root=./data --mix spec/2.mix.json mlsys
```
Then, visualize the results using:

```bash
jupyter notebook plot_eval.ipynb
```
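`plot_eval.ipynb` produces the figures used in the paper. If you only want a quick numerical comparison between the ns-3 baseline and the m4 estimates, the sketch below shows the general idea: compare tail percentiles of per-flow FCT slowdowns. The file names and on-disk format are placeholders, not the actual output layout of the evaluation harness.

```python
import numpy as np

# Placeholder inputs: per-flow FCT slowdowns for the same mix, one array from
# the ns-3 run and one from the m4 (mlsys) run, exported from the ./data results.
ns3_slowdowns = np.loadtxt("data/0/ns3_slowdowns.txt")
m4_slowdowns = np.loadtxt("data/0/m4_slowdowns.txt")

for p in (50, 90, 99):
    ref = np.percentile(ns3_slowdowns, p)
    est = np.percentile(m4_slowdowns, p)
    print(f"p{p}: ns-3={ref:.2f}, m4={est:.2f}, relative error={abs(est - ref) / ref:.1%}")
```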
To run the fig_8 experiments on the `eval_test` scenarios:

```bash
cd parsimon-eval/expts/fig_8
cargo run --release -- --root=./eval_test --mixes spec/eval_test.mix.json --NR_FLOWS 20000 ns3
cargo run --release -- --root=./eval_test --mixes spec/eval_test.mix.json --NR_FLOWS 20000 mlsys
```
Then, visualize the results using:

```bash
jupyter notebook plot_eval.ipynb
```
To run the fig_8 experiments on the `eval_app` scenarios:

```bash
cd parsimon-eval/expts/fig_8
cargo run --release -- --root=./eval_app --mixes spec/eval_app.mix.json --NR_FLOWS 20000 ns3
cargo run --release -- --root=./eval_app --mixes spec/eval_app.mix.json --NR_FLOWS 20000 mlsys
```
Then, visualize the results using:

```bash
jupyter notebook plot_eval.ipynb
```
To train a new model, follow these steps:
- Generate training data:

  ```bash
  cd parsimon-eval/expts/fig_8
  cargo run --release -- --root={dir_to_data} --mixes={config_for_sim_scenarios} ns3
  ```

  Example:

  ```bash
  cargo run --release -- --root=./eval_train --mixes spec/eval_train.mix.json --NR_FLOWS 2000 ns3
  ```
- Train the model:

  - Ensure you are in the correct Python environment.
  - Modify `config/train_config_lstm_topo.yaml` if needed.
  - Run:

    ```bash
    cd m4
    python main_train.py --train_config={path_to_config_file} --mode=train --dir_input={dir_to_save_data} --dir_output={dir_to_save_ckpts} --note={note}
    ```

    Example:

    ```bash
    python main_train.py --train_config=./config/train_config_lstm_topo.yaml --mode=train --dir_input=./parsimon-eval/expts/fig_8/eval_train --dir_output=/data2/lichenni/output_perflow --note m4
    ```
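The actual model and training loop live in `main_train.py` and `util/`. The sketch below is only a rough illustration of the kind of per-flow sequence model that the `train_config_lstm_topo.yaml` name suggests: an LSTM mapping per-flow feature sequences to slowdown predictions. All dimensions, feature counts, and the loss are assumptions made for the example, not m4's actual architecture.

```python
import torch
import torch.nn as nn

class PerFlowLSTM(nn.Module):
    """Illustrative stand-in: an LSTM that reads a sequence of per-flow feature
    vectors and predicts one slowdown value per flow. Not the actual m4 model."""

    def __init__(self, n_features: int = 8, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)               # x: (batch, n_flows, n_features)
        return self.head(out).squeeze(-1)   # (batch, n_flows)

# Toy training step on random data, just to show the shapes involved.
model = PerFlowLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(4, 100, 8)           # 4 scenarios x 100 flows x 8 features
targets = 1.0 + 5.0 * torch.rand(4, 100)    # fake slowdowns in [1, 6)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(features), targets)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.4f}")
```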
```text
├── config                              # Configuration files for training and testing m4
├── High-Precision-Congestion-Control   # HPCC repository for data generation
├── parsimon-eval                       # Scripts to reproduce m4 experiments and comparisons
├── util                                # Utility functions for m4, including data loaders and ML model implementations
└── main_train.py                       # Main script for training and testing m4
```
If you find our work useful, please cite our paper:
```bibtex
@inproceedings{m4,
  author = {Li, Chenning and Zabreyko, Anton and Nasr-Esfahany, Arash and Zhao, Kevin and Goyal, Prateesh and Alizadeh, Mohammad and Anderson, Thomas},
  title  = {m4: A Learned Flow-level Network Simulator},
  year   = {2025},
}
```
We extend special thanks to Kevin Zhao and Thomas Anderson for their insights in the NSDI '23 paper *Scalable Tail Latency Estimation for Data Center Networks*. Their source code is available in the Parsimon repository.
For further inquiries, reach out to Chenning Li at:
📧 [email protected]