In the edge category, ResNet50 has the Offline, SingleStream, and MultiStream scenarios; in the datacenter category, it has the Offline and Server scenarios.
Please check the MLPerf inference GitHub repository for more details.
Run using the MLCommons CM framework
Since February 2024, we suggest you use this GUI to configure the MLPerf inference benchmark, generate the CM commands to run it across different implementations, models, datasets, software, and hardware, and prepare your submissions.
The full ImageNet dataset is needed to make image-classification submissions for MLPerf inference. Since this dataset is not publicly available via a URL, please follow the instructions given here to download the dataset and register it in CM.
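As an illustrative sketch, registering a locally downloaded ImageNet 2012 validation set with CM might look like the following. The exact script tags come from the CM script catalog and may differ in your CM version, and the dataset path is a placeholder:

```shell
# Illustrative sketch: register a manually downloaded ImageNet 2012
# validation set with CM. The tags follow the CM script catalog
# (verify with `cm search script --tags=get,dataset,imagenet`);
# /data/imagenet-2012-val is a placeholder for your local path.
cm run script --tags=get,dataset,imagenet,validation,original,_full \
    --input=/data/imagenet-2012-val
```

Once registered, later CM scripts that depend on ImageNet will resolve the dataset automatically instead of prompting for it.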
Install MLCommons CM automation framework with automation recipes for MLPerf as described here.
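Per the CM setup guide, the installation typically boils down to installing the `cmind` Python package and pulling the MLCommons repository that contains the automation recipes:

```shell
# Install the CM automation framework (shipped as the "cmind"
# Python package) and pull the MLCommons repository with the
# MLPerf automation recipes.
pip install cmind
cm pull repo mlcommons@ck
```

Prerequisites such as Python 3.7+, git, and wget are assumed to be present; see the linked installation guide for platform-specific notes.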
The following guides explain how to run different implementations of this benchmark via CM:
- MLCommons Reference implementation in Python
- NVIDIA optimized implementation (GPU)
- TFLite C++ implementation
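To give a flavor of the commands those guides generate, a ResNet50 run of the reference implementation via CM might look like the sketch below. The tags and flag values follow the pattern used in the MLPerf inference documentation; backend, device, and scenario are assumptions you would adjust for your setup:

```shell
# Illustrative sketch: run the MLCommons reference ResNet50
# implementation in the Offline scenario through CM.
# --backend, --device, and --scenario are example choices;
# consult the implementation guide for the supported values.
cm run script --tags=run-mlperf,inference,_find-performance \
    --model=resnet50 \
    --implementation=reference \
    --backend=onnxruntime \
    --device=cpu \
    --scenario=Offline \
    --quiet
```

The GUI mentioned above produces commands of this shape for the NVIDIA and TFLite implementations as well, substituting the corresponding `--implementation` and backend values.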
Check the MLCommons Task Force on Automation and Reproducibility and get in touch via its public Discord server.