Merge pull request #1603 from madeline-underwood/PytorchBenchmark
PytorchBenchmark_Andy to review
jasonrandrews authored Feb 13, 2025
2 parents adbbb2b + 425f7f7 commit 12a3d00
Showing 2 changed files with 29 additions and 22 deletions.
@@ -1,14 +1,14 @@
---
title: Accelerate and measure PyTorch Inference on Arm servers
title: Measure and accelerate PyTorch Inference on Arm servers

minutes_to_complete: 20

who_is_this_for: This is an introductory topic for software developers who want to learn how to measure and accelerate the performance of Natural Language Processing (NLP), vision and recommender PyTorch models on Arm-based servers.

learning_objectives:
- Download and install the PyTorch Benchmarks suite.
- Evaluate the performance of PyTorch model inference running on your Arm based server using the PyTorch Benchmark suite.
- Measure the performance of these models using eager and `torch.compile` modes in PyTorch.
- Evaluate PyTorch model inference performance on an Arm-based server using the PyTorch Benchmark suite.
- Compare the model inference performance using eager mode and `torch.compile` mode in PyTorch.

prerequisites:
- An [Arm-based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider or an on-premise Arm server.
@@ -7,14 +7,16 @@ layout: learningpathall
---

## Before you begin
The instructions in this Learning Path are for any Arm server running Ubuntu 22.04 LTS. For this example, you need an Arm server instance with at least four cores and 8GB of RAM. The instructions have been tested on AWS Graviton3 (c7g.4xlarge) instances.
These instructions apply to any Arm server running Ubuntu 22.04 LTS. For this example, you need an Arm server instance with at least four cores and 8GB of RAM. The instructions have been tested on AWS Graviton3 (c7g.4xlarge) instances.

## Overview
PyTorch is a widely-used Machine Learning framework for Python. In this learning path, you will explore how to measure the inference time of PyTorch models running on your Arm-based server using [PyTorch Benchmarks](https://github.com/pytorch/benchmark). PyTorch Benchmarks is a collection of open-source benchmarks designed to evaluate PyTorch performance. Understanding model inference latency is crucial for optimizing machine learning applications, especially in production environments where performance can significantly impact user experience and resource utilization. You will learn how to install the PyTorch benchmark suite and compare inference performance using PyTorch's two modes of execution - eager and torch.compile modes.
PyTorch is a widely-used Machine Learning framework for Python. In this learning path, you will explore how to measure the inference time of PyTorch models running on your Arm-based server using [PyTorch Benchmarks](https://github.com/pytorch/benchmark), a collection of open-source benchmarks for evaluating PyTorch performance. Understanding inference latency is crucial for optimizing machine learning applications, especially in production environments where performance can significantly impact user experience and resource utilization.

To begin, you need to set up your environment by installing the necessary dependencies and PyTorch. Follow these steps:
You will learn how to install the PyTorch benchmark suite and compare inference performance using PyTorch's two modes of execution: eager mode and `torch.compile` mode.

## Setup Environment
To begin, set up your environment by installing the required dependencies and PyTorch. Follow these steps:

## Set up Environment

First, install python and the required system packages:

@@ -46,7 +48,7 @@ git clone https://github.com/pytorch/benchmark.git
cd benchmark
git checkout 9a5e4137299741e1b6fb7aa7f5a6a853e5dd2295
```
Install the PyTorch models you would like to benchmark. Let's install a variety of NLP, computer vision and recommender models:
Install the PyTorch models you would like to benchmark. Here, let's install a variety of NLP, computer vision, and recommender models:

```bash
python3 install.py alexnet BERT_pytorch dlrm hf_Albert hf_Bart hf_Bert hf_Bert_large hf_BigBird \
@@ -55,26 +57,29 @@ resnet18 resnet50 timm_vision_transformer
```

{{% notice Note %}}
If you are using Python 3.12, the install script may fail with the following error:
If you are using Python 3.12, the install script might fail with the following error:
```output
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'.
Did you mean: 'zipimporter'
```

This may be because the `requirements.txt` installs a version of `numpy` which is not compatible with Python 3.12. To fix the issue, change the pinned `numpy` version in `requirements.txt`.
This issue can occur because `requirements.txt` installs a version of `numpy` that is incompatible with Python 3.12. To resolve it, change the pinned `numpy` version in `requirements.txt`.

```
numpy~=1.26.4
```
{{% /notice %}}

If you don't provide a model list to `install.py`, the script will download all the models included in the benchmark suite.
If you don't specify a model list for `install.py`, the script downloads all the models included in the benchmark suite.

Before running the benchmarks, configure your AWS Graviton3 instance to leverage available optimizations for improved PyTorch inference performance.

This configuration includes settings to:

Before running the benchmarks, configure your running AWS Graviton3 instance to take advantage of the optimizations available to optimize PyTorch inference performance. This includes settings to:
* Enable bfloat16 GEMM kernel support to accelerate fp32 inference.
* Set LRU cache capacity to an optimal value to avoid redundant primitive creation latency overhead.
* Enable Linux Transparent Huge Page (THP) allocations, reducing the latency for tensor memory allocation.
* Set the number of threads to use to match the number of cores on your system
* Enable Linux Transparent Huge Page (THP) allocations to reduce tensor memory allocation latency.
* Set the number of threads to use to match the number of cores on your system.

```bash
export DNNL_DEFAULT_FPMATH_MODE=BF16
@@ -83,21 +88,23 @@ export LRU_CACHE_CAPACITY=1024
export OMP_NUM_THREADS=16
```

With the environment set up and models installed, you can now run the benchmarks to measure your model inference performance.
With the environment set up and models installed, you're ready to run the benchmarks to measure your model inference performance.

Starting from PyTorch 2.0, there are 2 main execution modes - eager mode and `torch.compile` mode. The default mode of execution in PyTorch is eager mode. In this mode the operations are executed immediately as they are defined. With `torch.compile` the PyTorch code is transformed into graphs which can be executed more efficiently. This mode can offer improved model inferencing performance, especially for models with repetitive computations.
Starting with PyTorch 2.0, there are two main execution modes: eager mode and `torch.compile` mode. The default mode of execution in PyTorch is eager mode, where operations are executed immediately as they are defined. In contrast, `torch.compile` transforms PyTorch code into graphs that can be executed more efficiently. This mode can improve model inference performance, particularly for models with repetitive computations.
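To see the difference between the two modes on a single model, outside the benchmark suite, you can time a forward pass directly. The following is a minimal sketch for illustration only: it assumes `torch` and `torchvision` are installed, and the choice of `resnet18`, the input shape, and the warm-up and iteration counts are arbitrary placeholders rather than part of the benchmark suite.

```python
import time

import torch
import torchvision.models as models

# Illustrative sketch only: compare eager mode with torch.compile on one model.
# Model choice, input shape, and iteration counts are arbitrary.
model = models.resnet18(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)

def average_latency_ms(m, x, warmup=3, iters=20):
    # Warm up first (for a compiled model this also triggers compilation),
    # then report the average forward-pass latency in milliseconds.
    with torch.inference_mode():
        for _ in range(warmup):
            m(x)
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - start) / iters * 1e3

eager_ms = average_latency_ms(model, example_input)
compiled_model = torch.compile(model)  # Inductor backend by default
compiled_ms = average_latency_ms(compiled_model, example_input)
print(f"eager: {eager_ms:.2f} ms, torch.compile: {compiled_ms:.2f} ms")
```

The relative numbers depend on the model, the thread count, and the environment variables set above; the benchmark suite automates this comparison across many models.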

Using the scripts included in the PyTorch Benchmark suite, you will now measure the model inference latencies with both eager and torch.compile modes to compare their performance.
Using the scripts included in the PyTorch Benchmark suite, you will now measure the model inference latencies in both eager mode and `torch.compile` mode to compare their performance.

### Measure Eager Mode Performance

Run the following command to collect performance data in eager mode for the suite of models you downloaded:
Run the following command to collect performance data in eager mode for the downloaded models:

```bash
python3 run_benchmark.py cpu --model alexnet,BERT_pytorch,dlrm,hf_Albert,hf_Bart,hf_Bert,hf_Bert_large,hf_BigBird,hf_DistilBert,hf_GPT2,hf_Longformer,hf_Reformer,hf_T5,mobilenet_v2,mobilenet_v3_large,resnet152,resnet18,resnet50,timm_vision_transformer \
--test eval --metrics="latencies"
```
The results for all the models run will be stored in the `.userbenchmark/cpu/` directory. The `cpu` user benchmark creates a folder `cpu-YYmmddHHMMSS` for the test, and aggregates all test results into a JSON file `metrics-YYmmddHHMMSS.json`.`YYmmddHHMMSS` is the time you started the test. The metrics file shows the model inference latency, in milliseconds (msec) for each model you downloaded and ran. The results with eager mode should look like:
The benchmark results for all the models you ran are stored in the `.userbenchmark/cpu/` directory. The `cpu` user benchmark creates a timestamped folder `cpu-YYmmddHHMMSS` for each test, and aggregates all test results into a JSON file `metrics-YYmmddHHMMSS.json`, where `YYmmddHHMMSS` is the time you started the test. The metrics file shows the model inference latency, in milliseconds (msec), for each model you downloaded and ran.

The results with eager mode should appear as follows:

```output
{
@@ -130,7 +137,7 @@ The results for all the models run will be stored in the `.userbenchmark/cpu/` d
```
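If you run the benchmarks more than once, each run produces a new timestamped folder and metrics file. The snippet below is a small illustrative helper, not part of the benchmark suite; the glob pattern is an assumption based on the folder and file naming described above, so adjust it if your metrics files are written to a different location.

```python
import glob
import os

# Illustrative only: find the most recently written metrics JSON under the
# cpu userbenchmark output directory. The pattern assumes the cpu-YYmmddHHMMSS
# folder and metrics-YYmmddHHMMSS.json naming described above.
candidates = glob.glob(".userbenchmark/cpu/**/metrics-*.json", recursive=True)
if candidates:
    latest = max(candidates, key=os.path.getmtime)
    print(latest)
```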
### Measure torch.compile Mode Performance

The `torch.compile` mode in PyTorch uses inductor as its default backend. For execution on the cpu, the inductor backend leverages C++/OpenMP to generate highly optimized kernels for your model. Run the following command to collect performance data in `torch.compile` mode for the suite of models you downloaded.
In PyTorch, `torch.compile` uses Inductor as its default backend. For execution on the CPU, the Inductor backend leverages C++/OpenMP to generate highly optimized kernels for your model. Run the following command to collect performance data in `torch.compile` mode for the downloaded models:

```bash
python3 run_benchmark.py cpu --model alexnet,BERT_pytorch,dlrm,hf_Albert,hf_Bart,hf_Bert,hf_Bert_large,hf_BigBird,hf_DistilBert,hf_GPT2,hf_Longformer,hf_Reformer,hf_T5,mobilenet_v2,mobilenet_v3_large,resnet152,resnet18,resnet50,timm_vision_transformer \
@@ -168,9 +175,9 @@ The results for all the models run will be stored in the `.userbenchmark/cpu/` d
}
}
```
You will notice that most of these models show a performance improvement in model inference latency when run with the `torch.compile` model using the inductor backend.
You will notice that most of these models achieve lower inference latency when run in `torch.compile` mode with the Inductor backend.
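To quantify the improvement, you can compare the eager-mode and `torch.compile`-mode metrics files directly. The helper below is hypothetical and not part of the benchmark suite; it assumes each metrics JSON file contains a top-level `metrics` dictionary keyed by model and metric name, so adjust the key handling to match the files you actually generated.

```python
import json
import sys

# Hypothetical helper: print the per-model speedup of torch.compile mode over
# eager mode. Usage: python3 compare_metrics.py <eager.json> <compile.json>
# Assumes each file has a top-level "metrics" dictionary; adjust as needed.
def load_metrics(path):
    with open(path) as f:
        return json.load(f).get("metrics", {})

eager = load_metrics(sys.argv[1])
compiled = load_metrics(sys.argv[2])

for name in sorted(set(eager) & set(compiled)):
    if compiled[name]:
        print(f"{name}: {eager[name] / compiled[name]:.2f}x speedup with torch.compile")
```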

You have successfully run the PyTorch Benchmark suite on a variety of different models. You can experiment with the 2 different execution modes and different optimization settings, check the performance and choose the right settings for your model and use case.
You have successfully run the PyTorch Benchmark suite on a variety of models. You can experiment with the two execution modes and different optimization settings, compare the performance, and choose the right settings for your model and use case.


