
Fix Inductor bench BC change #638

Merged: 4 commits from msaroufim/bc into main on Aug 8, 2024
Conversation

msaroufim (Member) commented on Aug 8, 2024

Fixes #637

pytorch-bot (bot) commented on Aug 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/638

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ae3e78d with merge base 34b24f7:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Aug 8, 2024
@msaroufim merged commit 0a3b328 into main on Aug 8, 2024
14 checks passed
@msaroufim deleted the msaroufim/bc branch on Aug 8, 2024 at 16:23
Comment on lines +22 to +30

```diff
 try:
     from torch._inductor.utils import do_bench
-except:
-    from torch._inductor.runtime.runtime_utils import do_bench
+except ImportError:
+    try:
+        from torch._inductor.runtime.runtime_utils import do_bench
+    except ImportError:
+        from torch._inductor.runtime.benchmarking import benchmarker
+        do_bench = benchmarker.benchmark
```
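Whichever import succeeds, the resolved `do_bench` is called the same way at the call sites below. A minimal usage sketch (the benchmark target is hypothetical, a CUDA device is assumed, and exact keyword support varies across the torch versions handled above):

```python
import torch

# Hypothetical workload to time; do_bench expects a no-argument callable.
x = torch.randn(1024, 1024, device="cuda")

# Mirrors the keyword style used at the call sites in this PR.
ms = do_bench(lambda: x @ x, warmup=5, rep=100, return_mode="median")
print(f"median latency: {ms:.3f} ms")
```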
Contributor

Do we need these imports?

Member Author

AO needs to work across multiple PyTorch versions, and this API has been broken multiple times.

We can explore getting rid of the usage completely @HDCharles
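One way to keep this manageable (a sketch of the pattern, not what this PR does) is to resolve the fallback chain once in a small compat helper, so call sites never repeat the try/except:

```python
# Hypothetical compat helper; the import paths mirror the fallback chain above.
def _resolve_do_bench():
    try:
        from torch._inductor.utils import do_bench  # oldest location
        return do_bench
    except ImportError:
        pass
    try:
        from torch._inductor.runtime.runtime_utils import do_bench  # intermediate
        return do_bench
    except ImportError:
        from torch._inductor.runtime.benchmarking import benchmarker  # newest
        return benchmarker.benchmark

do_bench = _resolve_do_bench()
```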

Contributor

Can we inline the imports in the if/else blocks later on, where they get used? (I see some already are.)
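For reference, the inline-at-point-of-use pattern the comment describes would look roughly like this (the function name and parameters are illustrative):

```python
def bench_graph_replay(graph, warmup, rep):
    # Imported where it is used, so only this code path needs the module.
    from torch._inductor.runtime.runtime_utils import do_bench_gpu
    return do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
```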

```python
    from torch._inductor.runtime.runtime_utils import do_bench_gpu
    res = do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
elif TORCH_VERSION_AFTER_2_5 and torch.cuda.is_available():
```
Contributor

Why torch.cuda.is_available()?

Contributor (commenting on the same lines)

Should this be TORCH_VERSION_AFTER_2_4?

Comment on lines +235 to +241

```python
if TORCH_VERSION_AFTER_2_3 and not TORCH_VERSION_AFTER_2_5:
    from torch._inductor.runtime.runtime_utils import do_bench_gpu
    res = do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
elif TORCH_VERSION_AFTER_2_5 and torch.cuda.is_available():
    from torch._inductor.runtime.benchmarking import benchmarker
    do_bench_gpu = benchmarker.benchmark_gpu
    res = do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
```
Contributor

The newest block (i.e., from torch._inductor.runtime.benchmarking import benchmarker) needs to go first, since from my understanding TORCH_VERSION_AFTER_X_Y is always True in fbcode.
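Concretely, the suggested reordering would check the newest API first; a sketch of the idea (the reviewer's suggestion, not the merged code):

```python
if TORCH_VERSION_AFTER_2_5 and torch.cuda.is_available():
    # Newest API first: in fbcode every TORCH_VERSION_AFTER_X_Y flag is True,
    # so this branch must take precedence over the older ones.
    from torch._inductor.runtime.benchmarking import benchmarker
    do_bench_gpu = benchmarker.benchmark_gpu
    res = do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
elif TORCH_VERSION_AFTER_2_3:
    from torch._inductor.runtime.runtime_utils import do_bench_gpu
    res = do_bench_gpu(lambda: graph.replay(), warmup=warmup, rep=rep, return_mode="median")
```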

nmacchioni (Contributor) commented:

@msaroufim left some comments; I'm not sure this resolves the BC changes (sorry if I'm just confused here about what is happening).

facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request on Aug 28, 2024
Summary:
Sync torchao installs across ExecuTorch install_requirements. torchao is used in:
- llama2
- flamingo (via torchtune)
- phi3 lora (via torchtune)

Currently:
- ExecuTorch llama2 installs torchao==0.0.1
- Torchtune installs torchao==0.3.0
- To export flamingo, which depends on recent changes in torch and torchtune, we require a recent torchao version, due to the torch x torchao BC issue fixed by pytorch/ao#638

This PR adds a torchao commit hash, so that we can use a recent and consistent version of torchao in ExecuTorch.

Note that torchao does not have prebuilt macOS nightlies, which is why we use a commit hash instead of a nightly.
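For reference, pinning pip to a specific torchao commit generally looks like this (the SHA below is a placeholder, not the hash used in that PR):

```
pip install git+https://github.com/pytorch/ao.git@<COMMIT_SHA>
```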


Test Plan:
Imported from GitHub, without a `Test Plan:` line.
```
./install_requirements.sh --pybind xnnpack
bash examples/models/flamingo/install_requirements.sh
python examples/models/flamingo/export_preprocess.py
```
Confirm torchao==0.5.0+git0916b5b is installed.
Confirm preprocess.pte is generated.

Reviewed By: larryliu0820

Differential Revision: D61925882

Pulled By: lucylq
Labels: CLA Signed
Projects: none yet

Successfully merging this pull request may close these issues:

inductor benchmark util BC change (#637)

4 participants