
Commit 38c1761: Fix eval import after #275

andrewor14 committed May 29, 2024 (parent cbc74ee)
Showing 2 changed files with 3 additions and 2 deletions.
torchao/_eval.py: 1 addition, 1 deletion

```diff
@@ -10,7 +10,7 @@
 
 import torch
 
-from .utils import _lm_eval_available, _MultiInput
+from .quantization.utils import _lm_eval_available, _MultiInput
 
 if _lm_eval_available:
     try:  # lm_eval version 0.4
```
torchao/quantization/README.md: 2 additions, 1 deletion

````diff
@@ -69,7 +69,8 @@ Note: The quantization error incurred by applying int4 quantization to your mode
 ## A16W4 WeightOnly Quantization with GPTQ
 
 ```python
-from torchao.quantization.GPTQ import Int4WeightOnlyGPTQQuantizer, InputRecorder, TransformerEvalWrapper
+from torchao._eval import InputRecorder, TransformerEvalWrapper
+from torchao.quantization.GPTQ import Int4WeightOnlyGPTQQuantizer
 precision = torch.bfloat16
 device = "cuda"
 checkpoint_file_name = "../gpt-fast/checkpoints/meta-llama/Llama-2-7b-chat-hf/model.pth"
````
