
Fixing linear_activation_tensor dynamic quant (#622)
Summary: dynamic quant (int8dq) was broken for generate.py because LinearActivationQuantizedTensor had no __repr__ method.

Test Plan: sh benchmarks.sh

20240806170037, tok/s=  9.54, mem/s=  63.14 GB/s, peak_mem= 8.61 GB, model_size= 6.62 GB quant: int8dq, mod: Llama-2-7b-chat-hf, kv_quant: False, compile: True, compile_prefill: False, dtype: torch.bfloat16, device: cuda repro: python generate.py --quantization int8dq --checkpoint_path ../../../checkpoints/meta-llama/Llama-2-7b-chat-hf/model.pth --device cuda --precision torch.bfloat16 --compile --num_samples 5 --max_new_tokens 200 --top_k 200 --temperature 0.8

Reviewers:

Subscribers:

Tasks:

Tags:
HDCharles committed Aug 7, 2024
1 parent febeaac commit 245ab4e
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions torchao/quantization/linear_activation_quantized_tensor.py
@@ -39,6 +39,9 @@ def __init__(
         self.original_weight_tensor = original_weight_tensor
         self.input_quant_func = input_quant_func
 
+    def __repr__(self):
+        return f"LinearActivationQuantizedTensor({self.original_weight_tensor}, {self.input_quant_func})"
+
     def __tensor_flatten__(self):
         return ["original_weight_tensor"], [self.input_quant_func]

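For context, below is a minimal, hypothetical sketch (not the torchao code; WrapperTensor and its behavior are made up for illustration) of why a __torch_dispatch__ wrapper subclass like LinearActivationQuantizedTensor benefits from its own __repr__: the default torch.Tensor repr dispatches tensor ops through __torch_dispatch__, which a wrapper subclass may not handle, so any code path in generate.py that prints or logs the quantized weights can fail.

import torch

class WrapperTensor(torch.Tensor):
    # Hypothetical wrapper subclass, loosely mirroring the pattern used by
    # LinearActivationQuantizedTensor; names here are illustrative only.
    @staticmethod
    def __new__(cls, inner):
        return torch.Tensor._make_wrapper_subclass(cls, inner.shape, dtype=inner.dtype)

    def __init__(self, inner):
        self.inner = inner

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # A real subclass implements the ops it needs; everything else errors.
        raise NotImplementedError(f"{func} is not supported by WrapperTensor")

    def __repr__(self):
        # Without this method, repr() falls back to torch.Tensor.__repr__, which
        # dispatches ops through __torch_dispatch__ and raises for this subclass.
        return f"WrapperTensor({self.inner!r})"

t = WrapperTensor(torch.randn(2, 2))
print(t)  # prints WrapperTensor(tensor([...])) instead of raising

The added __repr__ in the diff above sidesteps that fallback path by formatting the wrapped fields directly, which is presumably why it unblocks generation.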
