Change NF4Tensor dtype and add support for linear #62
@@ -59,6 +59,37 @@ def _to_copy(func, *args, **kwargs):
 def to_dtype(func, *args, **kwargs):
     return args[0][0].get_original_weight().to(args[0][1])
 
+@implements([torch.ops.aten.t.default])
+# pyre-fixme[3]: Return type must be annotated.
+# pyre-fixme[2]: Parameter must be annotated.
+def t_default(func, *args, **kwargs):
+    a = args[0][0]
+    tensor_meta = SubclassTensorArgs(
+        a.size(),
+        a.stride(),
+        a.storage_offset(),
+        torch.bits2x4,
+        a.device,
+        a.requires_grad)
+    b = NF4Tensor(
+        tensor_meta,
+        a.block_size,
+        a.n_blocks,
+        a.scaler_block_size,
+        a.quantized_scalers,
+        a.quantization_factor,
+        a.scaler_mean,
+        a.quantized_data,
+        a.nf4,
+        not a.transpose)
+    return b
+
+@implements([torch.ops.aten.mm.default])
+# pyre-fixme[3]: Return type must be annotated.
+# pyre-fixme[2]: Parameter must be annotated.
+def mm_default(func, *args, **kwargs):
+    return linear_nf4(args[0][0], args[0][1])
+
 
 @implements(
     [
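For context on why both `aten.t.default` and `aten.mm.default` need handlers: with a tensor-subclass weight, `F.linear` decomposes under `__torch_dispatch__` into a transpose followed by a matmul. The snippet below is a standalone sketch (not part of the PR) that logs the decomposition on a recent PyTorch build; the `LogAtenOps` name is mine.

```python
import torch
import torch.nn.functional as F
from torch.utils._python_dispatch import TorchDispatchMode

# Log every aten op that F.linear produces; on recent PyTorch this typically
# prints aten.t.default followed by aten.mm.default for a 2-D, bias-free call,
# which is exactly the pair of ops the handlers above intercept.
class LogAtenOps(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)
        return func(*args, **(kwargs or {}))

x = torch.randn(2, 8)
w = torch.randn(4, 8)
with LogAtenOps():
    F.linear(x, w)
```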
@@ -139,6 +170,7 @@ def __new__(
         scaler_mean: torch.Tensor,
         quantized_data: torch.Tensor,
         nf4: torch.Tensor,
+        transpose=False,
     ):
         """Create a new NF4Tensor object
         Args:

@@ -160,7 +192,7 @@ def __new__(
             tensor_meta.original_shape,
             tensor_meta.original_strides,
             tensor_meta.storage_offset,
-            dtype=tensor_meta.dtype,
+            dtype=torch.bits2x4,
Review discussion on this line:
- for provenance I still don't like this 🙃 I think that nf4tensor's outer wrapper subclass should have the same dtype as the type that it was created from.
- I agree. We need a better extensibility story for dtypes.
- yeah I think we want to deprecate these, why not use torch.uint2?
- nf4 is a 4-bit type. I suppose another mitigation is a type guard at the torch_dispatch level and using torch.bits8 just so the allocator will always spit out bytes (not that it has a choice).
- torch.bits2x4 means 8 bit though; these dtypes (including bits1x8, bits4x2) should actually be removed, since torch.bits8 means the same thing (they are all uninterpreted dtypes). So what are you trying to express here, 2 bits * 2 packed into 4 bits?
- this sounds like a uint4 tensor with a different packing format, can you reuse the uint4 tensor as the underlying dtype (by inheriting from UInt4Tensor probably)? Can you also write down all the use cases for the nf4 dtype so we get some idea of how we can support it? bits8 is generally not recommended right now either, btw, since all the bit-shifting ops are already available in uint8, so we'd recommend uint8 if you want an 8-bit dtype.
- I agree. Having this represent the high precision dtype has worked well for …
- Yeah, UInt4Tensor is the same as NF4Tensor, AFAIK. I think ed copied the packing format from NFTensor in nuggets and that was the basis of UInt4Tensor. NF4Tensor was copied over to ao and not inherited, for speed of enabling torchtune. But I agree that NF4 should likely inherit from uint4. That being said, this same outer-tensor dtype issue applies to UInt4Tensor just as it does here.
- So Float8Tensor's dtype is bfloat16?
- yes. For float8 tensor specifically, this is required, because we need to trick autograd's …
             device=tensor_meta.device,
             requires_grad=tensor_meta.requires_grad,
         )
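To make the disagreement above concrete: the dtype handed to `_make_wrapper_subclass` is pure metadata on the outer wrapper; the packed payload lives in whatever inner buffer the subclass stores. The toy `PackedTensor` below is hypothetical (not NF4Tensor) and only illustrates that separation.

```python
import torch

# Toy wrapper subclass (hypothetical, not the real NF4Tensor) showing that the
# dtype passed to _make_wrapper_subclass is just metadata advertised by the
# outer tensor; the actual storage is whatever the subclass keeps inside.
class PackedTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, packed: torch.Tensor, orig_shape, advertised_dtype):
        return torch.Tensor._make_wrapper_subclass(
            cls,
            orig_shape,
            dtype=advertised_dtype,   # e.g. torch.bfloat16 or torch.bits2x4
            device=packed.device,
            requires_grad=False,
        )

    def __init__(self, packed, orig_shape, advertised_dtype):
        self.packed = packed          # packed 4-bit codes live in plain uint8 bytes

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        raise NotImplementedError(f"toy example implements no ops ({func})")

codes = torch.randint(0, 256, (8,), dtype=torch.uint8)   # 16 packed 4-bit values
t = PackedTensor(codes, (4, 4), torch.bfloat16)
print(t.shape, t.dtype, t.packed.dtype)                  # outer metadata vs. inner storage
```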
@@ -178,6 +210,7 @@ def __init__(
         scaler_mean: torch.Tensor,
         quantized_data: torch.Tensor,
         nf4: torch.Tensor,
+        transpose=False,
     ):
         """Initialize the NF4Tensor class"""
         self.block_size = block_size
@@ -188,6 +221,7 @@ def __init__(
         self.scaler_mean = scaler_mean
         self.quantized_data = quantized_data
         self.nf4 = nf4
+        self.transpose = transpose
Review discussion on this line:
- If we are doing the lazy transpose you need to update the unflatten and flatten methods.
- Hm, maybe I'll put this into the strides instead and rely on …
 
     @classmethod
     @torch.no_grad()
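The flatten/unflatten concern is about the traceable-subclass protocol: any state that is not an inner tensor has to travel in the metadata returned by `__tensor_flatten__`, or it is lost when the subclass is rebuilt. Below is a self-contained toy with hypothetical names (not the PR's NF4Tensor), assuming the four-argument `__tensor_unflatten__` signature used by recent PyTorch versions.

```python
import torch

# Hypothetical toy subclass illustrating the review comment: the lazy
# `transpose` flag must ride along in the metadata half of __tensor_flatten__,
# otherwise reconstruction silently drops it.
class LazyTransposed(torch.Tensor):
    @staticmethod
    def __new__(cls, inner: torch.Tensor, transpose: bool = False):
        shape = tuple(reversed(inner.shape)) if transpose else tuple(inner.shape)
        return torch.Tensor._make_wrapper_subclass(
            cls, shape, dtype=inner.dtype, device=inner.device
        )

    def __init__(self, inner: torch.Tensor, transpose: bool = False):
        self.inner = inner
        self.transpose = transpose

    def __tensor_flatten__(self):
        # inner tensor attribute names, plus non-tensor metadata (incl. the flag)
        return ["inner"], {"transpose": self.transpose}

    @staticmethod
    def __tensor_unflatten__(inner_tensors, metadata, outer_size, outer_stride):
        # rebuild with the same transpose state it was flattened with
        return LazyTransposed(inner_tensors["inner"], metadata["transpose"])

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        raise NotImplementedError(f"toy example implements no ops ({func})")
```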
@@ -428,7 +462,7 @@ def quantize_tensor_nearest(
     # pyre-fixme[40]: Static method `dequantize` cannot override a non-static method
     # defined in `torch._C.TensorBase`.
     def dequantize(value: torch.Tensor, nf4: torch.Tensor) -> torch.Tensor:
-        """Dequantize a nf4 value to float16 format"""
+        """Dequantize a nf4 value to bfloat16 format"""
Review discussion on this line:
- Is the nf4 tensor still restricted to bf16 only for the higher precision? Are there any blockers to supporting fp32?
- We should be able to support arbitrary precision for conversion, but of course the fidelity of nf4 is independent of the dtype that was passed during construction.
         # return nf4.index_select(0, value)
         return nf4[value]
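Regarding the bf16/fp32 question: the dequantize shown here is a gather into a 16-entry codebook, so the output precision follows the codebook's dtype rather than anything stored in the 4-bit codes. A minimal illustration with made-up codebook values (not the real NF4 table):

```python
import torch

# The output dtype of the lookup is simply the codebook's dtype.
codebook = torch.linspace(-1.0, 1.0, 16)
codes = torch.randint(0, 16, (8,))                # 4-bit indices, stored as int64 here

print(codebook.to(torch.bfloat16)[codes].dtype)   # torch.bfloat16
print(codebook.to(torch.float32)[codes].dtype)    # torch.float32
```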
@@ -546,7 +580,7 @@ class LinearNF4(torch.autograd.Function):
     def forward(ctx, input: torch.Tensor, weight: NF4Tensor):
         """Save the quantized nf4 weight for backward pass"""
         ctx.nf4_weight = weight
-        return F.linear(input, weight.get_original_weight())
Review discussion on this line:
- @drisspg - So we used to dequantize for each linear call? I guess that makes sense since it's essentially weight-only quant.
+        return F.linear(input, weight.to(input.dtype))
 
     @staticmethod
     # pyre-fixme[14]: `backward` overrides method defined in `_SingleLevelFunction`
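The change above keeps the weight-only-quantization pattern the reviewer describes: every forward dequantizes the packed weight, now into the activation's dtype, and then runs an ordinary matmul. Below is a standalone sketch of that flow, with the hypothetical `fake_dequant` standing in for `NF4Tensor.to(...)`.

```python
import torch
import torch.nn.functional as F

# Weight-only quantization in one picture: dequantize the packed weight to the
# activation dtype on every forward, then run a plain GEMM (bf16 CPU matmul
# assumed available on a recent PyTorch build).
def fake_dequant(packed_codes, codebook, shape, dtype):
    return codebook.to(dtype)[packed_codes].reshape(shape)

x = torch.randn(2, 8, dtype=torch.bfloat16)
codebook = torch.linspace(-1.0, 1.0, 16)
codes = torch.randint(0, 16, (4 * 8,))
w = fake_dequant(codes, codebook, (4, 8), x.dtype)   # matches input dtype, as in the PR
out = F.linear(x, w)
print(out.shape, out.dtype)                          # torch.Size([2, 4]) torch.bfloat16
```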
Review comment: Also check the dtype of `inpt_tensor_nf4.to(torch.bfloat16)`?
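One way to act on that suggestion, as a hypothetical test snippet (the helper name is illustrative; `inpt_tensor_nf4` follows the naming in the comment):

```python
import torch

# Assertion sketch: after converting the NF4 tensor back up, dtype and shape
# should match a plain bfloat16 tensor of the original shape.
def check_restored(inpt_tensor_nf4, expected_shape):
    restored = inpt_tensor_nf4.to(torch.bfloat16)
    assert restored.dtype == torch.bfloat16
    assert restored.shape == torch.Size(expected_shape)
```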