FIX: Broken repr of TorchAoConfig (#34560)
Conversation
The __repr__ method references a non-existent self.kwargs. This is now fixed. There does not appear to be a uniform way of defining __repr__ for quantization configs. I copied the method as implemented for HQQ: https://github.com/huggingface/transformers/blob/e2ac16b28a0b8b900e136750309ca40c49d975c5/src/transformers/utils/quantization_config.py#L285-L287
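As a minimal, self-contained sketch of the fixed pattern (the class and its fields here are illustrative stand-ins, not the real TorchAoConfig), the repaired `__repr__` serializes the config via `to_dict()` instead of reading the non-existent `self.kwargs`:

```python
import json

# Illustrative stand-in for a quantization config; field names are assumptions.
class DemoQuantConfig:
    def __init__(self, quant_type="int4_weight_only", group_size=128):
        self.quant_type = quant_type
        self.group_size = group_size

    def to_dict(self):
        # Stand-in for the serialization helper that quantization configs expose.
        return {"quant_type": self.quant_type, "group_size": self.group_size}

    def __repr__(self):
        # Fixed version, mirroring the HQQ-style implementation the PR copies:
        # serialize via to_dict() rather than the non-existent self.kwargs.
        config_dict = self.to_dict()
        return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"

print(repr(DemoQuantConfig()))
```

Calling `repr()` on an instance prints the class name followed by a sorted, indented JSON dump of the config fields, matching the HQQ-style output.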
@SunMarc Could you please review or suggest a reviewer? The failing test is a timeout error and unrelated to this PR.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Rocketknight1 left a comment
I'm not actually a core maintainer, but as this is obviously a bug and the fix is clean and without side effects elsewhere, I'm happy to merge it!
```diff
     def __repr__(self):
-        return f"{self.quant_type}({', '.join(str(k) + '=' + str(v) for k, v in self.kwargs.items())})"
+        config_dict = self.to_dict()
+        return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"
```
Also, this is the tiniest nit ever, but do reprs normally end in \n?
I agree that it looks unnecessary, but I copied this 1:1 from here for consistency:
transformers/src/transformers/utils/quantization_config.py, lines 285 to 287 in e2ac16b
Not sure what the policy is; I think it's a very straightforward change and might not require a core maintainer to approve.
@BenjaminBossan, yes I'm happy for you to merge it once you can rerun/rebase/etc. and get the CI green!
Not sure if I should merge into transformers :D Let's wait to see if one of the core maintainers chimes in; it's not critical.
SunMarc left a comment
LGTM! Can you have a second look, @MekkCyber, as you did the PR?
Yes, LGTM! Thanks for the fix @BenjaminBossan