
FIX: Broken repr of TorchAoConfig#34560

Merged
LysandreJik merged 1 commit into huggingface:main from BenjaminBossan:fix-torchaoconfig-repr
Nov 5, 2024

Conversation

@BenjaminBossan (Member)

What does this PR do?

The __repr__ method references a non-existent self.kwargs. This is now fixed.

There does not appear to be a uniform way of defining __repr__ for quantization configs. I copied the method as implemented for HQQ:

import json  # used by the f-string below; imported at module level in quantization_config.py

def __repr__(self):
    config_dict = self.to_dict()
    return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"
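To make the to_dict-based pattern concrete, here is a minimal, self-contained sketch. DemoQuantConfig is an illustrative stand-in, not the real TorchAoConfig:

```python
import json


class DemoQuantConfig:
    """Illustrative stand-in for a quantization config class."""

    def __init__(self, quant_type, group_size):
        self.quant_type = quant_type
        self.group_size = group_size

    def to_dict(self):
        # Serialize the public attributes, as the real configs do.
        return {"quant_type": self.quant_type, "group_size": self.group_size}

    def __repr__(self):
        # HQQ-style repr: class name plus pretty-printed JSON of to_dict().
        config_dict = self.to_dict()
        return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"


config = DemoQuantConfig("int4_weight_only", 128)
print(repr(config))
```

Because the repr is built from to_dict() rather than a stored self.kwargs, it works regardless of how the constructor consumed its arguments.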

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Permalink to the HQQ __repr__ implementation:

https://github.com/huggingface/transformers/blob/e2ac16b28a0b8b900e136750309ca40c49d975c5/src/transformers/utils/quantization_config.py#L285-L287
@BenjaminBossan (Member, Author) commented Nov 1, 2024

@SunMarc Could you please review or suggest a reviewer?

The failing test is a timeout error and unrelated to this PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Rocketknight1 (Member) left a comment


I'm not actually a core maintainer, but as this is obviously a bug and the fix is clean and without side effects elsewhere, I'm happy to merge it!

 def __repr__(self):
-    return f"{self.quant_type}({', '.join(str(k) + '=' + str(v) for k, v in self.kwargs.items())})"
+    config_dict = self.to_dict()
+    return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"
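To illustrate why the removed line failed, here is a hedged sketch. BrokenConfig is hypothetical, not the actual class: its constructor consumes keyword arguments without ever assigning a self.kwargs attribute, so the old __repr__ raised AttributeError.

```python
class BrokenConfig:
    """Hypothetical config whose __init__ never stores self.kwargs."""

    def __init__(self, quant_type, **kwargs):
        self.quant_type = quant_type
        # Each kwarg becomes its own attribute; self.kwargs is never set.
        for key, value in kwargs.items():
            setattr(self, key, value)

    def __repr__(self):
        # The pre-fix pattern: references self.kwargs, which does not exist.
        return f"{self.quant_type}({', '.join(str(k) + '=' + str(v) for k, v in self.kwargs.items())})"


cfg = BrokenConfig("int4_weight_only", group_size=128)
try:
    repr(cfg)
    raised = False
except AttributeError:
    raised = True
print(raised)  # True: self.kwargs was never assigned
```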
@Rocketknight1 (Member)

Also, this is the tiniest nit ever, but do reprs normally end in \n?

@BenjaminBossan (Member, Author)

I agree that it looks unnecessary, but I copied this 1:1 from here for consistency:

def __repr__(self):
    config_dict = self.to_dict()
    return f"{self.__class__.__name__} {json.dumps(config_dict, indent=2, sort_keys=True)}\n"
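For comparison (a quick check, not part of the PR): built-in Python reprs do not end in a newline, so the trailing \n here is a consistency choice with the other quantization configs rather than a general convention:

```python
# None of the built-in reprs below carries a trailing newline;
# print() adds its own newline when displaying them.
samples = [{}, [1, 2], "text", 3.14]
trailing = [repr(s).endswith("\n") for s in samples]
print(trailing)  # [False, False, False, False]
```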

@BenjaminBossan (Member, Author)

> I'm not actually a core maintainer, but as this is obviously a bug and the fix is clean and without side effects elsewhere, I'm happy to merge it!

Not sure what the policy is, I think it's a very straightforward change and might not require a core maintainer to approve.

@Rocketknight1 (Member)

@BenjaminBossan, yes I'm happy for you to merge it once you can rerun/rebase/etc. and get the CI green!

@BenjaminBossan (Member, Author)

> yes I'm happy for you to merge it once you can rerun/rebase/etc. and get the CI green!

Not sure if I should merge into transformers :D Let's wait if one of the cores chimes in, it's not critical.

@SunMarc (Member) left a comment


LGTM! Can you have a second look, @MekkCyber, as you did the PR?

@SunMarc SunMarc requested a review from MekkCyber November 4, 2024 17:22
@MekkCyber (Contributor)

Yes LGTM ! Thanks for the fix @BenjaminBossan

@LysandreJik (Member) left a comment


Thanks all for reviews!

@LysandreJik LysandreJik merged commit 5e1fd4e into huggingface:main Nov 5, 2024
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
FIX Broken repr of TorchAoConfig
