fixes from_model function and adds tests #921
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
src/lighteval/pipeline.py (Outdated)

```python
return load_model(config=model_config)
if isinstance(model, TransformersModel):
    ...
if isinstance(model, LightevalModel):
```
hm so model > model_config? maybe we should print it somewhere
wdym ?
you first check if there is a model config, then if not you work with the model - so if you provide both a model config and model, the model is completely skipped
mhh no, I first check if model is None: if no model is provided we use the model_config; otherwise, we check if the model is a LightevalModel, else we do from_model.
Maybe I'm missing something in the logic here?
hm sorry, you're right, it's the other way around: if you provide a model, the config is ignored
I think we should keep a system where we either provide the model or the config but never both, because here if you provide model + config for a LightevalModel, the config just gets ignored
or more specifically, put the logic the other way around:

```
if model and config:  # both exist
    if transformers model:
        from pretrained
    else:
        raise exception
elif model:  # no config, therefore
    return model
elif config:  # no model here
    return model from config
```
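The dispatch sketched above can be written out as a small standalone helper. This is a minimal illustration only: `LightevalModel`, `TransformersModel`, `load_model`, and `resolve_model` are hypothetical stand-ins mirroring the pseudocode, not lighteval's actual classes or signatures.

```python
class LightevalModel:
    """Hypothetical stand-in for lighteval's base model class."""
    pass


class TransformersModel(LightevalModel):
    """Hypothetical stand-in for the wrapper around a loaded transformers model."""

    @classmethod
    def from_model(cls, model, config):
        wrapped = cls()
        wrapped.model, wrapped.config = model, config
        return wrapped


def load_model(config):
    """Hypothetical stand-in for building a model from a config alone."""
    model = LightevalModel()
    model.config = config
    return model


def resolve_model(model=None, config=None):
    """Dispatch on the model/config combination, mirroring the review's sketch."""
    if model is not None and config is not None:  # both provided
        if isinstance(model, TransformersModel):
            return TransformersModel.from_model(model, config)
        # providing a config alongside any other model type is an error,
        # rather than silently ignoring the config
        raise ValueError("a config cannot be combined with this model type")
    if model is not None:  # model only: use it as-is
        return model
    if config is not None:  # config only: build the model from it
        return load_model(config)
    raise ValueError("either a model or a config must be provided")
```

The key design point from the review is the first branch: when both arguments are given, the combination is either explicitly supported or rejected with an exception, so nothing is silently dropped.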
Yeah but I log a warning (line below) if the user provided a config with a LightevalModel
that's clearer! Changing :)
cool! I think we should just assume users will ignore warnings, so we should make the code base hard to misuse ^^"
Pull Request Overview

This PR refactors the from_model function in TransformersModel to accept only HuggingFace transformers models and to require a TransformerModelConfig instead of multiple optional parameters. The changes streamline model initialization and improve type safety.

- Simplified the from_model function signature and removed support for LightevalModel instances
- Updated pipeline initialization logic to handle different model/config combinations more clearly
- Added comprehensive test coverage for the new from_model functionality
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
File | Description
---|---
src/lighteval/models/transformers/transformers_model.py | Refactored from_model method to require a config parameter and only accept HuggingFace models
src/lighteval/pipeline.py | Updated model initialization logic to handle the new from_model signature
tests/models/test_transformers_model.py | Added a new test class to verify from_model functionality with comprehensive attribute testing
```python
if is_accelerate_available():
    model_size, _ = calculate_maximum_sizes(self.model)
    model_size = convert_bytes(model_size)
```
Copilot (AI) commented on Aug 18, 2025:
The precision attribute is no longer being set in the from_model method, but it may be used elsewhere in the class. This could cause an AttributeError if the attribute is accessed later.
We can remove self.precision from all models, I think; it's never used (re Copilot's message)
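To illustrate the failure mode Copilot flags, here is a minimal sketch. `ExampleModel` and `describe` are hypothetical names, not lighteval code; the point is that once a constructor stops setting an attribute, any remaining call site that reads it directly raises AttributeError, while `getattr` with a default stays safe during the transition.

```python
class ExampleModel:
    """Hypothetical wrapper whose constructor no longer sets `precision`."""

    def __init__(self, name):
        self.name = name  # note: no self.precision assignment anymore

    def describe(self):
        # Direct access to self.precision would raise AttributeError here;
        # getattr with a default keeps old call sites working until the
        # attribute is removed everywhere.
        precision = getattr(self, "precision", None)
        return f"{self.name} (precision={precision})"
```

Once every reader of the attribute is gone, the `getattr` shim can be deleted along with the attribute itself, which matches the "remove self.precision from all models" suggestion above.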
* fixes from_model function and adds tests
* removes mock accelerator from tests
* fixes tests by adding model_configs where needed
* Apply suggestion from @Copilot
* fixes from review
* fixes from review
* fixes from review
* fixes from review

Co-authored-by: Copilot <[email protected]>
fixes and revamps the from_model function.

The function can now only take in a transformers-loaded model, not a lighteval model. We must also provide a lighteval TransformerModelConfig instead of the multiple optional args.
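The new contract described above could be enforced along these lines. This is a hedged sketch, not lighteval's actual implementation: the class bodies, error messages, and attribute names are illustrative assumptions; only the shape of the signature (an already-loaded transformers model plus a required config) comes from the PR description.

```python
class TransformerModelConfig:
    """Illustrative stand-in for lighteval's config object."""

    def __init__(self, model_name):
        self.model_name = model_name


class LightevalModel:
    """Illustrative stand-in for lighteval's own model wrapper."""
    pass


class TransformersModel(LightevalModel):
    @classmethod
    def from_model(cls, model, config):
        # New contract: `model` must be an already-loaded transformers model
        # (not a lighteval model), and `config` is required instead of the
        # previous set of optional keyword arguments.
        if isinstance(model, LightevalModel):
            raise TypeError("from_model no longer accepts lighteval models")
        if not isinstance(config, TransformerModelConfig):
            raise TypeError("a TransformerModelConfig is required")
        instance = cls()
        instance.model, instance.config = model, config
        return instance
```

Validating both arguments up front, rather than accepting many optional parameters, is what makes the misuse cases discussed in the review (model + config combinations that used to be silently ignored) fail loudly instead.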