Major refactoring #36
Conversation
fxmarty left a comment
Great work! I think it is indeed nicer to dissociate the backends as much as possible and to use `__post_init__` when possible. Still a bit worried about the possible change of defaults (like in transformers: huggingface/transformers#25237 (comment)), but I'm not sure whether it is an issue or not.
I did not review everything but left a few comments!
optimum_benchmark/experiment.py
Outdated
```python
experiment = OmegaConf.to_container(
    experiment, structured_config_mode=SCMode.INSTANTIATE
)
```
`resolve` is `False` by default for `to_container`, while it is `True` for `to_object`. Does this indeed resolve? Why and where?
I am not sure I remember why I put `OmegaConf.create`; why is it not necessary anymore?
Yes, true, `to_object` is safer and more strict; some things can go wrong unnoticed with `to_container`.
I think there's no benefit to using `OmegaConf.create` because with it we just go back to `DictConfig` objects.
Not sure why we had it there; I thought it was part of the `__post_init__` patch.
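For reference, a minimal sketch of the difference between the two calls; the `ExperimentConfig` fields below are invented for illustration and are not the project's actual config:

```python
from dataclasses import dataclass

from omegaconf import OmegaConf


@dataclass
class ExperimentConfig:
    name: str = "bench"
    output_dir: str = "runs/${name}"  # interpolation


cfg = OmegaConf.structured(ExperimentConfig)

# to_container keeps interpolations unresolved by default (resolve=False)
# and returns plain dicts/lists
print(OmegaConf.to_container(cfg))
# {'name': 'bench', 'output_dir': 'runs/${name}'}

# to_object resolves interpolations, errors on missing values and
# instantiates the underlying dataclass (SCMode.INSTANTIATE)
obj = OmegaConf.to_object(cfg)
print(type(obj).__name__, obj.output_dir)
# ExperimentConfig runs/bench
```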
@fxmarty I added a tag
Re: Major refactoring #36
This refactoring's purpose is to regain separation of concerns and remove patches.
This will allow for better unit tests: testing trackers separately from backends, and generators separately from benchmarks.
Another reason for this refactoring is to standardize the interactions between `BackendConfig` <-> `Backend`, `BenchmarkConfig` <-> `Benchmark`, and `Benchmark` <-> `Backend`, as well as the interactions within a backend (optimization, quantization, etc.).

Some key points:
- `config`s are now optional empty dictionaries, since their usage is generally tied to another control argument. With this, `hydra_config.yaml` will not contain things that are not used by the benchmark script: for example, ORT optimization/quantization configs, `torch.compile`'s config, etc. (see the first sketch after this list)
- no more `pytorch` usage outside of `PytorchBackend` and backends that require pytorch (solves "CUDA_VISIBLE_DEVICES is not captured by torch" #27, see the second sketch after this list)
- `pytorch` backend for consistency.
- config processing is done in `__post_init__`; any other processing should not affect the types/values of the config.
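As a rough illustration of the "control argument + optional empty config dict" and `__post_init__` points above, a minimal sketch; the field names and default values are hypothetical, not the actual `BackendConfig` fields:

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class BackendConfig:
    # control argument: the feature is off by default
    quantization: bool = False
    # optional empty dict: only meaningful when quantization is enabled, so
    # hydra_config.yaml is not cluttered with defaults the run never uses
    quantization_config: Dict[str, Any] = field(default_factory=dict)

    def __post_init__(self) -> None:
        # all config processing (validation, completion) happens here;
        # nothing outside __post_init__ should change the config's types/values
        if self.quantization_config and not self.quantization:
            raise ValueError("quantization_config was given but quantization is disabled")
```

And a tiny sketch of why containing `pytorch` imports matters for #27: CUDA only honors `CUDA_VISIBLE_DEVICES` if it is set before torch initializes CUDA, so the launcher/experiment code must not import and initialize torch up front:

```python
import os

# must be set before anything initializes CUDA through torch,
# otherwise the device isolation is silently ignored
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # imported only inside the backend that actually needs it

print(torch.cuda.device_count())  # 1, not the total number of GPUs on the machine
```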
Some benefits of this refactoring:
- `warmup_steps` are now applied through the measurement callback previously implemented for DDP.
- `generate_config` can be used instead of `new_tokens`, and `forward_config` instead of `num_images_per_prompt` or any other argument that can be passed to diffusion pipelines (see the sketch at the end).

The only breaking change is DDP.
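A rough sketch of the kwargs-passthrough idea from the benefits list; the class and function names here are made up for illustration, only `forward_config`/`generate_config` come from the description above:

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class InferenceConfig:
    # forwarded verbatim to the model calls, so any backend- or
    # pipeline-specific argument works without a dedicated config field
    forward_config: Dict[str, Any] = field(default_factory=dict)
    generate_config: Dict[str, Any] = field(default_factory=dict)


def run(model, inputs: Dict[str, Any], config: InferenceConfig) -> None:
    # e.g. forward_config={"num_images_per_prompt": 2} for a diffusion pipeline
    model(**inputs, **config.forward_config)
    # e.g. generate_config={"max_new_tokens": 100} for a transformers model
    model.generate(**inputs, **config.generate_config)
```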