
Parity inference task #294

Merged (10 commits, Sep 26, 2024)

Conversation

laserkelvin (Collaborator)

This PR implements a ParityInferenceTask (for lack of a better name), which is used to run a data split through a model to evaluate predicted vs. actual values, beyond just looking at reduced metrics like MSE/MAE. The task should be straightforward to set up and run, and does not need additional logger configuration. After running trainer.predict, an inference_data.json is produced in the experiment folder (trainer.log_dir/<name>/inference_data.json) that can then be reviewed offline.

  • Implements a ParityData structure, which is mostly just a helper class for accumulating data.
  • Implements the ParityInferenceTask, which provides the predict_step and on_predict_epoch_end hooks called in the PyTorch Lightning predict pipeline.
  • Adds a unit test that verifies functionality on three datasets using EGNN. This tests basic functionality end-to-end, up to checking that the output JSON has any contents at all.
  • Updates the documentation to include an "Inference" section, of which the first entry is this approach.
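A minimal sketch of the workflow described above, assuming the class wraps an already-trained task and is run through the standard Lightning predict loop; the import path, constructor signature, and the `pretrained_task`/`datamodule` placeholders are illustrative assumptions, not the verified matsciml API:

```python
# Hypothetical usage sketch based on the PR description; the import path and
# constructor signature below are assumptions, not the verified matsciml API.
import json
from pathlib import Path

import pytorch_lightning as pl

from matsciml.models.base import ParityInferenceTask  # assumed import location

# `pretrained_task` and `datamodule` stand in for whichever trained task and
# data split you want to evaluate; construct them as you normally would.
inference_task = ParityInferenceTask(pretrained_task)  # assumed wrapper constructor

trainer = pl.Trainer(accelerator="auto", devices=1)
trainer.predict(inference_task, datamodule=datamodule)

# Per the PR description, the run writes predicted vs. actual values to
# trainer.log_dir/<name>/inference_data.json; locate and load it for offline review.
results_file = next(Path(trainer.log_dir).rglob("inference_data.json"))
with open(results_file) as handle:
    inference_data = json.load(handle)
```

Because the task only supplies predict_step and on_predict_epoch_end, no extra logger configuration should be needed, and the resulting JSON can be inspected offline, for example to build parity plots or error histograms.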

laserkelvin added the enhancement (New feature or request), ux (User experience, quality of life changes), and inference (Issues related to model inference and testing) labels on Sep 25, 2024
melo-gonzo (Collaborator) left a comment:


Looks good and useful! There are a lot of failing tests and I'm not sure which are relevant. Double-check whether these are applicable, then it's good to merge.

laserkelvin (Collaborator, Author)

I'm pretty certain the failing tests are the usual suspects. I did pick up something that broke everything (fixed in 2faa68d), but otherwise everything seems to be fine. I'm going to merge!

laserkelvin merged commit f09bedc into IntelLabs:main on Sep 26, 2024 (2 of 5 checks passed).
laserkelvin deleted the parity-inference-task branch on September 26, 2024 at 18:09.