MetricsCB assumes n_inp=1 #15
I'm playing around with using miniai on different types of tasks: currently a siamese network (same/not same) with two input images, and I'll probably try object detection next, which will require multiple y values.
I'm using TrainCB, which lets you specify n_inp=2, but MetricsCB has no support for n_inp > 1. I'd be happy to do a pull request, but I wanted to bounce a few ideas around first.
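For context, `MetricsCB.after_batch` currently looks roughly like this (paraphrased from the course notebooks, so treat the details as approximate; the import paths are assumptions about the package layout). The `x, y, *_` unpack is where the single-input assumption lives:

```python
from torcheval.metrics import Mean
from miniai.learner import Callback, to_cpu  # assumed module path

class MetricsCB(Callback):
    def __init__(self, **metrics):
        self.metrics = metrics
        self.loss = Mean()
    def after_batch(self, learn):
        # hard-codes batch = (x, y): exactly one input and one target
        x, y, *_ = to_cpu(learn.batch)
        for m in self.metrics.values(): m.update(to_cpu(learn.preds), y)
        # batch size for the weighted loss mean is taken from x
        self.loss.update(to_cpu(learn.loss), weight=len(x))
```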
Option 1) push n_inp into the Learner, and modify MetricsCB to honor it
Option 2) add get_x and get_y methods to Learner; TrainCB could patch those (in before_fit) for n_inp > 1, and MetricsCB would just use *learn.get_y() to evaluate the loss
Option 1 is probably a reasonable quick workaround for now.
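A minimal sketch of option 1, assuming n_inp is stored on the Learner at construction time (the parameter name and placement are illustrative, not an existing API):

```python
class Learner:
    def __init__(self, model, dls, loss_func, lr=0.1, cbs=None, n_inp=1):
        self.model, self.dls, self.loss_func, self.lr, self.cbs = model, dls, loss_func, lr, cbs
        # n_inp lives on the learner so any callback can split the batch
        self.n_inp = n_inp

class MetricsCB(Callback):
    def after_batch(self, learn):
        batch = to_cpu(learn.batch)
        # slice the batch using the learner-level n_inp instead of assuming (x, y)
        xs, ys = batch[:learn.n_inp], batch[learn.n_inp:]
        for m in self.metrics.values(): m.update(to_cpu(learn.preds), *ys)
        # batch size comes from the first input tensor
        self.loss.update(to_cpu(learn.loss), weight=len(xs[0]))
```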
Option 2 is a few more changes to the core Learner, but probably better?
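And a sketch of option 2, with the get_x/get_y names from the proposal (the patching mechanics here are illustrative):

```python
class Learner:
    # defaults preserve the current single-input behaviour
    def get_x(self): return self.batch[:1]
    def get_y(self): return self.batch[1:]

class TrainCB(Callback):
    def __init__(self, n_inp=1): self.n_inp = n_inp
    def before_fit(self, learn):
        # repoint the learner's batch split at n_inp; every callback then agrees on it
        n = self.n_inp
        learn.get_x = lambda: learn.batch[:n]
        learn.get_y = lambda: learn.batch[n:]
    def predict(self, learn): learn.preds = learn.model(*learn.get_x())
    def get_loss(self, learn): learn.loss = learn.loss_func(learn.preds, *learn.get_y())
    # backward/step/zero_grad unchanged

class MetricsCB(Callback):
    def after_batch(self, learn):
        ys = to_cpu(learn.get_y())
        for m in self.metrics.values(): m.update(to_cpu(learn.preds), *ys)
        self.loss.update(to_cpu(learn.loss), weight=len(ys[0]))
```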
Thoughts?
Thanks
John
Comments

Also let me know if there is a better spot to discuss topics like this.

Pinging Jeremy on Discord probably makes sense. I like the idea of putting n_inp on learn next to the batch, since it can be used in other places as well. My mixup implementation currently assumes the batch has only two elements.

I agree. Here's my solution: change it to `*x, y = to_cpu(learn.batch)` and `self.loss.update(to_cpu(learn.loss), weight=len(y))`.
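Dropped into MetricsCB.after_batch, that suggestion would look something like this sketch (it assumes the target is the single last element of the batch):

```python
class MetricsCB(Callback):
    def after_batch(self, learn):
        # everything except the last batch element is treated as an input
        *x, y = to_cpu(learn.batch)
        for m in self.metrics.values(): m.update(to_cpu(learn.preds), y)
        # weight the running loss mean by batch size taken from the target
        self.loss.update(to_cpu(learn.loss), weight=len(y))
```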