[PoC] Add MetaLearning support through learn2learn #737
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #737      +/-   ##
==========================================
- Coverage   90.21%   89.77%   -0.44%
==========================================
  Files         199      201       +2
  Lines       10688    11011     +323
==========================================
+ Hits         9642     9885     +243
- Misses       1046     1126      +80
==========================================
Flags with carried forward coverage won't be shown.
Continue to review the full report at Codecov.
Awesome 😃
Do we have a task, real example, or competition to test it on?
for more information, see https://pre-commit.ci
Looking good 😃
def __init__(self, task: AdapterTask, backbone: torch.nn.Module, head: torch.nn.Module):
    super().__init__()

    self._task = NoModule(task)
Could we just move the base *_step implementations out to a mixin and inherit the behaviour, rather than holding a reference to the task?
Not sure I understand you. I did it this way so we can call Task.training_step(self.task, batch, batch_idx) in the adapter's training_step.
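A minimal sketch of the task-reference approach described above. The class names (NoModule, Task, Adapter) are simplified stand-ins for illustration, not the actual Flash implementation:

from typing import Any

import torch


class NoModule:
    """Hypothetical stand-in for the NoModule wrapper in this PR: it hides the
    wrapped object from nn.Module registration so the adapter can hold a plain
    reference back to its task."""

    def __init__(self, value: Any) -> None:
        self.value = value


class Task:
    """Simplified stand-in for the base Task that owns the shared step logic."""

    def __init__(self, model: torch.nn.Module) -> None:
        self.model = model

    def training_step(self, batch: Any, batch_idx: int) -> torch.Tensor:
        x, y = batch
        return torch.nn.functional.mse_loss(self.model(x), y)


class Adapter(torch.nn.Module):
    def __init__(self, task: Task, backbone: torch.nn.Module, head: torch.nn.Module) -> None:
        super().__init__()
        self._task = NoModule(task)  # non-registered reference back to the owning task
        self.backbone = backbone
        self.head = head

    def training_step(self, batch: Any, batch_idx: int) -> torch.Tensor:
        # Reuse the base step behaviour by calling the unbound Task method with the
        # referenced task as `self`, instead of inheriting from Task.
        return Task.training_step(self._task.value, batch, batch_idx)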
Yeah, but we could just move the Task.*_step methods out somewhere so that they can be included in the base task but also in adapters. That way the base adapter and the base task would both have the base step methods, and you could just call super().*_step to get access to them.
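A sketch of that suggestion, assuming a hypothetical StepsMixin that both the base task and adapters could inherit from; the names here are illustrative, not the actual API:

from typing import Any

import torch


class StepsMixin:
    """Hypothetical mixin holding the shared *_step implementations."""

    def step(self, batch: Any) -> torch.Tensor:
        x, y = batch
        return torch.nn.functional.mse_loss(self(x), y)

    def training_step(self, batch: Any, batch_idx: int) -> torch.Tensor:
        return self.step(batch)

    def validation_step(self, batch: Any, batch_idx: int) -> torch.Tensor:
        return self.step(batch)


class Adapter(StepsMixin, torch.nn.Module):
    def __init__(self, backbone: torch.nn.Module, head: torch.nn.Module) -> None:
        super().__init__()
        self.backbone = backbone
        self.head = head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

    def training_step(self, batch: Any, batch_idx: int) -> torch.Tensor:
        # The adapter can customise its step and still fall back to the shared
        # behaviour via super(), with no reference back to the task.
        return super().training_step(batch, batch_idx)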
Neat, looking great!
LGTM 😃
What does this PR do?
Here is the PR from the learn2learn side: learnables/learn2learn#257
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃