Flag to use CPU on TEDPolicy train #10944
Conversation
Commits:
- Bump [google-github-actions/setup-gcloud](https://github.com/google-github-actions/setup-gcloud) from 0.4.0 to 0.5.1 ([release notes](https://github.com/google-github-actions/setup-gcloud/releases), [changelog](https://github.com/google-github-actions/setup-gcloud/blob/master/CHANGELOG.md), commits: google-github-actions/setup-gcloud@e0f83f2...04141d8; direct:production, semver-minor). Signed-off-by: dependabot[bot] <[email protected]>
- Bump [actions/github-script](https://github.com/actions/github-script) from 4.0.2 to 6 ([release notes](https://github.com/actions/github-script/releases), commits: actions/github-script@a3e7071...9ac0880; direct:production, semver-major). Signed-off-by: dependabot[bot] <[email protected]>
- Bump [pytest-timeout](https://github.com/pytest-dev/pytest-timeout) from 1.4.2 to 2.1.0 ([release notes](https://github.com/pytest-dev/pytest-timeout/releases), commits: pytest-dev/pytest-timeout@1.4.2...2.1.0; direct:development, semver-major). Signed-off-by: dependabot[bot] <[email protected]>
- …e-github-actions-setup-gcloud-0.5.1: Bump google-github-actions/setup-gcloud from 0.4.0 to 0.5.1
- ….1.0: Bump pytest-timeout from 1.4.2 to 2.1.0
- …ns-github-script-6: Bump actions/github-script from 4.0.2 to 6
- …tions-actions-github-script-6: Revert "Bump actions/github-script from 4.0.2 to 6"
- …tions-google-github-actions-setup-gcloud-0.5.1: Revert "Bump google-github-actions/setup-gcloud from 0.4.0 to 0.5.1"
I'll leave it to @kedz to provide a more thorough review; I just noted two things to address for now.
@SamuelNoB Could you please sign the contributor license agreement in the meantime?
Thanks for the quick fixes. I have made some more suggestions + a couple of things that are pending:
- We need to make sure that inference also runs on the requested device. To do that, we can wrap the model loading call with the appropriate context manager, as you have done in `train`. The value of `USE_GPU` would have been persisted during training and will be loaded correctly into the `config` variable (line 1067), so you can use `config` to fetch the value of `USE_GPU`. (A sketch of the idea follows this list.)
- Please add a description of `use_gpu` to the table of parameters of TEDPolicy and UnexpecTEDIntentPolicy.
- Please add a changelog entry describing the change made in this PR to the changelog folder. Please refer to the README inside that folder for instructions on the content of the changelog file.
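A minimal sketch of the device-placement idea, assuming a `use_gpu` config key and a hypothetical `model_builder` helper (neither is the repository's verbatim code):

```python
import tensorflow as tf

def _device_for(config: dict) -> str:
    # Default to GPU to preserve current behaviour; fall back to CPU
    # when the user sets `use_gpu: False` in the policy config.
    return "/gpu:0" if config.get("use_gpu", True) else "/cpu:0"

def train(model_builder, config: dict):
    # Everything created inside the context manager (variables, the
    # compiled graph) is placed on the chosen device.
    with tf.device(_device_for(config)):
        model = model_builder()
        model.compile(optimizer="adam")
    return model
```

The same `_device_for` check can wrap the loading call at inference time, since `use_gpu` is read back from the persisted `config`.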
I'm not sure I know how to implement this first request.
The following still needs to be implemented to make sure that we also switch to CPU when loading and finetuning an existing model:
> We need to make sure that inference also runs on the requested device. In order to do that, we can wrap the model loading call with the appropriate context manager as you have done in `train`. The value of `USE_GPU` would have been persisted during training and will be loaded correctly in the `config` variable (line 1067), so you can use `config` to fetch the value of `USE_GPU`.
Some more details on that:
- If we want to use the CPU for finetuning, then we need to wrap the `model = cls._load_tf_model(...)` call that Daksh referenced above in a `with tf.device(...)` as well (because that call invokes the RasaModel load function, which contains the compile that builds the graph for the loaded model and will put it on the GPU if not told otherwise).
- Since that call happens in a class method, we don't have access to `self.config` as before and need to get the information about whether we want to use the GPU from somewhere else. We can get the same information from the `config` object that is created a few lines above the `model = cls._load_tf_model(...)` call. A sketch of this follows below.
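A hedged sketch of that load path, using the `USE_GPU` key and the `cls._load_tf_model(...)` helper named above; the class and signature details here are assumptions, not the repository's exact code:

```python
import tensorflow as tf

USE_GPU = "use_gpu"  # config key discussed in this PR

class PolicySketch:
    @classmethod
    def load(cls, model_path: str, config: dict):
        # In the real `load`, `config` is rebuilt from the persisted
        # training metadata a few lines above this call.
        device = "/gpu:0" if config.get(USE_GPU, True) else "/cpu:0"
        with tf.device(device):
            # RasaModel's load compiles the model, which builds the graph;
            # running it inside the context keeps it on the requested device.
            model = cls._load_tf_model(model_path)
        return model

    @classmethod
    def _load_tf_model(cls, model_path: str):
        raise NotImplementedError  # stand-in for the real helper
```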
Let me know if this helps or if you have more questions
@WashingtonBispo Can you please merge the base branch?
Now it is fixed
Hi @WashingtonBispo @emysdias, I added a couple of final suggestions, and the code quality check is failing above. Can you please address these?
done
Changes are still needed
Oops, I'm going to change it
Running final model regression tests on this PR
Hi @emysdias and @WashingtonBispo, all tests seem to be passing now. Can you please merge the base branch once more, and then I can enable auto-merge 🎉
done
Looks great! Thanks a lot for your contribution ✨
Thank you a lot
Thank you sir.
Proposed changes:
Status (please check what you already did):
- [ ] reformat files using `black` (please check Readme for instructions)