
Conversation

@jingyanwangms (Contributor)

Description:
symbolic_helper._training_mode no longer exists. According to https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_helper.py#L62, the new API is symbolic_helper.check_training_mode.

Motivation and Context
This currently raises: module 'torch.onnx.symbolic_helper' has no attribute '_training_mode'.
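
For context, the failure looks roughly like this (a minimal repro sketch; it assumes torch >= 1.12, where the private global was removed):

    import torch.onnx.symbolic_helper as symbolic_helper

    # On newer torch the private module-level global is gone:
    symbolic_helper._training_mode
    # AttributeError: module 'torch.onnx.symbolic_helper' has no attribute '_training_mode'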

    )
    inplace = kwargs["inplace"]
-   training_mode = symbolic_helper._training_mode
+   training_mode = symbolic_helper.check_training_mode
@justinchuby (Contributor), Jun 18, 2022:

_training_mode is a private field not exposed to users. Do you need access to the current training mode during export?

(check_training_mode is a function that returns None. It doesn't seem fitting to use here.)
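
To make that objection concrete, here is a small sketch of why the line in this diff does not do what it appears to (the variable name is taken from the PR diff):

    from torch.onnx import symbolic_helper

    # This binds the function object itself, not a training mode:
    training_mode = symbolic_helper.check_training_mode
    print(callable(training_mode))  # True: a function, not a TrainingMode value

    # And, per the comment above, calling check_training_mode returns None,
    # so it cannot supply the current mode either.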

Contributor:

Same question here: does check_training_mode return the training mode that _training_mode originally returned?

Contributor (Author):

> Do you need access to the current training mode during export?

I think so. How do I access the current training mode? I can give that a try.

@ytaous (Contributor), Jun 21, 2022:

+1, this is blocking the ROCm test, which is also using torch 1.12 at the moment. What's the right workaround?


Contributor:

> _training_mode is a private field not exposed to users. Do you need access to the current training mode during export?
>
> (check_training_mode is a function that returns None. It doesn't seem fitting to use here.)

One alternative is to simply convert it to a public property :)

Contributor:

I would like to be cautious about exposing internal state. Once we do that, changing it becomes very hard. I am hoping we remove more globals in the future.

Contributor (Author):

@justinchuby @ytaous FYI, it's torch.onnx._globals.GLOBALS.training_mode, not torch.onnx._globals.GLOBAL.training_mode. I'll update this PR.

Contributor:

Good catch, sorry about that.

Contributor (Author):

No worries. The PR is updated now.

    inplace = kwargs["inplace"]
-   training_mode = symbolic_helper._training_mode
+   # TODO: move to public API once exporter team exposes that
+   training_mode = _globals.GLOBALS.training_mode
Contributor:

Just to confirm: is this compatible with older versions of PyTorch? If not, should we add a torch version check here?

@justinchuby (Contributor), Jun 27, 2022:

It was introduced recently. I would just extract it into a function (something like get_exporter_training_mode) and do a try/except there.
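
A minimal sketch of that suggestion (get_exporter_training_mode is the hypothetical name from the comment above; the fallback branch assumes older torch still exposes the private global):

    import torch
    from torch.onnx import symbolic_helper


    def get_exporter_training_mode():
        """Return the exporter's current training mode across torch versions."""
        try:
            # torch >= 1.12: the mode lives on the internal GLOBALS object
            from torch.onnx import _globals

            return _globals.GLOBALS.training_mode
        except (ImportError, AttributeError):
            # older torch: fall back to the private module-level global
            return symbolic_helper._training_mode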

@baijumeswani force-pushed the jingywa/fix-symbolic-helper-api branch from 95948fb to 9d3ce75 on July 12, 2022 19:48
lgtm-com (bot), Jul 12, 2022:

This pull request introduces 1 alert and fixes 1 when merging 9d3ce75 into 6e05101 - view on LGTM.com

new alerts:

  • 1 for Unused import

fixed alerts:

  • 1 for Unused import

    else:
        from torch.onnx import _globals

        training_mode = _globals.GLOBALS.training_mode
Contributor:

Per discussion, let's check against 1.11 and swap the logic. Thanks!
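
A sketch of what that version check could look like (an assumption based on the discussion, not the exact merged code; it treats torch <= 1.11 as the old-API case):

    import torch
    from packaging import version

    from torch.onnx import symbolic_helper

    # Strip any local build suffix such as "+cu113" before comparing.
    torch_version = version.parse(torch.__version__.split("+")[0])

    if torch_version <= version.parse("1.11"):
        # torch <= 1.11: the private module-level global still exists
        training_mode = symbolic_helper._training_mode
    else:
        # torch >= 1.12: read the mode from the internal GLOBALS object
        from torch.onnx import _globals

        training_mode = _globals.GLOBALS.training_mode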

@baijumeswani force-pushed the jingywa/fix-symbolic-helper-api branch from 9d3ce75 to f707a07 on July 12, 2022 21:06
lgtm-com (bot), Jul 12, 2022:

This pull request introduces 1 alert and fixes 1 when merging f707a07 into a6fd1a3 - view on LGTM.com

new alerts:

  • 1 for Unused import

fixed alerts:

  • 1 for Unused import

@titaiwangms (Contributor):

Following up on this.

@baijumeswani merged commit a9d0d33 into master on Jul 13, 2022.
@baijumeswani deleted the jingywa/fix-symbolic-helper-api branch on July 13, 2022 00:26.
RandySheriffH pushed a commit that referenced this pull request Jul 18, 2022
Co-authored-by: Jingyan Wang, Baiju Meswani
RandySheriffH added a commit that referenced this pull request Jul 19, 2022
* support optimizer opt for deepspeed 0.5.9

* resolve comments

* resolve comments

* FP16_Optimizer Support for more Deepspeed Versions (#12046)

* fp16_optimizer for more ds versions

* change ds version

* bugfix

* fix bug

* Fix unused function warning for decodeMIDR(). (#12069)

Changed from static function defined in header to function declared in header and defined in separate .cc file.

* pin protobuf version to be compatible with onnx (#12132)

Co-authored-by: Ashwini Khade <[email protected]@orttrainingdev10.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>

* RoiAlign CPU EP add warning for max mode with samples != 1 (#12136)

* RoiAlign add warning about incorrect max summation when sample size not 1

* include coreml_provider_factory.h in macos build instead of coreml_ex… (#12138)

include coreml_provider_factory.h in macos build instead of coreml_execution_provider.h

* List 3.10 as supported python version and remove 3.6 (#12141)

list 3.10 as supported python version and remove 3.6

Co-authored-by: Randy Shuai <[email protected]>

* Use updated symbolic_helper.check_training_mode (#11900)

Co-authored-by: Jingyan Wang, Baiju Meswani

* Fix GH issue 12151 by using inverse perms for updating DQ axis attribute (#12158)

* Fix GH issue 12151.

Need to use inverse perms for updating that axis to what is used for transposing the input. This only applies if the DQ node is doing per-axis dequantization.

* fixing positions for beam search gpt2 (#12156)

* fixing positions for beam search gpt2
Co-authored-by: Tianlei Wu <[email protected]>

* remove wrong placed libs (#12201)

* Add file mapping for windows platform. (#12183)

* Add file mapping for windows platform.

* Add unit test for file mapping for windows. Also add an error message for mis-aligned offset

* Add unit test for file mapping for windows. Also add an error message for mis-aligned offset

* Update data type to avoid warnings

* Compatible data type to avoid warnings. Update CreateFileMapping2 condition for winml compiling.

* Add type conversion to avoid warnings for X86 release build.

Co-authored-by: Ting Cao <[email protected]>

* Fix bug where onnxruntime_USE_NCCL flag would default to ON (#12195)

Fix bug where onnxruntime_USE_NCCL flag would default to ON, causing ORT to not build properly. New functionality: flag is ON when training is enabled and NCCL is not disabled. Flag is OFF otherwise

Co-authored-by: zhijxu <[email protected]>
Co-authored-by: zhijxu <zhijxu>
Co-authored-by: Vincent Wang <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Ashwini Khade <[email protected]>
Co-authored-by: Ashwini Khade <[email protected]@orttrainingdev10.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Dwayne Robinson <[email protected]>
Co-authored-by: Carson Swope <[email protected]>
Co-authored-by: Randy Shuai <[email protected]>
Co-authored-by: jingyanwangms <[email protected]>
Co-authored-by: Scott McKay <[email protected]>
Co-authored-by: Viswanath Boga <[email protected]>
Co-authored-by: leqiao-1 <[email protected]>
Co-authored-by: caoting-dotcom <[email protected]>
Co-authored-by: Ting Cao <[email protected]>
Co-authored-by: Sean Murray <[email protected]>