Solved a problem similar to Exception: Reached maximum number of idle transformation calls #130
base: master
Conversation
LemonCANDY42 commented Mar 18, 2023 (edited)
- Following the solution by @jaheba in awslabs/gluonts#2694 (Exception: Reached maximum number of idle transformation calls), an optional parameter max_idle_transforms is added to TimeGradEstimator. I expect it to resolve issues such as #127 and #117, both of which report the same exception (see the usage sketch below).
- Fixed the module 'numpy' has no attribute 'long' error.
- After gluonts v0.10.x the freq parameter was removed (see commit awslabs/gluonts@4126386), so I modified this part to avoid the error TypeError: PyTorchPredictor.__init__() got an unexpected keyword argument 'freq'. This fixes #118 (TypeError: __init__() got an unexpected keyword argument 'freq').
- Given the fixes above, the gluonts version used is 0.12.4.
Regarding the third point, can someone tell me why the freq parameter was removed in gluonts? Any clarification would be greatly appreciated.
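A hypothetical usage sketch of the new optional parameter follows. Every constructor argument other than max_idle_transforms is illustrative and may not match the actual TimeGradEstimator signature in a given pytorch-ts version.

```python
# Hypothetical usage sketch of the optional kwarg added in this PR.
# All arguments other than `max_idle_transforms` are illustrative and may
# differ from the actual TimeGradEstimator signature in your pytorch-ts version.
from pts import Trainer
from pts.model.time_grad import TimeGradEstimator

estimator = TimeGradEstimator(
    target_dim=370,                       # illustrative dataset dimensions
    prediction_length=24,
    context_length=24,
    freq="H",
    trainer=Trainer(epochs=20, batch_size=64),
    max_idle_transforms=500,              # new optional parameter from this PR
)
```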
@LemonCANDY42 Thanks for that. I wish I had seen this PR before; I did two of the three fixes myself (plus another one about the dataset) and was going to open a PR. Could the authors please merge this? @kashif
@stathius OK, let me check... do we need to change the notebook?
@@ -135,7 +135,7 @@ def create_transformation(self) -> Transformation:
            AsNumpyArray(
                field=FieldName.FEAT_STATIC_CAT,
                expected_ndim=1,
-               dtype=np.long,
+               dtype=np.int_,
Suggested change:
-               dtype=np.int_,
+               dtype=int,
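For context on this suggestion: np.long was one of the builtin aliases deprecated in NumPy 1.20 and removed in 1.24, which is what produces the module 'numpy' has no attribute 'long' error. Both np.int_ and the builtin int typically resolve to the same default integer dtype, so either replacement should behave identically here. A quick check:

```python
import numpy as np

# `np.long` was deprecated in NumPy 1.20 and removed in 1.24, which causes
# the "module 'numpy' has no attribute 'long'" error seen in this project.
# `np.int_` and the builtin `int` typically map to the same default integer
# dtype, so either spelling works as a replacement.
print(np.dtype(np.int_) == np.dtype(int))  # expected: True

arr = np.asarray([1, 2, 3], dtype=int)
print(arr.dtype)  # e.g. int64 on 64-bit Linux
```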
> @stathius OK, let me check... do we need to change the notebook?
The notebook seems to run fine.
I am also fixing things up in the …
    ) -> None:
        super().__init__(lead_time=lead_time)
        self.trainer = trainer
        self.dtype = dtype
+       self.max_idle_transforms = kwargs["max_idle_transforms"] if "max_idle_transforms" in kwargs else None
This is by no means wrong, but newer versions of gluonts appear to handle this via the env variable. If so, it might be better to stick with that mechanism for better compatibility. @kashif
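For reference, a rough sketch of what the env-based approach mentioned above could look like. The attribute and environment-variable names below are assumptions taken from the discussion in awslabs/gluonts#2694 and are not verified against a specific gluonts release.

```python
# Rough sketch only: the names below (`GLUONTS_MAX_IDLE_TRANSFORMS`,
# `gluonts.env.env`, `_let`) are assumptions based on awslabs/gluonts#2694,
# not verified against a particular gluonts version.
import os

# Assumed option 1: raise the limit globally before gluonts is imported.
os.environ["GLUONTS_MAX_IDLE_TRANSFORMS"] = "500"

# Assumed option 2: adjust the setting only around training.
from gluonts.env import env

with env._let(max_idle_transforms=500):
    predictor = estimator.train(training_data)  # `estimator`, `training_data` defined elsewhere
```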
OK, yes. If you peek into the 0.7.0 branch, you can also see that I have merged the implementations of DeepAR and DeepVAR, as they only differ on the output side, and the vanilla transformer also works for both the univariate and multivariate cases...
Thanks a lot for pointing me to the 0.7.0 branch; it's really good to know you're actively working on this. I will have a more thorough look. I realize you're now using the pytorch-lightning trainer (I was considering doing that myself).
When I try DeepVAR, I have the same problem: Reached maximum number of idle transformation calls.
I think the best way to resolve the "Reached maximum number of idle transformation calls" issue is to pass a larger value for num_instances in ExpectedNumInstanceSampler. At the moment, in this example's DeepAREstimator definition, it is hard-coded to num_instances = 1.0. It would be better to let users provide an appropriate value for this parameter in, e.g., DeepAREstimator: when num_instances is as small as 1.0 while the time series is very long (say 13K steps), the probability of drawing a training instance at any given time step is only about 1/13K.
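A sketch of that suggestion. ExpectedNumInstanceSampler is gluonts' sampler, but whether the estimator here accepts a train_sampler argument is an assumption; in pytorch-ts such a parameter may first need to be exposed.

```python
# Sketch of the suggestion above. `ExpectedNumInstanceSampler` is gluonts'
# sampler; passing it via a `train_sampler` argument is an assumption and may
# require exposing such a parameter in the pytorch-ts estimators first.
from gluonts.transform import ExpectedNumInstanceSampler

prediction_length = 24  # illustrative

# With num_instances=1.0 and a ~13,000-step series, on average only one
# training window is drawn per pass (roughly a 1/13000 chance per time step).
# A larger value draws proportionally more windows per series.
sampler = ExpectedNumInstanceSampler(
    num_instances=10.0,            # instead of the hard-coded 1.0
    min_future=prediction_length,  # keyword available in recent gluonts versions
)

# estimator = DeepAREstimator(..., train_sampler=sampler)  # hypothetical kwarg
```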