[AutoTVM] Fix hang/crash issues on feature extraction #3689
Conversation
cc @cbalint13 @eqy
Looks good to me.
feature_len = None
for idx in indexes:
    if fea_cache[idx] is not None:
        feature_len = fea_cache[idx].shape[-1]
Is fea_cache[idx].shape[-1] the same for all non-None elements in fea_cache? Is a break missing here?
Good catch. A break is missing. All non-None elements should have the same length; otherwise an error is raised at L324.
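Below is a minimal sketch of the loop with the missing break added, assuming fea_cache maps config indexes to extracted feature arrays (or None for configs whose extraction failed):

```python
# Find the feature length from the first successfully extracted feature.
# All non-None entries are expected to share the same length, so the
# first one found is enough.
feature_len = None
for idx in indexes:
    if fea_cache[idx] is not None:
        feature_len = fea_cache[idx].shape[-1]
        break
```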
* [AutoTVM] Fix hang/crash issues on feature extraction
* Update xgboost_cost_model.py
* fix lint
Currently, if a schedule is invalid and makes TVM crash during tvm.lower (e.g. failed tensorization), it will make the XGBTuner with feature_type="itervar" crash or hang. This PR fixes that and enables the tuners to tolerate these invalid schedules.