A simple solution is to expose the average score through the Python API, for cases where the user wants to know its value, without removing the average from the first tree. The EBM-style interpretability values can then be computed internally, on the C++ or Python side, by subtracting the stored average. In other words, we subtract the average from the model only when computing the EBM-style interpretability, leaving the prediction logic and the model file unchanged.
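The subtraction described above can be sketched in plain NumPy. This is a stand-in model, not LightGBM's actual API; `label_average` and the per-tree contribution array are hypothetical names used only to illustrate the idea:

```python
import numpy as np

# Hypothetical stand-in for a trained GBDT: per-tree contributions for one
# sample, where the training-label average (boost_from_average) was folded
# into the first tree's leaf values at training time.
label_average = 0.7                               # the value this issue asks to expose
tree_contributions = np.array([0.9, -0.1, 0.05])  # tree 0 already includes the 0.7 average

# Prediction logic stays unchanged: just sum the trees.
prediction = tree_contributions.sum()

# EBM-style interpretability: subtract the stored average from the first
# tree only when computing attributions, leaving the model file untouched.
ebm_contributions = tree_contributions.copy()
ebm_contributions[0] -= label_average

# The attributions now sum to (prediction - average), i.e. the deviation
# from the baseline, which is what EBM-style plots display.
assert np.isclose(ebm_contributions.sum() + label_average, prediction)
```

The point of the sketch is that both views coexist: prediction keeps using the original leaf values, while the interpretability path subtracts the average on the fly, so nothing in the saved model needs to change.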
References
Closed in favor of #2302. We decided to keep all feature requests in one place.
Contributions implementing this feature are welcome! Please re-open this issue (or post a comment if you are not the topic starter) if you are actively working on it.
Summary
Given that `boost_from_average` is used in training, expose this value on the model as an attribute.

Motivation
TL;DR: it is an important value in many post-processing and post-analysis calculations: feature importance, concept drift, tree plots, etc.
For more information see: #4234 #4235 #3905 #4065.
Description
from #4235 (comment)