Discussed in #779
Originally posted by flippercy October 28, 2022
Hi Team FLAML:
I have a few general questions regarding FLAML:
Can I use a customized learner for the ensemble? For example, for a linear combination of the base learners, I want to make sure all of the coefficients are negative.
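Something like the following rough sketch is what I have in mind. NegativeCoefStacker is a hypothetical combiner I wrote myself, and I am assuming the ensemble argument of AutoML.fit, which accepts a dict with a final_estimator key:

```python
import numpy as np
from flaml import AutoML
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression


class NegativeCoefStacker(BaseEstimator, RegressorMixin):
    """Linear combiner whose coefficients on the inputs are forced to be <= 0."""

    def fit(self, X, y):
        # Fitting y ~ w * (-X) with w >= 0 (positive=True) is equivalent to
        # y ~ (-w) * X, i.e. non-positive coefficients on the original X.
        self._inner = LinearRegression(positive=True).fit(-np.asarray(X), y)
        self.coef_ = -self._inner.coef_  # coefficients on the original inputs
        return self

    def predict(self, X):
        return self._inner.predict(-np.asarray(X))


X, y = make_regression(n_samples=500, n_features=10, random_state=0)  # toy data
automl = AutoML()
automl.fit(
    X,
    y,
    task="regression",
    time_budget=60,
    # Pass the custom combiner as the stacking final estimator.
    ensemble={"final_estimator": NegativeCoefStacker(), "passthrough": False},
)
```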
I noticed that the automl search is much more biased when running distributed on compute clusters, meaning that it focuses much more on the 'strong' learners. For example, in the summary from my recent search, FLAML obviously spent the majority of its time on LearnerE, which is the optimal one. That is understandable; however, the distribution of attempts across the different learners is much more balanced when running FLAML on a single VM. Could you explain this phenomenon, please? We usually use FLAML not just to look for the overall optimal solution, but also for the best result per learner.
Thank you!
If we set cost_attr for BlendSearch to None, then the number of iterations will be used as the cost measurement. That is supposed to allocate trials to the different learners in a more balanced manner in the parallel setting.
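A minimal sketch of that setting with flaml.tune; the search space and the evaluate objective below are made-up placeholders for illustration:

```python
from flaml import BlendSearch, tune


def evaluate(config):
    # Made-up objective: report a loss for the sampled configuration.
    loss = (config["x"] - 2) ** 2 + config["y"]
    return {"loss": loss}


search_alg = BlendSearch(
    metric="loss",
    mode="min",
    cost_attr=None,  # use the iteration count, not wall-clock time, as the cost
)

analysis = tune.run(
    evaluate,
    config={
        "x": tune.uniform(-5, 5),
        "y": tune.uniform(0, 1),
    },
    search_alg=search_alg,
    num_samples=64,
)
print(analysis.best_config)
```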