LightGBM Model not being saved correctly #2517
Comments
For the parameter logging error, refer to #2208; it is a known issue, but it doesn't affect usage. For the second question, I don't understand it clearly. Did you mean that removing the …
@guolinke Thank you for your response! On passing the exact same parameters to the native LightGBM API and the sklearn API, I am getting different results. However, no matter how many times I run the respective models, I get exactly the same results for each of them individually. Additionally, both APIs initially gave the same results, but after minor changes in parameters and syntax, they no longer match. Find below the code for reference:
@Sanchita-P yeah, these loggings are just for recording the parameters used in experiments; they are not loaded back when the model is loaded. For the parameter-consistency problem, you can try to use a new lgb_train: lgb_train is lazy-inited, and only inited one time, so it will be constructed in the cv part, and some parameters (like …) are fixed at that first construction. As for the randomness, you can set a fixed seed.
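A minimal sketch of what this suggestion amounts to; X, y, params, and num_boost_round here are hypothetical stand-ins, not code from the thread:

```python
import numpy as np
import lightgbm as lgb

# Tiny synthetic stand-ins for the user's data (hypothetical).
X = np.random.rand(500, 10)
y = np.random.randint(0, 2, size=500)
params = {'objective': 'binary', 'seed': 42}  # fixed seed for reproducibility
num_boost_round = 200

# lgb.cv() lazily constructs its Dataset the first time it is used, and a
# Dataset is constructed only once, so reusing it afterwards keeps whatever
# parameters were bound during the cv run.
cv_train = lgb.Dataset(X, label=y)
cv_results = lgb.cv(params, cv_train, num_boost_round=num_boost_round, nfold=5)

# Build a fresh Dataset for the final fit so that the parameters passed to
# lgb.train() are the ones actually in effect at construction time.
lgb_train = lgb.Dataset(X, label=y)
booster = lgb.train(params, lgb_train, num_boost_round=num_boost_round)
```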
@guolinke Using a new lgb_train before lgb.train worked! Thank you so much, I have spent the last 15 hours trying to figure it out and was unable to find the solution. You're a savior :)
I think this could be closed.
I am saving my model by doing this:
best_gbm.save_model('best_gbm_raw_v2.1.txt', num_iteration=num_boost_round)
However, when I go through the txt file, I see that the num_iterations parameter is 100, which is the default. This is incorrect, since I am explicitly passing num_boost_round while saving and it is not 100. Apart from this, all the other parameters are saved correctly. What could be causing this?
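As an aside, the number of trees actually written to the file can be checked independently of the logged parameters; a sketch, assuming the file saved above exists:

```python
import lightgbm as lgb

# Reload the saved model; the parameter block in the .txt file is only a
# log of the training configuration, not what controls loading.
booster = lgb.Booster(model_file='best_gbm_raw_v2.1.txt')

# Number of trees actually saved (for multiclass models this is
# num_iterations * num_class).
print(booster.num_trees())
```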
In case you'd like to see the training line:
best_gbm = lgb.train(params=best, train_set=lgb_train, num_boost_round=num_boost_round)
(best doesn't have num_iterations as a parameter.)

Additionally, another quick question: initially, when I was passing exactly the same parameters to the native LightGBM API and the sklearn API, I was getting exactly the same results. However, after making a few changes in the code, e.g. passing num_boost_round instead of a fixed number of iterations, the results of the two APIs now differ significantly (feature importance and performance metrics). I can't figure out what's going wrong.
Here is the line I am using to train both the models:
PS: No matter the number of times I run the respective models, I get exactly the same results for them individually.
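For illustration, matched calls in the two APIs generally look like the following; this is a hypothetical sketch, not the author's snippet, and X, y, params, num_boost_round, and the choice of LGBMClassifier are all assumptions:

```python
import numpy as np
import lightgbm as lgb

# Hypothetical stand-ins; none of this is the author's actual code.
X = np.random.rand(500, 10)
y = np.random.randint(0, 2, size=500)
params = {'objective': 'binary', 'seed': 42}
num_boost_round = 200

# Native API: the iteration count is the num_boost_round argument of
# lgb.train(), not an entry in params.
native_model = lgb.train(params, lgb.Dataset(X, label=y),
                         num_boost_round=num_boost_round)

# Sklearn API: the equivalent setting is the n_estimators constructor
# argument; the remaining params are passed as keyword arguments.
sk_model = lgb.LGBMClassifier(n_estimators=num_boost_round, **params)
sk_model.fit(X, y)
```

As the replies above note, if the Dataset handed to lgb.train() was already constructed during an earlier cv run, the two paths can silently train under different effective parameters, which would explain diverging feature importances and metrics.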