input_fn called multiple times in Estimator.train #143
Comments
There seem to be two negative effects of this:
Am I right? @cweill
@shendiaomo: You are correct on both counts. For this reason, we request that the user configure the … Assuming each AdaNet iteration trains over several epochs, the second effect should be less of an issue in practice if your base learners are randomly initialized: they will tend to learn different biases and form a strong ensemble regardless.
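As a sketch of what "training over several epochs per AdaNet iteration" can look like in code, assuming the configuration in question is done through adanet.Estimator's max_iteration_steps argument; the dataset size, batch size, epoch count, head, and subnetwork generator below are placeholder assumptions, not values from this thread:

```python
# Sketch: size max_iteration_steps so each AdaNet iteration spans several epochs.
# All concrete numbers and the subnetwork generator are assumed placeholders.
import math

import adanet
import tensorflow as tf

NUM_TRAIN_EXAMPLES = 60_000   # assumed training-set size
BATCH_SIZE = 64               # assumed batch size used by the train input_fn
EPOCHS_PER_ITERATION = 10     # how many epochs each AdaNet iteration should see

steps_per_epoch = math.ceil(NUM_TRAIN_EXAMPLES / BATCH_SIZE)
max_iteration_steps = EPOCHS_PER_ITERATION * steps_per_epoch

estimator = adanet.Estimator(
    head=tf.estimator.BinaryClassHead(),           # assumed head; choose one matching your task
    subnetwork_generator=my_subnetwork_generator,  # hypothetical generator defined elsewhere
    max_iteration_steps=max_iteration_steps,
)
```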
Great! Thanks for the explanation. However, having to do that math by hand is not very convenient: imagine someone who wants to replace the DNNClassifier in her application with adanet.Estimator; that could be a lot of work. Do you have a plan to improve this? Or will the Keras version avoid the same situation?
From the tutorials:
If I want to train for 100 epochs over one AdaNet iteration, with a sample size of 5265 and a batch size of 50, I have about 105 update steps per epoch. Should my …
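For what it's worth, a quick worked check of those numbers as a sketch; whether an epoch is 105 or 106 steps depends on whether the final partial batch is dropped, and which AdaNet parameter the total should be assigned to is left as written in the (truncated) question:

```python
import math

num_examples = 5265  # sample size from the question
batch_size = 50      # batch size from the question
epochs = 100         # desired epochs for the single AdaNet iteration

steps_per_epoch = math.ceil(num_examples / batch_size)  # 106 (105 if the last partial batch is dropped)
total_steps = epochs * steps_per_epoch                  # 10600 (or 10500 with drop_remainder=True)
print(steps_per_epoch, total_steps)
```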
Pinging
adanet/adanet/core/estimator.py, lines 896 to 900 in 712bc8e
It seems to be problematic because adanet.Estimator.train would load data from scratch at every iteration. As tensorflow/tensorflow#19062 (comment) said, in canned TF estimators train is called only once.
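To make the reported behavior concrete, here is a minimal sketch with toy data and assumed names: a counter inside input_fn shows that the input pipeline is rebuilt for each AdaNet iteration rather than once for the whole training run.

```python
import numpy as np
import tensorflow as tf

call_count = {"n": 0}

def input_fn():
    # Each AdaNet iteration starts a new tf.estimator training loop, which
    # re-invokes this function and rebuilds the tf.data pipeline from scratch.
    call_count["n"] += 1
    print("input_fn called %d time(s)" % call_count["n"])
    features = np.random.rand(1000, 4).astype(np.float32)
    labels = np.random.randint(0, 2, size=(1000, 1))
    dataset = tf.data.Dataset.from_tensor_slices(({"x": features}, labels))
    # Shuffling and repeat() keep a restarted pipeline from replaying the
    # exact same batch order every iteration.
    return dataset.shuffle(1000).repeat().batch(50)

# With an adanet.Estimator configured with max_iteration_steps=N, a call such as
#   estimator.train(input_fn=input_fn, max_steps=3 * N)
# spans three AdaNet iterations, and the print above fires three times.
```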