Releases · tensorflow/adanet
AdaNet v0.9.0
- Drop support for TensorFlow 1.*. Only TensorFlow >= 2.1 is supported.
- Drop support for Python 2.*. Only Python >= 3.6 is supported.
- Preserve the outputs in the `PredictionOutput` that are not in the `best_export_outputs`.
- Add `warm_start` support to adanet `Estimators` (see the sketch after this list).
- Add support for predicting/serving on TPU.
- Introduce support for `AutoEnsembleTPUEstimator`.
- Introduce experimental `adanet.experimental` Keras ModelFlow APIs.
- Replace `reports.proto` with simple serialized JSON, so AdaNet no longer has proto dependencies.
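A minimal sketch of warm-starting an adanet `Estimator`, assuming the feature is exposed through the standard `tf.estimator` `warm_start_from` argument (the exact parameter name in v0.9.0 may differ). The generator, input function, paths, and step counts are hypothetical placeholders:

```python
import adanet
import tensorflow.compat.v2 as tf

estimator = adanet.Estimator(
    head=tf.estimator.BinaryClassHead(),
    subnetwork_generator=my_generator,      # hypothetical adanet.subnetwork.Generator
    max_iteration_steps=1000,               # hypothetical value
    model_dir="/tmp/adanet_model",          # hypothetical path
    warm_start_from="/tmp/previous_model",  # assumption: same semantics as tf.estimator's warm_start_from
)
estimator.train(input_fn=train_input_fn, max_steps=5000)  # train_input_fn is hypothetical
```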
AdaNet v0.8.0
- Add support for TensorFlow 2.0.
- Begin developing experimental Keras API for auto-ensembling.
- Support advanced subnetworks and subestimators that need to read and write from disk by giving them a dedicated subdirectory in `model_dir`.
- Fix race condition in parallel evaluation during distributed training.
- Support subnetwork hooks requesting early stopping.
- Add AdaNet replay: the ability to rerun training without having to determine the best candidate for the iteration. A list of best indices from the previous run is provided and honored by AdaNet.
- Introduce `adanet.ensemble.MeanEnsembler` with a basic implementation for taking the mean of logits of subnetworks. This also supports including the mean of `last_layer` (helpful if subnetworks have the same configurations) in the `predictions` and `export_outputs` of the `EstimatorSpec`.
- BREAKING CHANGE: AdaNet now supports arbitrary metrics when choosing the best ensemble. To achieve this, the interface of `adanet.Evaluator` is changing. The `Evaluator.evaluate_adanet_losses(sess, adanet_losses)` function is being replaced with `Evaluator.evaluate(sess, ensemble_metrics)`. The `ensemble_metrics` parameter contains all computed metrics for each candidate ensemble as well as the `adanet_loss`. Code which overrides `evaluate_adanet_losses` must migrate over to use the new `evaluate` method (we suspect that such cases are very rare).
- Allow users to specify a maximum number of AdaNet iterations.
- BREAKING CHANGE: When supplied, run the `adanet.Evaluator` before `Estimator#evaluate`, `Estimator#predict`, and `Estimator#export_saved_model`. This can have the effect of changing the best candidate chosen at the final round. When the user passes an `Evaluator`, we run it to establish the best candidate during evaluate, predict, and export_saved_model; previously these used the `adanet_loss` moving average collected during training. While the previous ensemble would still have been established by the `Evaluator`, candidate ensembles that were not done training were ranked by the `adanet_loss`. Now when a user passes an `Evaluator` that, for example, uses a hold-out set, AdaNet runs it before making predictions or exporting a SavedModel, so the best new candidate is chosen according to the hold-out set (see the sketch after this list).
- Support `tf.keras.metrics.Metrics` during evaluation.
- Allow users to disable summaries to reduce memory and disk footprint.
- Stop individual subnetwork training on `OutOfRangeError` raised during bagging.
- Train forever if `max_steps` and `steps` are both `None`.
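A hedged sketch of the second breaking change above: passing an `adanet.Evaluator` built from a hold-out `input_fn` makes AdaNet pick the best candidate with that evaluator before `evaluate`, `predict`, and `export_saved_model`. The subnetwork generator, step counts, and data below are hypothetical placeholders:

```python
import adanet
import tensorflow.compat.v2 as tf

def holdout_input_fn():
    # Hypothetical hold-out data; any tf.estimator-style input_fn works.
    features = {"x": tf.random.normal([32, 2])}
    labels = tf.random.uniform([32, 1], maxval=2, dtype=tf.int32)
    return features, labels

estimator = adanet.Estimator(
    head=tf.estimator.BinaryClassHead(),
    subnetwork_generator=my_generator,  # hypothetical adanet.subnetwork.Generator
    max_iteration_steps=1000,           # hypothetical value
    # The Evaluator now also runs before evaluate/predict/export_saved_model,
    # so the best candidate is chosen on the hold-out set rather than the
    # adanet_loss moving average collected during training.
    evaluator=adanet.Evaluator(input_fn=holdout_input_fn, steps=10),
)
```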
AdaNet v0.7.0
- Add embeddings support on TPU via `TPUEmbedding`.
- Train the current iteration forever when `max_iteration_steps=None`.
- Introduce `adanet.AutoEnsembleSubestimator` for training subestimators on different training data partitions and implementing ensemble methods like bootstrap aggregating (a.k.a. bagging).
- Fix bug when using Gradient Boosted Decision Tree Estimators with `AutoEnsembleEstimator` during distributed training.
- Allow `AutoEnsembleEstimator`'s `candidate_pool` argument to be a `lambda` in order to create `Estimators` lazily (see the sketch after this list).
- Remove `adanet.subnetwork.Builder#prune_previous_ensemble` from the abstract class. This behavior is now specified using `adanet.ensemble.Strategy` subclasses.
- BREAKING CHANGE: Only support TensorFlow >= 1.14 to better support TensorFlow 2.0. Drop support for versions < 1.14.
- Correct eval metric computations on CPU and GPU.
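A hedged sketch of the lazy `candidate_pool`: passing a callable instead of a dict defers construction of the candidate `Estimators`. The assumption here is that the callable receives the `RunConfig` so it can be forwarded to each candidate; the feature columns, head, and hyperparameters are hypothetical:

```python
import adanet
import tensorflow.compat.v2 as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[2])]
head = tf.estimator.BinaryClassHead()  # on older TF releases the equivalent head lives elsewhere

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    # Candidates are created lazily; the dict keys name the candidates.
    candidate_pool=lambda config: {
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns, config=config),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns,
            hidden_units=[64, 32], config=config),
    },
    max_iteration_steps=500,  # hypothetical value
)
```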
AdaNet v0.6.2
- Fix n+1 global-step increment bug in `adanet.AutoEnsembleEstimator`. This bug incremented the `global_step` by n+1 for n canned `Estimators` like `DNNEstimator`.
AdaNet v0.6.1
- Maintain compatibility with TensorFlow versions >=1.9.
AdaNet v0.6.0
- Officially support AdaNet on TPU using `adanet.TPUEstimator` with `adanet.Estimator` feature parity.
- Support dictionary candidate pools in the `adanet.AutoEnsembleEstimator` constructor to specify human-readable candidate names.
- Improve `AutoEnsembleEstimator`'s ability to handle custom `tf.estimator.Estimator` subclasses.
- Introduce `adanet.ensemble`, which contains interfaces and examples of ways to learn ensembles using AdaNet. Users can now extend AdaNet to use custom ensemble-learning methods.
- Record TensorBoard `scalar`, `image`, `histogram`, and `audio` summaries on TPU during training.
- Add debug mode to help detect NaNs and Infs during training.
- Improve subnetwork `tf.train.SessionRunHook` support to handle more edge cases.
- ~~Maintain compatibility with TensorFlow versions 1.9 thru 1.13.~~ Only works for TensorFlow versions >= 1.13; fixed in AdaNet v0.6.1.
- Improve documentation, including adding 'Getting Started' documentation to adanet.readthedocs.io.
- BREAKING CHANGE: Importing the `adanet.subnetwork` package using `from adanet.core import subnetwork` will no longer work, because the package was moved to the `adanet/subnetwork` directory. Most users should already be using `adanet.subnetwork` or `from adanet import subnetwork`, and should not be affected (see the snippet after this list).
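For the import change above, the before and after look like this:

```python
# No longer works after v0.6.0 (the package moved out of adanet.core):
# from adanet.core import subnetwork

# Works:
from adanet import subnetwork
# or reference it through the top-level package:
import adanet
builder_base = adanet.subnetwork.Builder
```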
AdaNet v0.5.0
- Support training on TPU using `adanet.TPUEstimator`.
- Allow subnetworks to specify `tf.train.SessionRunHook` instances for training with `adanet.subnetwork.TrainOpSpec` (see the sketch after this list).
- Add API documentation generation with Sphinx.
- Fix bug preventing subnetworks with Resource variables from working beyond the first iteration.
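A hedged sketch of attaching hooks via `adanet.subnetwork.TrainOpSpec` from a `Builder`'s train-op method. The surrounding `Builder` class is omitted and the exact argument list of `build_subnetwork_train_op` may vary slightly by version; the optimizer, learning rate, and stopping step are hypothetical:

```python
import adanet
import tensorflow.compat.v1 as tf

def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                              iteration_step, summary, previous_ensemble):
    """Sketch of a Builder method that returns a TrainOpSpec with hooks."""
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss, var_list=var_list)
    # Any tf.train.SessionRunHook can ride along with the subnetwork's training.
    stop_hook = tf.train.StopAtStepHook(last_step=10000)
    return adanet.subnetwork.TrainOpSpec(train_op, hooks=[stop_hook])
```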
AdaNet v0.4.0
- Add a `shared` field to `adanet.Subnetwork` to deprecate, replace, and be more flexible than `persisted_tensors` (see the sketch after this list).
- Officially support multi-head learning with or without dict labels.
- Rebuild the ensemble across iterations in Python without a frozen graph. This allows users to share more than `Tensors` between iterations, including Python primitives, objects, and lambdas, for greater flexibility. Eliminating reliance on a `MetaGraphDef` proto also eliminates I/O, allowing for faster training and better future-proofing.
- Allow users to pass custom eval metrics when constructing an `adanet.Estimator`.
- Add `adanet.AutoEnsembleEstimator` for learning to ensemble `tf.estimator.Estimator` instances.
- Pass labels to `adanet.subnetwork.Builder`'s `build_subnetwork` method.
- The TRAINABLE_VARIABLES collection will only contain variables relevant to the current `adanet.subnetwork.Builder`, so not passing `var_list` to `optimizer.minimize` will lead to the same behavior as passing it in by default.
- Using `tf.summary` inside `adanet.subnetwork.Builder` is now equivalent to using the `adanet.Summary` object.
- Accessing the `global_step` from within an `adanet.subnetwork.Builder` will return the `iteration_step` variable instead, so that the step starts at zero at the beginning of each iteration. One subnetwork incrementing the step will not affect other subnetworks.
- Summaries will automatically scope themselves to the current subnetwork's scope. Similar summaries will now be correctly grouped together across subnetworks in TensorBoard. This eliminates the need for the `tf.name_scope("")` hack.
- Provide an override to force the AdaNet ensemble to grow at the end of each iteration.
- Correctly seed the TensorFlow graph between iterations. This breaks some tests that check the outputs of `adanet.Estimator` models.
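A hedged sketch of returning the new `shared` field from a `Builder`'s `build_subnetwork` method (which now also receives `labels`). The argument order, feature columns attribute, layer sizes, and complexity value are illustrative assumptions, not the exact v0.4.0 signature:

```python
import adanet
import tensorflow.compat.v1 as tf

def build_subnetwork(self, features, labels, logits_dimension, training,
                     iteration_step, summary, previous_ensemble=None):
    """Sketch of a Builder method returning a Subnetwork with `shared`."""
    x = tf.feature_column.input_layer(features, self._feature_columns)  # hypothetical attribute
    last_layer = tf.keras.layers.Dense(64, activation="relu")(x)
    logits = tf.keras.layers.Dense(logits_dimension)(last_layer)
    return adanet.Subnetwork(
        last_layer=last_layer,
        logits=logits,
        complexity=tf.constant(1.0),
        # Unlike persisted_tensors, `shared` can carry plain Python values
        # (primitives, objects, lambdas) to the next iteration.
        shared={"num_layers": 1},
    )
```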
AdaNet v0.3.0
- Add official support for `tf.keras.layers`.
- Fix bug that incorrectly pruned colocation constraints between iterations.
AdaNet v0.2.0
- Estimator no longer creates eval metric ops in train mode.
- Freezer no longer converts Variables to constants, allowing AdaNet to handle Variables larger than 2GB.
- Fixes some errors with Python 3.