TFSimilarity.training_metrics.DistanceGapMetric

Encapsulates metric logic and state.

TFSimilarity.training_metrics.DistanceGapMetric(
    distance, name=None, **kwargs
)

Args

name (Optional) string name of the metric instance.
dtype (Optional) data type of the metric result.
**kwargs Additional layer keyword arguments.

Standalone usage:

m = SomeMetric(...)
for input in ...:
  m.update_state(input)
print('Final result: ', m.result().numpy())
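
A more concrete sketch of standalone usage with this class follows. The import paths and the CosineDistance argument are assumptions drawn from the wider tensorflow_similarity package and the constructor signature above, not from this page.

import tensorflow as tf
from tensorflow_similarity.distances import CosineDistance  # assumed import path
from tensorflow_similarity.training_metrics import DistanceGapMetric  # assumed import path

metric = DistanceGapMetric(distance=CosineDistance())

# A toy mini-batch: integer class labels and one embedding per example.
labels = tf.constant([1, 1, 2, 2])
embeddings = tf.math.l2_normalize(tf.random.normal((4, 16)), axis=1)

metric.update_state(labels, embeddings, None)
print('Final result: ', metric.result().numpy())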

Usage with compile() API:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=[tf.keras.metrics.CategoricalAccuracy()])

data = np.random.random((1000, 32))
labels = np.random.random((1000, 10))

dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)

model.fit(dataset, epochs=10)
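
In a similarity-training setup, this metric would typically be passed to compile() alongside a similarity loss. The sketch below assumes the SimilarityModel, MetricEmbedding, and MultiSimilarityLoss APIs from the broader tensorflow_similarity package; they are not documented on this page.

import tensorflow as tf
import tensorflow_similarity as tfsim  # assumed package alias
from tensorflow_similarity.distances import CosineDistance  # assumed import path
from tensorflow_similarity.training_metrics import DistanceGapMetric  # assumed import path

# Assumed: a small embedding model built from TF Similarity components.
inputs = tf.keras.layers.Input(shape=(32,))
x = tf.keras.layers.Dense(64, activation='relu')(inputs)
outputs = tfsim.layers.MetricEmbedding(16)(x)
model = tfsim.models.SimilarityModel(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss=tfsim.losses.MultiSimilarityLoss(distance=CosineDistance()),
              metrics=[DistanceGapMetric(distance=CosineDistance())])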

To be implemented by subclasses:

  • __init__(): All state variables should be created in this method by calling self.add_weight() like: self.var = self.add_weight(...)
  • update_state(): Contains all updates to the state variables, like: self.var.assign_add(...).
  • result(): Computes and returns a scalar value or a dict of scalar values for the metric from the state variables.

Example subclass implementation:

class BinaryTruePositives(tf.keras.metrics.Metric):

  def __init__(self, name='binary_true_positives', **kwargs):
    super(BinaryTruePositives, self).__init__(name=name, **kwargs)
    self.true_positives = self.add_weight(name='tp', initializer='zeros')

  def update_state(self, y_true, y_pred, sample_weight=None):
    y_true = tf.cast(y_true, tf.bool)
    y_pred = tf.cast(y_pred, tf.bool)

    values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))
    values = tf.cast(values, self.dtype)
    if sample_weight is not None:
      sample_weight = tf.cast(sample_weight, self.dtype)
      sample_weight = tf.broadcast_to(sample_weight, values.shape)
      values = tf.multiply(values, sample_weight)
    self.true_positives.assign_add(tf.reduce_sum(values))

  def result(self):
    return self.true_positives
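
The subclass above can then be exercised the same way as any built-in metric:

m = BinaryTruePositives()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(m.result().numpy())  # 2.0: two examples are true in both y_true and y_pred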

Methods

merge_state

merge_state(
    metrics
)

Merges the state from one or more metrics.

This method can be used by distributed systems to merge the state computed by different metric instances. Typically the state will be stored in the form of the metric's weights. For example, a tf.keras.metrics.Mean metric contains a list of two weight values: a total and a count. If there were two instances of a tf.keras.metrics.Accuracy that each independently aggregated partial state for an overall accuracy calculation, these two metrics' states could be combined as follows:

>>> m1 = tf.keras.metrics.Accuracy()
>>> _ = m1.update_state([[1], [2]], [[0], [2]])
>>> m2 = tf.keras.metrics.Accuracy()
>>> _ = m2.update_state([[3], [4]], [[3], [4]])
>>> m2.merge_state([m1])
>>> m2.result().numpy()
0.75
Args
metrics an iterable of metrics. The metrics must have compatible state.
Raises
ValueError If the provided iterable does not contain metrics matching the metric's required specifications.

reset_state

reset_state()

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.
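
A minimal illustration with a built-in metric; the same call applies to DistanceGapMetric:

m = tf.keras.metrics.Mean()
m.update_state([1.0, 3.0])
print(m.result().numpy())  # 2.0
m.reset_state()
print(m.result().numpy())  # 0.0, state variables are back to their initial values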

result

View source

result()

Computes and returns the scalar metric value tensor or a dict of scalars.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

Returns
A scalar tensor, or a dictionary of scalar tensors.

update_state

View source

update_state(
    labels, embeddings, sample_weight
)

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

  • Operations on the same resource are executed in textual order. This should make it easier to do things like add the updated value of a variable to another, for example.
  • You don't need to worry about collecting the update ops to execute. All update ops added to the graph by this function will be executed.

As a result, code should generally work the same way with graph or eager execution.

Args
labels A mini-batch of labels.
embeddings A mini-batch of embeddings computed for the inputs.
sample_weight Optional per-example weights for the mini-batch.
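
To illustrate the graph-mode note above, updates behave the same whether called eagerly or inside a tf.function. The import paths below are the same assumptions as in the earlier standalone sketch.

import tensorflow as tf
from tensorflow_similarity.distances import CosineDistance  # assumed import path
from tensorflow_similarity.training_metrics import DistanceGapMetric  # assumed import path

metric = DistanceGapMetric(distance=CosineDistance())

@tf.function
def train_step(labels, embeddings):
    # Update ops created here are collected and executed automatically.
    metric.update_state(labels, embeddings, None)
    return metric.result()

print(train_step(tf.constant([1, 1, 2, 2]),
                 tf.random.normal((4, 8))).numpy())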