Check whether values are constant before smooth #1698
Conversation
I’m really hesitant about this code. It’s hacking around a problem
without actually identifying or fixing the root cause. Checking whether
a sequence of floating-point numbers are bitwise equal doesn’t have any
useful meaning; reals are not an equality type. It is brittle: sure, it
solves the problem for exactly-constant time series, but it does not
solve the problem for time series that have even the tiniest
oscillations.
However, due to an unrelated series of unfortunate events, it happens
that the code is approximately equivalent to a less objectionable
version. In particular, our public APIs only emit single-precision
floating point numbers, so the problematic time series with tiny
oscillations cannot actually occur (unless people make their own tensor
protos and summary protobufs manually). The dubious identity-check is
then equivalent to a more reasonable range-check due to the increased
relative precision of JavaScript numbers. So, as much as I dislike this
code, I’m okay with merging it (modulo inline), because plotting
constant summaries really is something that we should support correctly,
and I don’t believe that it’s possible to construct an example where
this code does the wrong thing.
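To make the check under discussion concrete, here is a minimal Python sketch (not the actual TypeScript in this PR, and using a simplified EWMA rather than TensorBoard’s exact debiased formula): skip the smoothing pass entirely when every value in the series is bitwise equal, so an exactly-constant series stays exactly constant.

```python
def smooth(values, weight):
    """Exponentially smooth `values`, leaving exactly-constant input untouched."""
    if all(v == values[0] for v in values):
        # Bitwise-equality check: brittle in general (reals are not an
        # equality type), but sufficient for exactly-constant series.
        return list(values)
    smoothed = []
    last = values[0]
    for v in values:
        # Simplified exponential moving average.
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

print(smooth([2.5] * 4, 0.9))  # constant input passes through unchanged
```

As the review notes, this sidesteps rather than fixes the precision problem: a series with even one perturbed value takes the smoothing path.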
d.smoothed = nextVal;
} else {
  // This arithmetic causes IEEE 754 floating-point precision error and
Please remove this comment. It doesn’t contain any useful information.
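For context, the precision effect the code comment alludes to is easy to illustrate: expressions that are equal over the reals need not be equal in binary floating point. A well-known example:

```python
# Expressions that are mathematically equal can differ in IEEE 754 doubles.
a = 0.1 + 0.2
print(repr(a))   # 0.30000000000000004, not 0.3
print(a == 0.3)  # False

# A smoothing update computes something like `last * w + (1 - w) * v`.
# Even when last == v, the rounded products need not sum back to exactly v,
# so a mathematically constant series can pick up tiny wiggles.
```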
Also update: tensorboard/components/vz_line_chart/vz-line-chart.ts
@wchargin In the case of vz-line-chart: we plan on deprecating vz-line-chart and have no plans to fix that.
Discussed offline. We do indeed plan to remove vz-line-chart (see #1700).
I’m not saying that TensorBoard should attempt to detect and hide small oscillations.

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

from tensorboard.plugins.scalar import metadata

LOGDIR = "/tmp/constantish"
STEPS = 32


def float64pb(name, scalar):
  """Create a scalar summary with a float64-tensor payload."""
  nparray = np.array(scalar, dtype=np.float64)
  tensor = tf.make_tensor_proto(nparray.astype(np.float64))
  summary = tf.Summary()
  summary_metadata = metadata.create_summary_metadata(
      display_name=name, description="")
  tf_summary_metadata = tf.SummaryMetadata.FromString(
      summary_metadata.SerializeToString())
  summary.value.add(tag='%s/scalar_summary' % name,
                    metadata=tf_summary_metadata,
                    tensor=tensor)
  return summary


def main():
  x1 = 1.0
  x2 = 1.0 + 2e-16
  assert x1 != x2, (repr(x1), repr(x2))
  with tf.summary.FileWriter(LOGDIR) as writer:
    for step in range(STEPS):
      x = x1 if step < STEPS / 2 else x2
      summ = float64pb("constantish", x)
      writer.add_summary(summ, global_step=step)
  print("Done.")


if __name__ == "__main__":
  main()
```

This is what I meant about not solving the underlying problem: the exact

As I mentioned previously, the only reason that I’m not concerned about
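The earlier point about single-precision public APIs is worth spelling out: the 2e-16 perturbation in the script above is representable in float64 but far below float32 resolution (machine epsilon near 1.0 is about 1.19e-07), so any summary quantized to single precision collapses it back to an exact constant. A small sketch:

```python
import numpy as np

x1 = 1.0
x2 = 1.0 + 2e-16
assert x1 != x2                          # distinct as float64

# Quantizing to single precision collapses the perturbation.
print(np.float32(x1) == np.float32(x2))  # True: identical as float32
```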
At least IMO, if the data is all constant except one value that is perturbed by 2e-16, as in your example, it should draw as is today. TensorBoard cannot make any assumptions and cannot tell that perturbation apart from tiny Gaussian noise added to a large constant (e.g., input data with very small variance, close to the precision limit of IEEE 754). Our chart should NOT show a straight line in either case; it should show the spiked value with very small extent to convey that there is some discrepancy. Perhaps I am reading between the lines, but I don't think #786 is expecting us to show those as straight lines. Anyway, your concern is noted, and I understand that there is no action item.
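To make the “spiked value with very small extent” point concrete, here is a hypothetical series mirroring the repro script (half 1.0, half 1.0 + 2e-16): the y-extent is nonzero but on the order of 2.2e-16, so an autoscaled chart draws a tiny spike rather than a straight line.

```python
# Hypothetical data, not from the actual chart code: half the points are
# 1.0 and half are 1.0 + 2e-16 (which rounds to the next float64 above 1.0).
values = [1.0] * 16 + [1.0 + 2e-16] * 16

# The extent of the series is nonzero but near the float64 precision limit.
extent = max(values) - min(values)
print(extent)  # roughly 2.2e-16
```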
Absolutely. I agree. It should not show a single straight line; it

But this is not what is happening. We’re rendering the data as is but
Argh, right. I forgot the arithmetic error.
Right. The actual value of
Due to IEEE 754 floating-point precision, multiplying a floating-point
smoothing factor into a value can cause a discrepancy in an otherwise
mathematically constant value.

Please see #786 for examples of weird spikes and a messed-up y-scale.

Partially addresses #786.