Sync polymer2 to current master #2287

Merged: 97 commits merged into tensorflow:polymer2 from poly2sync on Jun 27, 2019

Conversation

@stephanwlee (Contributor) commented May 29, 2019

Conflicts manually resolved for

stephanwlee and others added 28 commits May 15, 2019 10:56
Repro steps:
1. Select a run that has more than one tag
2. Select a tag with RunMetadata
3. Select a node
4. Select the default tag
5. Click around on nodes and observe JSEs

We have nested dom-ifs to prevent `getTotalMicros` from being called.
However, this only helps while the DOM is not mounted, and that
assumption no longer holds after the first render (if=true).

JSE = JavaScript Exception
Summary:
This test seems to be fine in open-source land, but failed 18/32 times
within Google, because the wire format order for maps is undefined:
<https://developers.google.com/protocol-buffers/docs/proto3#maps>

Test Plan:
Within google3, cherry-pick this patch onto <http://cl/247655537>, then
run `:summary_v2_notf_test` with `--runs_per_test=128` to check for
flakiness.

wchargin-branch: hparams-test-deflake
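As an aside on the map-ordering flakiness described in this commit: a minimal Python sketch (not part of this PR) of the general remedy is to compare parsed messages rather than raw wire bytes. It uses `google.protobuf.struct_pb2.Struct`, whose `fields` member is a proto3 map, purely as a stand-in for the actual summary protos involved here.

```python
from google.protobuf import struct_pb2


def make_struct():
    # Struct.fields is a proto3 map<string, Value>.
    s = struct_pb2.Struct()
    s.fields["alpha"].number_value = 1.0
    s.fields["beta"].number_value = 2.0
    return s


expected = make_struct()
actual_bytes = make_struct().SerializeToString()

# Fragile: proto3 map entries may be serialized in any order, so comparing
# wire bytes directly can fail intermittently even for equal messages:
#     assert actual_bytes == expected.SerializeToString()

# Robust: parse the bytes back into a message and compare messages, which
# treats map fields as unordered.
actual = struct_pb2.Struct.FromString(actual_bytes)
assert actual == expected
print("messages compare equal regardless of map entry order")
```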
Summary:
The smoke test doesn’t actually catch those things that it’s supposed to
catch, because it installs `tf-nightly` (or `tf-nightly-2.0-preview`),
which in turn installs `tb-nightly`. We then install the just-built
version of `tensorboard`, so the resulting virtualenv has both
`tensorboard` and `tb-nightly`: tensorflow#1907 strikes again.

When `--tf-version ""` is passed, this is not an issue, but some smoke
checks only run when that is not passed.

It just so happens that this is currently benign: each of the direct
dependencies of `:build_pip_package` either has no smoke test, contains
an `__init__.py` file that must overwrite the Bazel-generated empty one
(because the smoke test references symbols directly in that file), or is
not yet in `tb-nightly`. But (a) starting with tomorrow’s nightly, the
mesh plugin will meet none of those conditions, so its smoke test will
cease to be effective, and (b) I’d like to add an hparams smoke test,
which already falls into the same boat.

Test Plan:
Add an hparams smoke test, but remove the actual hparams dep:

```diff
diff --git a/tensorboard/pip_package/BUILD b/tensorboard/pip_package/BUILD
index c11e22e2..e63b790a 100644
--- a/tensorboard/pip_package/BUILD
+++ b/tensorboard/pip_package/BUILD
@@ -21,7 +21,6 @@ sh_binary(
         "//tensorboard:lib",  # User-facing overall TensorBoard API
         "//tensorboard:version",  # Version module (read by setup.py)
         "//tensorboard/plugins/beholder",  # User-facing beholder API
-        "//tensorboard/plugins/hparams",  # User-facing hparams API
         "//tensorboard/plugins/mesh:summary",  #  User-facing mesh summary API
         "//tensorboard/plugins/projector",  # User-facing projector API
         "//tensorboard/summary:tf_summary",  #  tf.summary API for TF 2.0
diff --git a/tensorboard/pip_package/build_pip_package.sh b/tensorboard/pip_package/build_pip_package.sh
index de778196..113724ef 100755
--- a/tensorboard/pip_package/build_pip_package.sh
+++ b/tensorboard/pip_package/build_pip_package.sh
@@ -172,6 +172,7 @@ import tensorboard as tb
 tb.summary.scalar_pb('test', 42)
 from tensorboard.plugins.beholder import Beholder, BeholderHook
 from tensorboard.plugins.mesh import summary
+from tensorboard.plugins.hparams.api import hparams_pb, KerasCallback
 "
   fi

```

Then run

    bazel run //tensorboard/pip_package:build_pip_package

and note that the smoke test passes (incorrectly) before this commit and
fails (correctly) after it.

Note also that the `pip freeze` output from the smoke test includes both
`tb-nightly` and `tensorboard` before this change (e.g., on recent
Travis logs), but only `tensorboard` after it.

wchargin-branch: pip-smoke-2dep
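For context on the `tb-nightly`/`tensorboard` conflict mentioned above (tensorflow#1907), here is a hypothetical Python sketch, not the project’s actual check, of how such a double installation can be detected in a virtualenv:

```python
import pkg_resources

# Distribution names that provide the `tensorboard` import package; having
# more than one installed leads to the confusion described in #1907.
CONFLICTING = frozenset(["tensorboard", "tb-nightly", "tensorflow-tensorboard"])

installed = sorted(
    dist.project_name
    for dist in pkg_resources.working_set
    if dist.project_name.lower() in CONFLICTING
)
if len(installed) > 1:
    print("Conflicting TensorBoard packages installed: %s" % ", ".join(installed))
    print("Consider `pip uninstall`-ing all but one of them.")
else:
    print("No conflicting TensorBoard installations found.")
```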
Summary:
Users often report problems that depend on environment-specific
configuration. Rather than asking them to find all this information and
enter it into an issue template manually, we can ask them to run a
script and paste its output verbatim into the issue. Furthermore, we can
detect and suggest fixes to common problems, such as tensorflow#1907 and tensorflow#2010.
The script can grow as we see fit to add new diagnoses and suggestions.

Test Plan:
The script is designed to be robust to errors in each individual
diagnosis: in the worst case, it prints a stack trace and continues to
the next one. I’ve manually tested that the main framework works in
Python 2 and 3 on Linux and in Python 3 on Windows.

Automated testing of this script is possible, but would take a fair
number of CPU cycles to run tests (setting up virtualenvs and Conda,
installing and importing TensorFlow many times). Given that this script
is never a production dependency and is explicitly designed to be run in
a discussion context, light testing seems reasonable.
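The robustness property described above — each diagnosis is independent, and a failure just prints a stack trace and moves on — follows a pattern along these lines (a hypothetical sketch, not the script’s actual code):

```python
import traceback


def check_python_version():
    import sys
    print("Python version: %s" % (sys.version.replace("\n", " "),))


def check_fqdn():
    import socket
    print("FQDN: %s" % (socket.getfqdn(),))


DIAGNOSES = [check_python_version, check_fqdn]


def main():
    for diagnosis in DIAGNOSES:
        print("--- %s" % diagnosis.__name__)
        try:
            diagnosis()
        except Exception:
            # Worst case: print the stack trace and continue to the next check.
            traceback.print_exc()


if __name__ == "__main__":
    main()
```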

To simulate a bad hostname, add

```python
socket.getfqdn = lambda: b"\xc9".decode("utf-8")
```

to the top of the script.

To simulate a bad `.tensorboard-info` directory and test the quoting
behavior, run

```shell
export TMPDIR="$(mktemp -d)/uh oh/" &&
mkdir -p "${TMPDIR}/.tensorboard-info" &&
chmod 000 "${TMPDIR}/.tensorboard-info" &&
python ./tensorboard/tools/diagnose_tensorboard.py
```

To cross-check the auto-identification logic:

```
$ python tensorboard/tools/diagnose_tensorboard.py |
> awk '/version / { print $NF }'
e093841ffaea564cb2410e0b430bd0c552ada208
$ git hash-object tensorboard/tools/diagnose_tensorboard.py
e093841ffaea564cb2410e0b430bd0c552ada208
$ git rev-parse HEAD:tensorboard/tools/diagnose_tensorboard.py
e093841ffaea564cb2410e0b430bd0c552ada208
$ git cat-file blob e093841ffaea564cb2410e0b430bd0c552ada208 |
> diff -u tensorboard/tools/diagnose_tensorboard.py - | wc -l
0
```

wchargin-branch: diagnose-me
Summary:
Fixes tensorflow#2121.

Test Plan:

    $ git grep '__metaclass__' | wc -l
    0

wchargin-branch: six-add-metaclass
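For readers unfamiliar with the change implied by the branch name: in Python 2, a `__metaclass__` class attribute selects the metaclass, but Python 3 silently ignores it, so the portable spelling uses `six.add_metaclass`. A hypothetical before/after sketch (not code from this repository):

```python
import abc

import six

# Python 2 only (silently ignored by Python 3):
#
#     class Loader(object):
#         __metaclass__ = abc.ABCMeta
#
# Portable across Python 2 and 3:
@six.add_metaclass(abc.ABCMeta)
class Loader(object):
    @abc.abstractmethod
    def load(self):
        raise NotImplementedError()


# Instantiating an abstract class now fails under both Python versions.
try:
    Loader()
except TypeError as e:
    print("as expected: %s" % e)
```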
Summary:
Fixes tensorflow#2181. Thankfully, the `rules_webtesting` scene has gotten a bit
saner, requiring fewer explicitly defined transitive dependencies,
though it looks like the ones that we do have need to be manually kept
in sync (e.g., `bazel_skylib`).

Test Plan:
Running `bazel build //...` and `bazel test //...` works in Bazel 0.25.2
in both Python 2 and Python 3.

wchargin-branch: bazel-0.25
Summary:
It’s easier for TensorBoard team members to upload files to the
TensorFlow.org mirror than the Bazel mirror. Many of our files were not
mirrored, and fetches from GitHub can be noticeably slow. (For instance,
downloading the Bazel 0.22.0 binary takes 1.8–2.0 seconds from the TF
mirror and 8.5–12 seconds from GitHub.) As of this commit, all
`mirror.bazel.build` URLs
are replaced with `mirror.tensorflow.org` URLs, and all such URLs
contain valid content.

Along the way, I swapped out the defunct numericjs source to a host that
actually still resolves.

On Travis, getting Bazel and running `bazel fetch //tensorboard/...`
typically takes about 44 seconds. Let’s see if this speeds that up.

Test Plan:
Check that Bazel URLs resolve:

```
$ git grep -Pho '(?<=")https?://mirror\.tensorflow\.org/[^"]*' |
> grep -vF '${BAZEL}' | sort | uniq |
> xargs -n 1 -P 32 -- \
> sh -c 'curl -sfL "$1" >/dev/null && echo OK || echo FAIL' unused |
> sort | uniq -c
    231 OK
```

Check that the Travis URL resolves:

```
$ BAZEL=0.22.0
$ curl -sfL "http://mirror.tensorflow.org/github.com/bazelbuild/bazel/releases/download/${BAZEL}/bazel-${BAZEL}-linux-x86_64" |
> shasum -a 256
8474ed28ed4998e2f5671ddf3a9a80ae9e484a5de3b8b70c8b654c017c65d363
```

Check that we caught ’em all:

```
$ git grep mirror.bazel | wc -l
0
```

wchargin-branch: mirror-tensorflow-org
Summary:
This commit adds issue templates for bug reports, feature requests, and
installation problems, with specific instructions for each.

This also resolves the nag from GitHub to move to the new issue template
workflow.

I created these by following the “issue template wizard” described here:
<https://help.github.com/en/articles/creating-issue-templates-for-your-repository>

Test Plan:
I’ve committed this to the default branch of my fork. Try drafting an
issue there: <https://github.com/wchargin/tensorboard/issues/>.

wchargin-branch: github-issue-templates
In PR tensorflow#2168, it was recommended to call the mesh_visualizer plugin the
mesh plugin. As a result, the Python code lives in tensorboard/plugins/mesh
while the frontend lives in tensorboard/plugins/mesh_visualizer. This
change moves the frontend code into tensorboard/plugins/mesh.
While debugging the standalone projector app, we discovered a bug caused
by the divergence between the umap-js and the `umap.d.ts` typings that
we maintain in this repo. This bug caused an error when initializing
very small datasets. This PR fixes the incorrect usage of
`umap.initializeFit` and updates the types to match the library.
Summary:
In tensorflow#2188, I removed the `group_name` parameter from the Keras callback,
but forgot to remove it from the call site in the demo code.

Test Plan:
Running `bazel run //tensorboard/plugins/hparams:hparams_demo` now
works. Previously, it failed with the error, “unexpected keyword
argument 'group_name'”.

wchargin-branch: hparams-demo-no-group-name
Summary:
A bunch of changes are required here due to `--incompatible_depset_union`
and the consequent transitive upgrades.

The new depset union makes inefficient use more apparent. I opted to
change our own uses as locally and minimally as possible rather than
fixing them to be more efficient, for the sake of keeping things simple.
(Or, rather, I first tried to do things the efficient way by
accumulating lists of transitive depsets, but due to lack of types the
refactoring was too hard to keep track of to be worth it. The simple
option just requires application of one of two Vim macros at each call
site, with just a bit of thought to guess which one is right.)

The change to `Vulcanize.java` was already present within google3.

Test Plan:
Both `bazel build //tensorboard` and `bazel test //...` work with Bazel
version 0.26.0rc12.

wchargin-branch: bazel-0.26.0rc12
…orflow#2268)

Cloud AI Platform models can have custom prediction code that takes in examples in any format the modeler wants. But WIT requires examples as lists, dicts, or example protos by design. So add the ability to transform examples to the custom format for cloud users that need this capability.
Summary:
Caught during a Clutz upgrade. From context, it looks like `ints` should
be an `Int64List`, as is already the case in `parseFeature` below.

Test Plan:
This preserves existing behavior. To exercise the codepath, run

```
bazel run //tensorboard/plugins/interactive_inference/tf_interactive_inference_dashboard/demo:agedemoserver
```

then navigate to `/tf-interactive-inference-dashboard/age_demo.html` in
a browser. Select a data point in the plot. In the list of features for
that data point (on the left side of the screen), click the “+” button
immediately below the bottom feature. Add an `Int` feature, and set its
value to some number. Select a different data point, then select the
first data point again, and observe that the new feature and value are
still present.

wchargin-branch: vzev-ints-list-type
Per user requests, adding PR curves and F1 score to the performance tab of WIT for binary classification models.
Summary:
Changes in tensorflow#2267 displeased the Google linter: `list(my_depset)` is
banned because it’s not obvious that this forces iteration, so we switch
to the far clearer `.to_list()` everywhere.

The venerable linter further demands that docstrings be added, even for
drive-by fixes.

Test Plan:
A test import succeeds; all Google-internal tests pass.

wchargin-branch: import-lint
Enabling WIT to display feature-level attributions for individual predictions if the model provides them.
The change includes:
* Proto changes: crsDurationUs to AllReduceComputeDurationUs and AllReduceSyncDurationUs; Src(Dst)CoreId to Src(Dst)CoreIds; remove replica id.
* Removing the channel id selector slider from the topology graph
* Avoid recomputing Topodata when selectedMetricIdx changes.
* Fix stack bar chart for backward compatibility
* d3 update for stack_bar_chart
* Update d3 changes for topology graph
* Improve capture profile dialog

* Make tracing attempts also a GPU option

* Address review comments

* Minor fix
@review-notebook-app

Check out this pull request on ReviewNB: https://app.reviewnb.com/tensorflow/tensorboard/pull/2287

Visit www.reviewnb.com to learn how we simplify your Jupyter Notebook workflows.

@googlebot

So there's good news and bad news.

👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) has done so. Everything is all good there.

😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are OK with their commits being contributed to this project. Please have them confirm that here in the pull request.

Note to project maintainer: This is a terminal state, meaning the cla/google commit status will not change from this state. It's up to you to confirm consent of all the commit author(s), set the cla label to yes (if enabled on your project), and then merge this pull request when appropriate.

ℹ️ Googlers: Go here for more info.

brunapinos and others added 9 commits June 21, 2019 12:41
Running TensorBoard in a Jupyter notebook inside a Docker container
requires explicitly exposing additional ports. This commit adds a
paragraph to the tensorboard-in-notebooks file explaining this usage in
the **Setup** section:

![Screenshot of new “Setup” section, as rendered in Colab][1]

[1]: https://user-images.githubusercontent.com/4317806/59947291-b4d1f100-9421-11e9-852e-0bc322fb69c0.png
Summary:
We plan to promote TensorBoard’s Pip packages to Bazel targets, but the
filenames of these targets can’t actually be statically determined: they
must include the TensorBoard version number, which is defined in a
Python module. Instead, we’ll build a tarball containing the two wheels.
Building tarballs deterministically with system tools is famously
troublesome (you can do it with GNU `tar` and a lot of flags, but not
portably), and it seems that the standard approach is to use Python’s
built-in `tarfile` library.

Implementation based off of <servo/servo#12244>,
but with changes to make it Python 3-compatible, to use our code style,
and to remove some unneeded machinery. Original is Apache-2 licensed.
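As a rough illustration of the approach described above — not this PR’s actual implementation — the following sketch builds a `.tar.gz` whose bytes do not depend on file timestamps, ownership, or directory listing order:

```python
import gzip
import io
import os
import tarfile


def deterministic_tgz(output_path, input_paths):
    """Write a .tar.gz of `input_paths` (assumed to be plain files) whose
    bytes depend only on the file names and contents."""
    tar_buffer = io.BytesIO()
    with tarfile.open(fileobj=tar_buffer, mode="w") as tar:
        for path in sorted(input_paths):  # fixed member order
            info = tar.gettarinfo(path, arcname=os.path.basename(path))
            info.mtime = 0  # fixed timestamp
            info.uid = info.gid = 0
            info.uname = info.gname = ""
            with open(path, "rb") as f:
                tar.addfile(info, f)
    with open(output_path, "wb") as out:
        # mtime=0 pins the timestamp embedded in the gzip header.
        with gzip.GzipFile(fileobj=out, mode="wb", mtime=0) as gz:
            gz.write(tar_buffer.getvalue())


# Example (illustrative file names):
#     deterministic_tgz("pip_packages.tar.gz", ["a.whl", "b.whl"])
```

Running such a function twice over the same inputs should produce byte-identical archives, which is the invariant the `test_invariant_under_mtime` test case mentioned below exercises.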

Test Plan:
End-to-end tests included. Replacing the body of `_run_tool` with

```python
    return subprocess.check_output(["tar", "czf"] + args)
```

causes the `test_invariant_under_mtime` test case to fail for the right
reason.

wchargin-branch: deterministic-tgz
There are many variants of the tf-graph-dashboard. Recently, to support
another type of compatibility check, we added a required
"compat-node-title" field and in the process broke other apps. This
change makes the other apps behave exactly as before while keeping the
customizability.

Upgrade inference_utils to be tf 2.0 compatible
Summary:
Running `bazel build //tensorboard/pip_package` now actually builds the
Pip package. Other targets can depend on the built wheels, which are
provided in a tarball. Smoke tests have been separated to a new
`:test_pip_package` target.

Test Plan:
Check that everything builds:

```
$ bazel build //tensorboard/pip_package 2>/dev/null
$ echo $?
0
$ tar tzf bazel-bin/tensorboard/pip_package/pip_packages.tar.gz
tensorboard-1.15.0a0-py2-none-any.whl
tensorboard-1.15.0a0-py3-none-any.whl
$ bazel run //tensorboard/pip_package:test_pip_package >/dev/null 2>&1
$ echo $?
0
$ tmpdir="$(mktemp -d)"
$ bazel run //tensorboard/pip_package:build_pip_package -- "${tmpdir}" \
> >/dev/null 2>/dev/null
$ ls -1 "${tmpdir}"
tensorboard-1.15.0a0-py2-none-any.whl
tensorboard-1.15.0a0-py3-none-any.whl
```

Then:

  - Add an `assert False` to the `pb` function in `scalar/summary.py`,
    and verify that `:test_pip_package` fails with default options but
    passes with `--tf-version ""`.
  - Add an `assert False` to `tensorboard/main.py` and verify that the
    test fails even with `--tf-version ""`.
  - Add `def foo(): yield from None` to `tensorboard/main.py`, and
    verify that, when run in a Python 3 virtualenv, the test fails by
    default but passes with `--default-python-only`.

wchargin-branch: bazel-pip-package
Summary:
Per @nfelt’s suggestion. Also unbreaks the build.

Generated with:

    git grep -Ilz '  *$' | xargs -0 sed -i 's/  *$//'

Test Plan:
Running `./tensorboard/tools/whitespace_hygiene_test.py` now passes.

wchargin-branch: cleanup-whitespace
Summary:
This plugin can eventually replace the `tensorboard-plugin-example`
repository; it’ll be easier for us to keep it up to date if it’s in the
same repository as main TensorBoard. And it has tests!

Test Plan:
Run `bazel test //tensorboard/plugins/example:smoke_test` to start.
Then, modify `setup.py` to remove the `entry_points` entry, and note
that the test fails. Separately, modify line 42 of `__init__.py` to
serve the JS under `/not_index.js` instead; the test should again fail.
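For orientation, the `entry_points` entry mentioned above is the standard setuptools mechanism for plugin discovery; a dynamic plugin’s `setup.py` generally looks something like the sketch below. The package, module, and class names are made up, and the `tensorboard_plugins` entry-point group name is my assumption rather than something stated in this PR.

```python
import setuptools

setuptools.setup(
    name="tensorboard-plugin-example",
    version="0.1.0",
    description="Minimal example of a dynamically loaded TensorBoard plugin",
    packages=["tensorboard_plugin_example"],
    entry_points={
        # Assumed entry-point group scanned by TensorBoard's dynamic loader;
        # the right-hand side points at the plugin class to instantiate.
        "tensorboard_plugins": [
            "example = tensorboard_plugin_example.plugin:ExamplePlugin",
        ],
    },
)
```

Removing the `entry_points` entry, as in the test plan above, means the loader never discovers the plugin, which is exactly what the smoke test is checking.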

Test the development workflow: Follow the instructions in `README.md`.
Then, edit `widgets.js` (say, add another exclamation point) and refresh
to see changes live without restarting TensorBoard:

![screenshot of “Hello TensorBoard!” in example plugin][1]

[1]: https://user-images.githubusercontent.com/4317806/59641012-cb700380-9114-11e9-8323-c811388df31c.png

wchargin-branch: example-dynamic-plugin

@@ -22,7 +22,14 @@
<link rel="apple-touch-icon" href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMQAAADECAMAAAD3eH5ZAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAD/UExURfFlKfaELvFmKfNyK/67NvWALf68Nv69NvNxK/20NfyyNP22NfN0K/JrKvqhMv2zNf25Nf24Nf23NfeOL/yzNPyvNPJoKviWMPmeMfN1K/WBLfePL/FnKfeML/qlMvR7LPmcMfeLL/aJLvR5LPFoKfJuKvR3LP66NvywNPeNL/V/LfaILv21Nf26NfNzK/NvK/R6LPmaMfyxNPqfMvV+LfurM/iSMPmbMfJvKvmdMfumM/qiMvmZMfytNPJqKvysNPN2K/iYMPNwK/upM/JtKvJsKviVMPaHLvaGLvJpKvR8LPaKLvqkMvuqM/aFLvR4LPuoM/iTMPWDLfiRMPmYMXS0ngkAAALoSURBVHja7drnctpAFIbhFUISSKJ3MKYa0+y4xTW9937/15JkJhlTjhrSrHRmvuf/as6L0YLFCgEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMBJ6njenqspzgnPrsrGX9Zpi2tCrmnc6+dYNthVY5WpMmxQLWPdMsOuYVwzNj3ei2t3mQwaV43BJPDCS2NbJ5aEeuX/+9qcjQOtfFIkIkrvY2g4MVcmOBsFWbowKO/kNyj62gRpJcDaPBlxLr1B0zdG0C/8LzbJiJrshuvy1gzlA9+rD8mIkuyIJjFE3/dqnYwoSm7IUEPoD/wut8iIguSIDjlFxe/yfXL5vuSI21BTZLLhXoOILMO8Hxwa/L8bI0LfmUdhGowb2ZvT0e57pFNDgB06IlVyjmmIBl2T/nl9Rw6SD9GgSG/Q0uQkaW3XhmovKQ3eFQ4N2Uo9OQ1eFZsNerf7vP+rO4rhmY1Lg3vFVoP8+8BXg1sFnwbnCk4NThW8GuiKBDdkVVtTNFvNelVsNqTbyWnIOM2oeTRoyWvwmpJHg/ucXBrcJuXT4DwrpwZi2vy0VCx8YtXg/D2bU4OfiuQ3eFfE2KD4bfCqiLNB993gXsGlwa2CT4NzBacGIVQ6YsipQdh0xEdODUKjIxrSp88onZ8zbbFLg1DoiFO5BXvDGv2My9/JhUT8JUZTI0yDaNHLBzIbvqTDNYhUiVw/kdjQ1kM2CHFDPjKW+KzyRTF0g/ga9w9y+fANQpxvX8CU+Ny7FUWDeF3Y+g3lROIf4k0UDX9eCyvO531PyYhHga9zvPZJU5b73Y/eXj8Hv9D48n6HaF5LbcjRt8TZTtda5M1DfXnbkX1C0SHCFKzQB5Fe8op4GNGNHavvZESbVwT5r6W1xyuCPBY3Y9YgDqzknH/e3YfNzzuL30l0IebrZ5kKtuDIXt1n868ET6kf3/49tLvrCcZyF8Pu215dAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcPIbNrBhOaBXucoAAAAASUVORK5CYII=">
<link rel="import" href="analytics.html">
<script src="web-animations-js/web-animations-next-lite.min.js"></script>
<script src="webcomponentsjs/webcomponents-lite.js"></script>
<script jscomp-minify src="webcomponentsjs/webcomponents-lite.js"></script>
<!-- HACK: Polymer2 (not 3) makes use of `super` in static methods which gets

Contributor Author:

This is an important change.

Contributor:

Ack; thanks for calling this out. Also related:
Polymer/polymer@cc7702b

@stephanwlee (Contributor, Author):

Will manually flip the CLA after LGTM.

@wchargin (Contributor) left a comment:

I skimmed through `git show -w 1b986377`, and the change that you called
out looked to be the only particularly noteworthy one. Let me know if
there’s anything else that you wanted to call attention to.

Contributor:

Is the new jscomp-minify directive semantic or just nice to have?

Contributor Author:

Not a nice-to-have. I actually looked into this a little more carefully
(specifically, I followed the difference between the two code paths here):

    if (node.attr("src").endsWith(".min.js")
        || getAttrTransitive(node, "jscomp-nocompile").isPresent()
        || wantsMinify) {
      if (wantsMinify) {
        script = minify(path, script);
      }
      Node newScript =
          new Element(Tag.valueOf("script"), node.baseUri(), node.attributes())
              .appendChild(new DataNode(script, node.baseUri()))
              .removeAttr("src")
              .removeAttr("jscomp-minify")
              .removeAttr("jscomp-nocompile");
      if (firstCompiledScript != null) {
        firstCompiledScript.before(newScript);
        return replaceNode(node, new TextNode("", node.baseUri()));
      } else {
        return replaceNode(node, newScript);
      }
    } else {
      if (firstCompiledScript == null) {
        firstCompiledScript = node;
      }
      sourcesFromScriptTags.put(path, script);
      sourceTags.put(path, node);

and found out that webcomponents-lite's @nocompile annotation causes
JSCompiler to not include the source in the compiled JavaScript (both the
source and the webpath references are missing). I would like to learn why
JSCompiler omits the source completely with that annotation, but I decided
to stop further investigation for now.

@googlebot

A Googler has manually verified that the CLAs look good.

(Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.)

ℹ️ Googlers: Go here for more info.

@wchargin (Contributor):

Overriding CLAs because all new commits are already in master:

    $ git rev-parse HEAD origin/master
    1b9863777c0c54a72ac4f202fbd07a255c596ebf
    cde14ef9e2a56a508a7fdaf740915bdf07300f14
    $ git rev-list HEAD ^HEAD^1 ^origin/master
    1b9863777c0c54a72ac4f202fbd07a255c596ebf
    $ git show --no-patch --format=%s 1b9863777c0c54a72ac4f202fbd07a255c596ebf
    Merge remote-tracking branch 'upstream/master' into polymer2

@wchargin (Contributor):

Friendly reminder that you wanted to merge this via a merge commit. Feel
free to temporarily enable the “Allow merge commits” option here:
https://github.com/tensorflow/tensorboard/settings#merge-button-settings

@stephanwlee merged commit 2132121 into tensorflow:polymer2 on Jun 27, 2019.
@stephanwlee deleted the poly2sync branch on June 27, 2019 at 19:00.