
sync #1

Merged: 5 commits merged into qqfish:master on Jan 17, 2019

Conversation

@qqfish (Owner) commented Jan 17, 2019

  • Motivation for features / changes

  • Technical description of changes

  • Screenshots of UI changes

  • Detailed steps to verify changes work correctly (as executed by you)

  • Alternate designs / implementations considered

wchargin and others added 5 commits on January 17, 2019 at 10:03:
Summary:
Testing on fresh Trusty VMs on GCP suggests that this may suffice to
stop the strange errors wherein Bazel invokes GCC with invalid flags,
though it should be noted that we do not actually understand the root
cause or the mechanism of failure.

Test Plan:
That this build should pass on Travis does not actually indicate that
this commit fixes the underlying problem, because the build will use
a new Bazel cache due to the version change. So: merge now, and hope
that this fixes the problem.

wchargin-branch: travis-bazel-0.21.0

Summary:
This reverts part of d285c7d because
the upstream package has been fixed.

Test Plan:
Running `from tensorflow.compat import v1` fails in the 20190114
virtualenv but works in the 20190115 virtualenv. Building the Pip
package with `bazel run //tensorboard/pip_package:build_pip_package`
also works.
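A minimal check mirroring that test plan (illustrative only; it assumes a tf-nightly virtualenv is active):

```py
# Illustrative check for the test plan above: verify that the compat.v1
# endpoint imports again in the current virtualenv.
try:
    from tensorflow.compat import v1  # noqa: F401
    print("tensorflow.compat.v1 imports cleanly")
except ImportError as err:
    print("import still broken:", err)
```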

wchargin-branch: unpin-tf-nightly

* Add flags to TBContext (see the sketch after this list)

* Add flags to TBContext in application

* Add flags to tbcontext

* gitignore changes

* fix link

* spacing
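As a rough sketch of what the new field enables (hypothetical usage, not code from this PR; the exact constructor keywords are assumptions):

```py
# Hypothetical sketch: pass the parsed command-line flags into the
# TBContext so that plugins can consult them via `context.flags`.
from tensorboard.plugins import base_plugin

def make_context(flags, multiplexer, logdir):
    # `flags` is assumed to be the argparse-style namespace produced by
    # the TensorBoard CLI; a plugin could then read, for example,
    # `context.flags.logdir` inside its constructor.
    return base_plugin.TBContext(
        flags=flags,
        multiplexer=multiplexer,
        logdir=logdir,
    )
```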

The d3 formatter now supports trimming trailing zeros with "~". For
instance, the number 0.1 used to be rendered as "0.100" but is now
rendered as "0.1". This does not affect numbers like "0.1234", which
both formatters render as "0.123".

Summary:
We’ve migrated all users inside Google to use the explicitly versioned
endpoints (either `summary:summary_v1` or `summary:summary_v2`), so we
can now enforce that going forward.

Any existing users should migrate to `//tensorboard/summary:summary_v1`,
which is API-compatible and stable.

Test Plan:
That `bazel build //tensorboard` and `bazel query 'deps(//...)'` still
work suffices.

wchargin-branch: restrict-summary-visibility

@qqfish merged commit a4e2d47 into qqfish:master on Jan 17, 2019.

qqfish pushed a commit that referenced this pull request on Feb 26, 2019:
Summary:
This commit adds a module `tensorboard.notebook` that can be loaded as a
Jupyter extension, providing the `%tensorboard` cell magic to launch and
display TensorBoard instances within a notebook.
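In a notebook, basic usage looks like this (condensed from the test plan below):

```py
# Load the extension provided by this PR (one cell)...
%load_ext tensorboard.notebook

# ...then launch (or reuse) a TensorBoard instance inline (another cell).
%tensorboard --logdir ~/tb/mnist --port 6006
```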

This code doesn’t have to be perfect right now; one main goal for this
PR is to sync the structure of this code into google3, where it’s easier
to work on and test Colab integration.

The UI looks like this:
![Screenshot of a `%tensorboard` command in the Jupyter UI.][1]

[1]: https://user-images.githubusercontent.com/4317806/52386325-7ae7eb80-2a3a-11e9-93ab-fc9a689de51c.png

Test Plan:
Currently, this requires a bit of path hackery to get working:

 1. Run `bazel build //tensorboard` to build the main binary.
 2. Run `which tensorboard` to find the `tensorboard` binary provided by
    your virtualenv, and hit it with `chmod -x` to make it not
    executable.
 3. Run `export PATH="$(readlink -e ./bazel-bin/tensorboard):$PATH"` to
    add it to your path, and confirm that `which tensorboard` points to
    the built version rather than the version in your virtualenv.
 4. Run `jupyter notebook` to start the Jupyter server.
 5. Create a notebook and execute `%load_ext tensorboard.notebook` to
    load the extension; henceforth, `%tensorboard` should work until you
    restart the Jupyter kernel.

(Step (2) is necessary because the `jupyter notebook` runtime adds the
virtualenv _back_ to the front of your `PATH`. An alternative is to
patch `os.environ["PATH"]` from within the Jupyter notebook.)
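That alternative would look roughly like this, run in a notebook cell before loading the extension (the bazel-bin path shown is illustrative):

```py
# Prepend the directory containing the Bazel-built `tensorboard` binary
# to PATH from inside the notebook, instead of making the virtualenv's
# copy non-executable.
import os

os.environ["PATH"] = (
    os.path.abspath("./bazel-bin/tensorboard") + os.pathsep + os.environ["PATH"]
)
```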

After setting it up as above, the following makes a good test plan
(assuming that you have no other TensorBoard instances running):

  - `%tensorboard --logdir ~/tb/mnist --port 6006` (should launch)
  - `%tensorboard --logdir ~/tb/mnist --port 6006` (should reuse)
  - `%tensorboard --logdir ~/tb/images_demo --port 6006` (should fail)
  - `%tensorboard --logdir ~/tb/images_demo --port 6007` (should launch)
  - `%tensorboard --logdir ~/tb/mnist --port 6006` (should reuse #1)
  - multiple `%tensorboard`s in a single cell:

    ```py
    for i in ("images_demo", "audio_demo"):
        %tensorboard --logdir ~/tb/$i --port 0
    ```
  - `from tensorboard import notebook`
  - `notebook.list()` (should list four instances)
  - `notebook.display(port=6006)`
  - `notebook.display(height=800)`

Finally, if you skip (or revert) step (2) from the setup instructions,
you can see the timeout behavior, because we’ll invoke the `tensorboard`
provided by PyPI, which does not yet know how to write TensorboardInfo.

wchargin-branch: notebook-jupyter