forked from tensorflow/tensorboard
sync #3
Merged
Conversation
Summary: Word on the street is that this is a common typo. Test Plan: None. wchargin-branch: readme-typo-fix
* Make `tensorboard.compat.{tf,tf2}` lazily loaded
* Add `summary_dep_test.py` to check for no TF dep from `tb.summary.v2`
* CR: rewrite `lazy_load()` to use a closure and guaranteed single initialization
* Add a couple of extra checks to `SummaryV2DepTest`
* CR: avoid leaking the `_importlib` symbol
* CR: clarify the memoization contract, avoid deadlock, nicer repr
* CR: no more `hasattr`; typo fix
Make development of the Embedding Projector easier by adding a standalone mode with several preset datasets hosted on GCS, along with brief documentation.
* Add flags to TBContext
* Add flags to TBContext in application
* Add flags to tbcontext
* gitignore changes
* Hide images, fix bug
* Doc updates
* Update
* Typo
* Merge thing
Summary: This commit adds a test case for [behavior implemented in #1781][1]. [1]: #1781 (comment) Test Plan: `bazel test //tensorboard:lazy_test` passes at HEAD, but fails after checking out `lazy.py` from 976064d. wchargin-branch: lazy-composition
Summary: This tests behavior [described in #1781][2], improving the error message along the way. [2]: #1781 (comment) Test Plan: `bazel test //tensorboard:lazy_test` passes at HEAD, but changing the implementation of `_memoize` to use `Lock` instead of `RLock` causes a deadlock (and so the test times out and fails eventually). wchargin-branch: lazy-cycles
Summary: It was [noted in #1781][3] that the `__repr__` of `LazyModule`s deserves a modicum of special attention. This commit tests that, and improves the `__repr__` for already-loaded lazy modules so that it’s clear that they are still going through the `LazyModule` proxy. (Should there be a bug in the proxy interface—say, someone expects `__setattr__` to also pass through to the underlying module—this would make it much easier to track down.) [3]: #1781 (review) Test Plan: `bazel test //tensorboard:lazy_test` passes. wchargin-branch: lazy-repr
Summary: Currently, TensorBoard does not change the default signal dispositions for any signal handlers, so if killed by SIGTERM or SIGQUIT it will exit without running its `atexit` handlers, such as the one registered by the DB import mode. As of this commit, we handle SIGTERM by exiting gracefully. We leave SIGQUIT at the default disposition (temporary files will not be cleaned up), in accordance with the GNU libc guidelines: <https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html#index-SIGQUIT> The implementation is not perfect. Ideally, we would perform our graceful cleanup and then kill ourselves with the same signal to properly inform our parent of the source of the exit: for details, see <https://www.cons.org/cracauer/sigint.html>. But `atexit` doesn’t provide a function like “please run all registered handlers now but don’t actually quit”; we might be able to implement this in Python 2.7 using `sys.exitfunc`, but that’s deprecated in Python 2.7 and removed in Python 3. If we want to do this right, we could implement our own version of `atexit` (which would not be hard: the module is tiny). For now, I’m comfortable with submitting this mostly-correct patch. Supersedes part of #1795. Test Plan: Run `bazel build //tensorboard` and add the built binary to your PATH. Then, run `tensorboard --logdir whatever && echo after`, wait for it to print the “serving” message, and send SIGTERM via `kill(1)`. Note that TensorBoard prints a message and exits cleanly, and that “`after`” is printed to the console. 
Patch `program.py` to add a pre-existing signal handler and `atexit` cleanup at the top of `main`:

```diff
diff --git a/tensorboard/program.py b/tensorboard/program.py
index da59b4d1..c07cb855 100644
--- a/tensorboard/program.py
+++ b/tensorboard/program.py
@@ -201,6 +201,14 @@ class TensorBoard(object):
       :rtype: int
     """
+    def fake_handler(signum, frame):
+      print("Handling some signals...")
+    assert signal.signal(signal.SIGTERM, fake_handler) == signal.SIG_DFL
+    def fake_cleanup():
+      print("Cleaning everything up...")
+    import atexit
+    atexit.register(fake_cleanup)
+
     self._install_signal_handler(signal.SIGTERM, "SIGTERM")
     if self.flags.inspect:
       logger.info('Not bringing up TensorBoard, but inspecting event files.')
```

Then, re-run the above steps, and note that the signal handler and `atexit` handler are both executed prior to cleanup:

```
$ tensorboard --logdir whatever && echo after
TensorBoard 1.13.0a0 at <hostname> (Press CTRL+C to quit)
TensorBoard caught SIGTERM; exiting...
Handling some signals...
Cleaning everything up...
after
```

Ideally, `after` should _not_ be printed; that it is is a consequence of the fact that we don’t properly propagate the WIFSIGNALED flag as described above. wchargin-branch: handle-signals
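A minimal sketch of the graceful-SIGTERM approach described above: install a handler that chains to any pre-existing handler and then calls `sys.exit`, which (unlike the default disposition) runs registered `atexit` hooks. This is hypothetical, simplified code, not the actual `_install_signal_handler` in `program.py`:

```python
import atexit
import signal
import sys

def install_signal_handler(signum, signal_name):
    """Exit gracefully on `signum`, chaining to any pre-existing handler
    so that registered `atexit` hooks still run."""
    def handler(handled_signum, frame):
        sys.stderr.write("caught %s; exiting...\n" % signal_name)
        if callable(old_handler):
            # SIG_DFL / SIG_IGN are not callable, so only chain to a
            # real Python-level handler.
            old_handler(handled_signum, frame)
        # sys.exit raises SystemExit, which unwinds normally and runs
        # atexit handlers; the default SIGTERM disposition skips them.
        sys.exit(0)

    old_handler = signal.signal(signum, handler)

install_signal_handler(signal.SIGTERM, "SIGTERM")
atexit.register(lambda: sys.stderr.write("cleaning up temp files...\n"))
```

As the commit notes, this still exits with status 0 rather than re-killing itself with the original signal, so a parent shell cannot tell the process was signaled.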
Summary: This type describes information about a running TensorBoard instance. In a future commit, TensorBoard’s main function will be outfitted to write an “info file” containing this structure to a well-known location, which other processes can inspect to find all running TensorBoard instances, the ports on which they’re listening, their arguments and context, etc. The wire format is JSON as opposed to pbtxt for two reasons: it’s more open-source friendly (tools like `jq` have no publicly available protobuf equivalents), and the Python protobuf APIs are so frustrating that it doesn’t feel right to force callers to use them, even if this means that we have to implement our own validation and serialization. Supersedes part of #1795. Test Plan: Unit tests included; run `bazel test //tensorboard:manager_test`. wchargin-branch: tensorboardinfo
Summary: This function computes the (opaque) value of the `cache_key` field of `TensorboardInfo` objects, which is used to determine whether it is safe to reuse a TensorBoard instance. See docs for more details.

Test Plan: Unit tests included; run `bazel test //tensorboard:manager_test`. For a sanity check at the REPL:

```
>>> from tensorboard import manager
>>> import base64, json, os, pprint
>>> ck = manager.cache_key(
...     working_directory=os.getcwd(),
...     arguments=["--logdir", "foo"],
...     configure_kwargs={},
... )
>>> ck
'eyJhcmd1bWVudHMiOlsiLS1sb2dkaXIiLCJmb28iXSwiY29uZmlndXJlX2t3YXJncyI6e30sIndvcmtpbmdfZGlyZWN0b3J5IjoiL3Vzci9sb2NhbC9nb29nbGUvaG9tZS93Y2hhcmdpbi9naXQvdGVuc29yYm9hcmQifQ=='
>>> pprint.pprint(json.loads(base64.b64decode(ck)))
{u'arguments': [u'--logdir', u'foo'],
 u'configure_kwargs': {},
 u'working_directory': u'/usr/local/google/home/wchargin/git/tensorboard'}
```

Supersedes part of #1795. wchargin-branch: manager-cache-key
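The REPL transcript above suggests the shape of `cache_key`: canonical JSON, base64-encoded. Here is a hedged sketch under that assumption; the real implementation may differ in validation and field handling, and only equality of keys (not their decoded contents) is meant to matter to callers:

```python
import base64
import json

def cache_key(working_directory, arguments, configure_kwargs):
    """Return an opaque, deterministic key for instance-reuse checks."""
    if isinstance(arguments, str):
        raise TypeError("arguments should be a list of strings, not a string")
    payload = {
        "working_directory": working_directory,
        "arguments": list(arguments),
        "configure_kwargs": configure_kwargs,
    }
    # sort_keys + fixed separators make the JSON canonical, so equal
    # inputs always serialize to byte-identical keys.
    raw = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return base64.b64encode(raw.encode("utf-8")).decode("ascii")
```

Because the serialization is canonical, two processes computing keys for the same working directory and arguments will agree without coordinating.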
Summary: The previous implementation always returned `six.binary_type`; it now returns `str` in both Python 2 and Python 3. Test Plan: Regression test added; `bazel test //tensorboard:manager_test` passes in Python 2 and 3, and fails on Python 3 if the code under test is reverted. wchargin-branch: cache-key-str
Summary: Having an (empty) `__init__.py` file enables us to import modules under `tensorboard.util` at a Python 2 REPL. (It works either way in Python 3, because `tensorboard.util` is interpreted as a namespace package.) Test Plan: Running `python -c 'from tensorboard.util import tb_logging'` after this change works in both Python 2 and Python 3, but before this change works in Python 3 but fails in Python 2 (after removing any cached `.pyc` bytecode file). wchargin-branch: util-init-py
Summary: The Google import process complains about common typos; this makes it happy. :-) Test Plan: None. wchargin-branch: wit-example-typos
Summary: Resolves root cause of Google-internal bug <http://b/117792912>, which pointed out that `argmax(y_, 1)` should just be `y_`.

Test Plan:

```shell
$ url='https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py' &&
> diff -u \
>     <(curl -L --silent "${url}" |
>         sed -n -e 's/^ //' -e '/def variable_summaries/,/global_variables_initializer/p') \
>     <(sed -n -e '/def variable_summaries/,/global_variables_initializer/p' \
>         ./docs/summaries.md) &&
> echo $?
0
```

wchargin-branch: update-summary-docs-code
Summary: This commit implements functions `write_info_file`, `remove_info_file`, and `get_all` on the `tensorboard.manager` module. See docs for details. Supersedes part of #1795. Test Plan: Integration tests included; run `bazel test //tensorboard:manager_test`. wchargin-branch: tensorboardinfo-io
Summary: This commit wires up `TensorBoard.main` to the `TensorboardInfo` I/O introduced in #1806.

Test Plan: Run `bazel run //tensorboard -- --logdir ./whatever/`, then verify that an info file has been created…

```
$ ls /tmp/.tensorboard-info/
pid-85532.info
```

…and that the file is visible to the Python APIs:

```
$ python
>>> from tensorboard import manager
>>> infos = manager.get_all()
>>> len(infos)
1
>>> infos[0].pid
85532
>>> infos[0].port
6006
```

Then, SIGTERM the server and verify that this is reflected from Python…

```
>>> import os
>>> os.kill(infos[0].pid, 15)
>>> manager.get_all()
[]
```

…and that the underlying file is gone:

```
$ ls /tmp/.tensorboard-info/ | wc -l
0
```

wchargin-branch: write-tensorboardinfo
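The info-file protocol can be sketched as follows. The directory layout (`$TMPDIR/.tensorboard-info/pid-<pid>.info`, one JSON file per process) matches the transcript above, but the helper bodies are illustrative; the real `tensorboard.manager` module also validates fields and handles entries for dead processes:

```python
import errno
import json
import os
import tempfile

INFO_DIR = os.path.join(tempfile.gettempdir(), ".tensorboard-info")

def _info_path(pid):
    return os.path.join(INFO_DIR, "pid-%d.info" % pid)

def write_info_file(info):
    # One file per process, named by pid, in a well-known directory.
    os.makedirs(INFO_DIR, exist_ok=True)  # Python 3; use try/except on 2
    with open(_info_path(info["pid"]), "w") as f:
        json.dump(info, f, sort_keys=True)

def remove_info_file(pid):
    try:
        os.unlink(_info_path(pid))
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise  # an absent file is fine: already cleaned up

def get_all():
    # Best-effort scan: skip entries that fail to parse.
    if not os.path.isdir(INFO_DIR):
        return []
    infos = []
    for name in os.listdir(INFO_DIR):
        try:
            with open(os.path.join(INFO_DIR, name)) as f:
                infos.append(json.load(f))
        except (IOError, ValueError):
            pass
    return infos
```

Using plain JSON files keeps the registry inspectable with ordinary shell tools (`ls`, `cat`, `jq`), which is one of the stated reasons for preferring JSON over pbtxt.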
Summary: This function starts a new TensorBoard process with the given arguments, or reuses an existing compatible process. It returns a `TensorboardInfo` object describing how to reach the resulting TensorBoard process (whether new or reused). See docs for more details.

Test Plan: End-to-end tests included. These appear to be lightly flaky: I ran `bazel test //tensorboard:manager_e2e_test --runs_per_test=100` six times on each of Python 2 and 3, and experienced three total failures on Python 2 and zero on Python 3. On my machine, the test takes 14.7±0.9s to run on Python 2, and 17.9±1.0s to run on Python 3.

To test manually, run `bazel build //tensorboard`, then add that binary to your path and head over to a Python REPL:

```
$ export PATH="$(readlink -e ./bazel-bin/tensorboard):$PATH"
$ python
>>> from tensorboard import manager
>>> r1 = manager.start(["--logdir", "~/tensorboard_data", "--port", "0"])
>>> type(r1)
<class 'tensorboard.manager.StartLaunched'>
>>> r2 = manager.start(["--logdir", "~/tensorboard_data", "--port", "0"])
>>> type(r2)
<class 'tensorboard.manager.StartReused'>
>>> r1.info == r2.info
True
>>> r1.info.port
39081
>>> import os
>>> os.system("curl --silent localhost:39081 | tail -c 64")
<tf-tensorboard use-hash brand="TensorBoard"></tf-tensorboard>
0
>>> manager.get_all() == [r1.info]
True
>>> os.kill(r1.info.pid, 15)
>>> manager.get_all() == []
True
```

wchargin-branch: manager-start
Summary: This commit adds a module `tensorboard.notebook` that can be loaded as a Jupyter extension, providing the `%tensorboard` cell magic to launch and display TensorBoard instances within a notebook. This code doesn’t have to be perfect right now; one main goal for this PR is to sync the structure of this code into google3, where it’s easier to work on and test Colab integration. The UI looks like this:

![Screenshot of a `%tensorboard` command in the Jupyter UI.][1]

[1]: https://user-images.githubusercontent.com/4317806/52386325-7ae7eb80-2a3a-11e9-93ab-fc9a689de51c.png

Test Plan: Currently, this requires a bit of path hackery to get working:

1. Run `bazel build //tensorboard` to build the main binary.
2. Run `which tensorboard` to find the `tensorboard` binary provided by your virtualenv, and hit it with `chmod -x` to make it not executable.
3. Run `export PATH="$(readlink -e ./bazel-bin/tensorboard):$PATH"` to add it to your path, and confirm that `which tensorboard` points to the built version rather than the version in your virtualenv.
4. Run `jupyter notebook` to start the Jupyter server.
5. Create a notebook and execute `%load_ext tensorboard.notebook` to load the extension; henceforth, `%tensorboard` should work until you restart the Jupyter kernel.

(Step (2) is necessary because the `jupyter notebook` runtime adds the virtualenv _back_ to the front of your `PATH`. An alternative is to patch `os.environ["PATH"]` from within the Jupyter notebook.)
After setting it up as above, the following makes a good test plan (assuming that you have no other TensorBoard instances running):

- `%tensorboard --logdir ~/tb/mnist --port 6006` (should launch)
- `%tensorboard --logdir ~/tb/mnist --port 6006` (should reuse)
- `%tensorboard --logdir ~/tb/images_demo --port 6006` (should fail)
- `%tensorboard --logdir ~/tb/images_demo --port 6007` (should launch)
- `%tensorboard --logdir ~/tb/mnist --port 6006` (should reuse #1)
- multiple `%tensorboard`s in a single cell:

  ```py
  for i in ("images_demo", "audio_demo"):
    %tensorboard --logdir ~/tb/$i --port 0
  ```

- `from tensorboard import notebook`
- `notebook.list()` (should list four instances)
- `notebook.display(port=6006)`
- `notebook.display(height=800)`

Finally, if you skip (or revert) step (2) from the setup instructions, you can see the timeout behavior, because we’ll invoke the `tensorboard` provided by PyPI, which does not yet know how to write TensorboardInfo. wchargin-branch: notebook-jupyter
Summary: Per @manivaradarajan’s request, for consistency with other uses of `TensorBoard` as an identifier. Generated with:

```
$ git grep --name-only -z TensorboardInfo |
>     xargs -0 sed -i -e 's/TensorboardInfo/TensorBoardInfo/g'
$ git add -u
```

Test Plan:

```
$ git grep TensorboardInfo | wc -l
0
```

wchargin-branch: tensorboardinfo-capitalization
Summary: Within Google, the `//tensorboard:tensorboard` target is at a different path in the build workspace, so tests with a data dependency on it cannot find it at `./tensorboard/tensorboard`. However, the binary should always be in the same place relative to the test binary, so we now use that instead. Test Plan: Running `bazel test //tensorboard:manager_e2e_test` still works in Python 2 and 3. wchargin-branch: relative-data-path
Summary: This can take longer on Google-internal infrastructure in some cases. (Note that this commit will introduce a test size warning when the test takes less than a minute, which is common in the open-source version.) Test Plan: None. wchargin-branch: manager-e2e-moderate
PR #1750 added a new required prop for tf-graph-loader, which broke all of its use cases: e.g., the debugger plugin, tf-graph-app, and all the custom tf-graph implementations. Instead of requiring callers to explicitly define the provider, it seems appropriate to supply a sane default that preserves the behavior we have had for many minor versions. Also made sure observers have an explicit dependency on the prop.
Summary: This hooks up the `notebook` module to Colab. The Colab runtime sandboxes the contents of the output frame, but provides a service worker tunnel so that the frame may communicate with the underlying VM. We take advantage of this by loading TensorBoard through a JavaScript shell that changes the document `baseURI` to proxy requests through this worker.

Test Plan: Add `//tensorboard:notebook` to the deps of `build_pip_package`, then build the Pip package and manually upload it onto a public Colab instance. Then run:

```
!pip uninstall -q -y tensorflow tf-nightly-2.0-preview
!pip install -q tf-nightly-2.0-preview
!pip uninstall -q -y tensorboard
!pip install -q ./tensorboard-1.13.0a0-py3-none-any.whl
%load_ext tensorboard.notebook
%tensorboard --logdir ./tensorboard_data/mnist
```

and watch TensorBoard come to life as desired. Googlers, see <http://cl/233129221> to test this against the internal version of Colab. wchargin-branch: notebook-colab-integration
Summary: This should have been in #2174. Test Plan: That unit tests pass is sufficient. We don’t have demos that exercise this endpoint. (The implementation is tested in `summary_v2_test`.) wchargin-branch: hparams-config-pb-export
Summary: Follow-up to #2118, surfaced in a thread on tensorflow-testing: <https://groups.google.com/a/tensorflow.org/d/msg/testing/Dh_kAR8omg8/Y5rLELCSAgAJ> These notebooks work fine in Colab, but apparently break in Jupyter. Test Plan: Running `git grep 'load_ext.* "'` yielded two matches before this commit and zero matches after. wchargin-branch: load-ext-whitespaces
When a Python module imports TensorFlow, it needs to explicitly declare a dependency on TensorFlow in the BUILD file. This unblocks internal sync.
`.bzl` macros require a docstring; omitting one violates the linter.
Summary: Previously, `hparams.summary` was available via a “fake” dependency of `:hparams_plugin`. We replace that fake dependency with a “public API” target, as used by Beholder and the projector.

Test Plan: Cherry-pick #2188, then:

```
$ bazel run //tensorboard/pip_package:build_pip_package -- --no-smoke
$ PATH_TO_PY3_WHEEL=...  # last line of command output
$ cd "$(mktemp -d)"
$ virtualenv -q -p python3.5 ./ve
$ . ./ve/bin/activate
(ve) $ pip install -q tf-nightly-2.0-preview
(ve) $ pip uninstall -q -y tb-nightly
(ve) $ pip install "${PATH_TO_PY3_WHEEL}"
(ve) $ python
>>> from tensorboard.plugins.hparams import api
>>> from tensorboard.plugins.hparams import plugin_data_pb2
>>> pb = api.hparams_pb({"optimizer": "adam", "magic": True})
>>> content = pb.value[0].metadata.plugin_data.content
>>> plugin_data_pb2.HParamsPluginData.FromString(content)
session_start_info {
  hparams {
    key: "magic"
    value { bool_value: true }
  }
  hparams {
    key: "optimizer"
    value { string_value: "adam" }
  }
  group_name: "18497c833ee7276fd662fe139cf96f01d33ed8a6a5a4f7b0999b3b5db9f706a6"
  start_time_secs: 1556910144.4714012
}
```

Also verified that none of the `srcs` of `:hparams_plugin` directly import `hparams.summary`. wchargin-branch: hparams-api-in-pip
Initial pull request for the tf-graphics Mesh Visualization plugin. Note that part of the plugin’s client-side code comes from the tensorflow/graphics repo. The plugin is also missing a demo app, which will be added later.
This change enables the new Mesh plugin for visualizing point cloud data.
There were two problems:

1. `dataProvider` is not available until the component is attached
2. a missing dependency on tf-mesh-loader in mesh-dashboard

One minor feature improvement:

- The runs selector now appears below the “Point of view” section, and it now spans the entire view height.
* Created using Colaboratory
* Created using Colaboratory
* Upload images
* Update image links
* Resolve comments
* Use tf 2.0 preview instead
* Add prerequisites
* Update _book.yaml
* Update link
* Update overview
* Update trace viewer description
* Fix typo
Summary: Resolves a docs bug noted here: <#2110 (comment)> Test Plan: Run this notebook in Colab, and observe that the resulting TensorBoard instance shows both the static configuration (note human-readable display names) and per-trial values:  Then, note that `git grep 'import_event' '*.ipynb'` has no matches as of this commit, and other uses of `git grep import_event` are fallbacks only. wchargin-branch: import-event
Summary: This commit adds `hparams` and `hparams_pb`, to write `SessionStartInfo` summaries for users who do not use the Keras callback. As with `hparams_config{,_pb}`, this “conceptually” supports notf, but there is still a transitive dependency through `metadata`. In the tests, we opt to duplicate some infrastructure code rather than abstracting over it. Compared to (e.g.) `scalar/summary_test`, our test matrix has an extra dimension: in addition to “V1 vs. V2” and “op vs. pb”, we have “`hparams` vs. `hparams_config`”. I couldn’t find an abstraction whose code reuse outweighed its extra complexity. These tests caught a bug present in the existing `summary.py` code, wherein `bool` hparam values are always serialized as `int`s, because in Python `bool` is a subtype of `int` and so one of the conditionals was dead. The new code properly emits `bool` hparams, and so is not strictly “compatible” with the old code. Test Plan: Unit tests included. The existing demo uses the Keras callback and so does not exercise this code. wchargin-branch: hparams-writing
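The bool-serialization bug described above is a classic Python pitfall: `bool` is a subclass of `int`, so an `isinstance(value, int)` check placed before the `bool` check makes the `bool` branch dead code. A self-contained illustration (the serializer names here are hypothetical, not the hparams API):

```python
def serialize_hparam_buggy(value):
    # BUG: bool is a subtype of int, so True/False take this branch first
    if isinstance(value, int):
        return ("int_value", int(value))
    elif isinstance(value, bool):
        return ("bool_value", value)  # dead code: never reached
    return ("string_value", str(value))

def serialize_hparam_fixed(value):
    # Check bool before int (or use `type(value) is int`)
    if isinstance(value, bool):
        return ("bool_value", value)
    elif isinstance(value, int):
        return ("int_value", value)
    return ("string_value", str(value))
```

With the buggy ordering, `True` silently serializes as the integer `1`, which is exactly why the new code is not strictly wire-compatible with the old one.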
`from tensorboard.plugins.mesh import summary` should exist in the pip package.
The test was timing out in around 0.1% of runs with the small size.
Summary: Amends #2205 to be consistent with other plugins. Test Plan: Pip package smoke test still passes. wchargin-branch: mesh-library-target
Summary: When the `TF2_BEHAVIOR` environment variable is set, TensorFlow behaves as if `tf.compat.v1.enable_v2_behavior()` were invoked. In such a case, we should not run V1-only tests. We elect to test this by importing a private TensorFlow module rather than reading the environment variable directly. Both are undocumented, but the module import will give us a hard failure if it is changed, while the environment variable will silently change behavior. This fixes some Google-internal issues (see <http://cl/247294725>). It should be a no-op externally; we do not set this environment variable. Resolves #1996.

Test Plan: Count the number of skipped tests before and after this change:

```
tests="$(bazel query "
    tests(rdeps(
        //tensorboard/...,
        set($(git grep -l run_v._only))
        - //tensorboard/util:test_util.py
    ))
")" &&
bazel test ${tests} --test_output=all 2>&1 |
    sed -n -e 's/.*(skipped=\([0-9]\+\)).*/\1/p' |
    awk '{ c += $1 } END { print c }'
```

On TF 2.0, it prints `166` both before and after this change; on TF 1.x, it prints `21` both before and after this change. wchargin-branch: respect-tf2-behavior
Summary: Until now, the hparams tutorial notebook has directly manipulated protos and logged them with `experimental.write_raw_pb`. Now that the high-level APIs have merged, we can simplify the tutorial significantly. Test Plan: Uploading this notebook to Colab, resetting all runtimes, and pressing “Run All” shows that it behaves correctly. Staging to the internal dev site shows that the images and code render correctly. wchargin-branch: hparams-tutorial
A standalone demo application (as a Bazel target) will come soon, pending resolution of some dependency issues. Test Plan: Download the `.ipynb`, open it in https://colab.research.google.com, reset all runtimes, and run it from start to finish. Verify that an interactive mesh visualization appears in the last cell.
Summary: This changes the private `metadata` module to emit TensorBoard protos instead of TensorFlow protos, removing its dependency on TensorFlow. Its two clients are updated: we add a reinterpret-cast to one and remove the cast from the other. Test Plan: A new test target runs with `//tensorboard/compat:no_tensorflow`. Verified that this passes in a virtualenv that actually does not have TensorFlow installed, skipping only the expected six tests. Smoke test updated for notf builds; the new smoke test fails before this change. wchargin-branch: hparams-notf
Summary: The test as written may be flaky, according to @nfelt’s comment here: <#2188 (comment)> Test Plan: I couldn’t actually get the previous version of the test to fail (even with `--runs_per_test=1000`), but it still passes, so that’s nice. wchargin-branch: hparams-tests-explicit-flush
Summary: Per @lamberta via [the Google dev docs style guide][guide], we should use sentence case for titles and headers, and “prefer italics instead of bold for emphasis” because “bold style tends to conflict with the h3/h4/h5 styles that devsite uses when skimming the page”. This commit replaces strong emphasis with emphasis, quotations, or code spans, as appropriate. Strong emphasis is retained for description list items and a `**NOTE**:` specifier (changed to `Note:` for consistency with [the same guide][guide]). This commit is easier to review with `--word-diff`.

[guide]: https://developers.google.com/style/capitalization

Test Plan: Check that all remaining `**`-sequences are intended:

```
$ grep -F '**' docs/r2/tensorboard_profiling_keras.ipynb | cut -c -40
    "* **CPU**: CPU events are und
    "* **GPU**: GPU events are und
    "* **TensorFlow Runtime**: Run
    "**NOTE**: Please don't run mult
```

wchargin-branch: profile-unembolden
In order for WitWidget to live in a Jupyter notebook without interfering with other widgets that also use Polymer, we need to load it inside an iframe.
Simplify “inference address” mentions in the tool and code so they are not misleading.
Continuing work on CMLE mode for WitWidget, which has been rebranded to AI Platform by Cloud. Added support for xgboost models in AI Platform as well.
To support upcoming WIT integrations, made it possible to change the examples being shown by WitWidget through a method call.
Repro steps:

1. Select a run that has more than one tag
2. Select a tag with RunMetadata
3. Select a node
4. Select the default tag
5. Click around on nodes and observe JavaScript exceptions (JSEs)

We have nested dom-ifs to prevent `getTotalMicros` from being called. However, this guard is only effective while the DOM is not mounted, and that assumption no longer holds after the first render (if=true).