This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-1121] Example to demonstrate the inference workflow using RNN #13680

Merged
merged 35 commits into apache:master on Feb 7, 2019

Conversation

leleamol (Contributor) commented Dec 18, 2018

Description

Example to demonstrate the inference workflow using RNN and C++ APIs

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • [y] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • [y] Changes are complete (i.e. I finished coding on this PR)
  • [y] All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • [y] Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • [y] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

The example demonstrates inference workflow using pre-trained RNN network and C++ APIs.

@Roshrini (Member)

@mxnet-label-bot Add [Example, pr-awaiting-review]

@marcoabreu marcoabreu added Example pr-awaiting-review PR is waiting for code review labels Dec 19, 2018
Roshrini (Member) commented Jan 2, 2019

@nswamy @eric-haibin-lin For review

@lupesko (Contributor) left a comment

Left a bunch of comments.
Beyond them:

  • I suggest you add a paragraph to the readme explaining just a bit about the model, at a high level: what is the input for inference, what is the output, and how it was trained.
  • Write down an example of the model's output on a given input as part of the readme, as a simple way to explain what the model does and what the readers should expect.

lupesko (Contributor) commented Jan 8, 2019

@leleamol - I left a few comments. On the PR description, I suggest you remove the bullets/items that are not relevant for this PR (under Essentials and Changes).

leleamol (Contributor, Author) commented Jan 9, 2019

@lupesko
Thanks for the review comments. I have updated the code as per your comments.

Also, I have edited the README files with some details about the model, as well as instructions to build the cpp-package before building the examples.

leleamol (Contributor, Author) commented Jan 9, 2019

@sandeep-krishnamurthy

@sandeep-krishnamurthy (Contributor) left a comment

Thanks for working on this; a few high-level comments for now.

Are we adding this example to the nightly tests, so that it doesn't end up broken and unnoticed in the future?


The example uses a pre-trained RNN model that is trained with the dataset containing speeches given by Obama.
The model consists of :
- Embedding Layer with the size of embedding to be 650
Contributor:

A simple image will be helpful

Contributor Author:

I can get the pdf version of the model (generated using the visualization API). I need some suggestions from @aaronmarkham on how to embed it in the README.

@@ -39,3 +39,48 @@ Alternatively, The script [unit_test_inception_inference.sh](<https://github.com
```
./unit_test_inception_inference.sh
```

### [simple_rnn.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/simple_rnn.cpp>)
This example demonstrates sequence prediction workflow with pre-trained RNN model using MXNet C++ API. The purpose of this example is to demonstrate how a pre-trained RNN model can be loaded and used to generate an output sequence using C++ API.
Contributor:

This could probably be simplified to make it easier to read and follow.
@aaronmarkham - Can you please help us here with the documentation? Thanks.


The example will output the sequence of 35 words as follows:
```
[waters elected Amendment Amendment Amendment Amendment retirement maximize maximize maximize acr citi sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio sophisticatio ]
Contributor:

Why is the output bad? Can we give example of a well trained model?

Contributor Author:

I tried to get better output by changing the model hyperparameters but couldn't. It would also require a good amount of input data processing. All these efforts would require dedicated time and are out of scope for this example, which aims to demonstrate loading the model and running the forward pass. Improving the model would be a separate task.

Contributor Author:

I am working on implementing the RNN model using C++ API. I can work on improving the accuracy of that model and use it in this example later.

throw std::runtime_error("Model parameters does not exist");
}
LG << "Loading the model parameters from " << model_parameters_file << std::endl;
std::map<std::string, NDArray> paramters;
Contributor:

nit: spell check

Contributor Author:

Done.

cpp-package/example/inference/simple_rnn.cpp Outdated Show resolved Hide resolved

The model files can be found here.
- [obama-speaks-symbol.json](<https://s3.amazonaws.com/mxnet-cpp/RNN_model/obama-speaks-symbol.json>)
- [obama-speaks-0100.params](<https://s3.amazonaws.com/mxnet-cpp/RNN_model/obama-speaks-0100.params>)
Contributor:

Why the mxnet-cpp bucket and not the mxnet pretrained-models bucket?

Contributor Author:

The mxnet-cpp bucket is not public by default, but its contents are made publicly readable. This is similar to the mxnet-scala bucket used for the Scala examples.

leleamol (Contributor, Author)

@sandeep-krishnamurthy as per the comment, I have added these examples to the ci_test.sh file that runs in the nightly builds.



int main(int argc, char** argv) {
std::string model_file_json = "./obama-speaks-symbol.json";
Contributor:

Should the input file paths be hardcoded here? Can they be passed as parameters to main via argv?

Contributor:

My bad; I saw later that these are the file names used to download from the S3 bucket.

@IvyBazan (Contributor) left a comment

Please see suggested improvements for clarity

@Isa-rentacs

@leleamol
Regarding variable length inputs, isn't it possible to update args_map["data0"] depending on the length of the input sentence?

leleamol (Contributor, Author)

@szha @Isa-rentacs @ddavydenko @lupesko @sandeep-krishnamurthy

I have updated the example that can handle the variable length input. The readme and unit test files are also updated accordingly.

@IvyBazan (Contributor)

LGTM!

Executor* executor = net.SimpleBind(global_ctx, args_map, std::map<std::string, NDArray>(),
std::map<std::string, OpReqType>(), aux_map);
executor_buckets[bucket] = executor;
}
Member:

This won't share the memory across executors, but will allocate excessive space from which only 1 / bucket_keys.Size() is used at any time.

Contributor Author:

@szha ,

I updated the example to create shared executors.

Contributor:

Do you have to make sure that the master executor is created with the largest bucket size? If not, wouldn't the other executors need to allocate additional memory since the master executor only allocated enough for the smaller shape?

It looks like you are looping in whatever order bucket_keys is in, starting from the smallest bucket size and going to the largest.

(@szha -- you might know better than me. I know in the executor Reshape() call this is true. I would assume it must be true in the BindEX call that is being made here, but not 100% sure)

Contributor Author:

The shared executors that I am creating are for keys from smallest to largest.
However, I noticed that the results do not change and the example does not crash even if I reverse the order.
I single-stepped through the code; GraphExecutor::InitDataEntryMemory() uses the shared memory from the given executor.

Contributor:

@leleamol -- the question I had is not whether it will affect correctness or cause your program to crash. I am wondering whether the current order ends up allocating more memory than you need.

I don't know for sure, but I suspect if you allocate the largest executor first, then all the rest of the executors don't have to allocate extra memory and they just share memory from the largest executor.

However, I would assume that if the smallest executor is the "master" executor, then all other executors would have to allocate extra memory because the master executor did not allocate enough to satisfy any of the other executors.

Can you re-run your test with the order changed, and look closely at the total GPU memory allocated?

Contributor Author:

@stephenrawls
I tried running it on CPU and I see an improvement in memory consumption when the order is reversed, that is, creating the executor with the largest bucket first.
I have updated the PR with that change.

Contributor:

Great, thanks for confirming.

Contributor:

Just looked at your fix though.

Instead of changing the order of your default array, you should probably just have a check to see what the max element of your array is, and allocate that first.

Otherwise you have a hidden assumption that the bucket sizes should be in a specific order, and it is easy for users not to realize that.

Contributor Author:

@stephenrawls
I agree; that would imply the order is important.
Fixed the code in the latest update.

Contributor:

Awesome, it looks ready to go to me. Thanks. 👍

@Isa-rentacs left a comment

Looks good to me, again thank you for working on this 👍

vdantu (Contributor) commented Jan 31, 2019

Talked to @leleamol . This PR Fixes #12734

@stephenrawls (Contributor)

Looks good to me, thanks for the example. I made a few small comments in the code about sharing executor memory and extracting model output.

It would probably be nice to showcase batching too, but I know that complicates the current example code and bucketing executor, so it's not really a requirement. Thanks again for providing an example of doing inference with the C++ API.

* If the next larger bucket does not exist, function looks for the previous bucket key
* that is lesser than given num_words.
*/
int Predictor::GetClosestBucketKey(int num_words) {
Member:

it seems that you can simply use lower_bound and return the largest key if not found.

Contributor Author:

If we use lower_bound, we will always find a bucket that is smaller than the input word length (unless the input word length is smaller than the smallest bucket size). In that case, we would trim the input line and might lose information. Therefore, I chose to find the next larger bucket so that we can consider all the words in the input sentence.

Member:

If we use lower_bound, we will always find a bucket that is smaller than the input word length

I'm not sure that is true. From cppreference on std::map::lower_bound:

Returns an iterator pointing to the first element that is not less than key.

In your case the key is input word length.

Contributor Author:

@szha I agree; with lower_bound we will get a matching or a smaller bucket key. My concern is that we will always find such a bucket and will trim the input sequence, losing some information in the process. Would that be acceptable?

Member:

Unless I read the documentation wrong, it won't get you a smaller key. lower_bound returns the first element (i.e. bucket key) that's no less than (i.e. larger than or equal to) the key (i.e. input sequence length).

Member:

For input sequence that's larger than the largest key, clipping is necessary.

Contributor Author:

@szha Fixed the logic in the latest update. Getting the lower bound is sufficient and accurate.
Thanks!

@szha szha dismissed their stale review February 6, 2019 18:54

The concern around sharing memory across executors is addressed.

@sandeep-krishnamurthy (Contributor)

Thanks Amol and all reviewers for this contribution.
Merging the changes as all reviewer comments are addressed and tests have passed.

@sandeep-krishnamurthy sandeep-krishnamurthy merged commit febbdd4 into apache:master Feb 7, 2019
stephenrawls pushed a commit to stephenrawls/incubator-mxnet that referenced this pull request Feb 16, 2019
…pache#13680)

* [MXNET-1121] Example to demonstrate the inference workflow using RNN

* Addressed the review comments. Updated the ReadMe files.

* Removed the unnecessary creation of NDArray

* Added the unit tests to nightly tests to catch the failure.

* Updated the makefiles and unit tests so that the examples are built and tested in nightly

* Added the visual representation of the model and fixed the CI failure.

* Added the missing pdf file.

* Fixing the broken ci_test.sh

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/README.md

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Update cpp-package/example/inference/simple_rnn.cpp

Co-Authored-By: leleamol <[email protected]>

* Applying unresolved changes to README file.

* Fixing the CI build failure.

* Updated the RNN example from sequence generation to sentiment analysis

* Updated the readme files. Updated the example to use trained model and updated the unit test.

* Addressed the review comment to increase the default sequence length. Added the examples with inputs of various lengths.

* Updated the example to handle variable length input. Updated the readme and unit test files accordingly.

* Updated the example to share the memory between executors by creating shared executors.

* Updated the creation of executors from largest to smallest bucket key

* Creating the executor for the highest bucket key.

* Updated the unit test to check for the results in a range and modified the function name to be consistent with others.

* Fixed the logic to find the right bucket.
vdantu pushed a commit to vdantu/incubator-mxnet that referenced this pull request Mar 31, 2019
leleamol (Contributor, Author)

@mxnet-label-bot remove [pr-awaiting-review]

@marcoabreu marcoabreu removed the pr-awaiting-review PR is waiting for code review label Jun 10, 2019
@leleamol leleamol deleted the rnn-inference branch June 10, 2019 18:13
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019