This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Updated / Deleted some examples #12968

Merged
3 commits merged on Nov 2, 2018
4 changes: 3 additions & 1 deletion example/multivariate_time_series/README.md
Original file line number Diff line number Diff line change
@@ -3,6 +3,8 @@
- This repo contains an MXNet implementation of [this](https://arxiv.org/pdf/1703.07015.pdf) state of the art time series forecasting model.
- You can find my blog post on the model [here](https://opringle.github.io/2018/01/05/deep_learning_multivariate_ts.html)

+ - A Gluon implementation is available [here](https://github.com/safrooze/LSTNet-Gluon)

![](./docs/model_architecture.png)

## Running the code
@@ -22,7 +24,7 @@

## Hyperparameters

- The default arguements in `lstnet.py` achieve equivolent performance to the published results. For other datasets, the following hyperparameters provide a good starting point:
+ The default arguements in `lstnet.py` achieve equivalent performance to the published results. For other datasets, the following hyperparameters provide a good starting point:

- q = {2^0, 2^1, ... , 2^9} (1 week is typical value)
- Convolutional num filters = {50, 100, 200}
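The hyperparameter ranges listed above lend themselves to a simple grid search. A minimal sketch of enumerating that grid, assuming Python; the names `q` and `num_filters` are illustrative, not `lstnet.py`'s actual flag names:

```python
from itertools import product

# Hypothetical search space built from the ranges suggested in the README.
search_space = {
    "q": [2 ** i for i in range(10)],   # window size: 2^0 .. 2^9
    "num_filters": [50, 100, 200],      # convolutional filter counts
}

def grid(space):
    """Yield one dict per hyperparameter combination."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
print(len(configs))  # 10 window sizes x 3 filter counts = 30 combinations
```

Each `configs` entry could then be passed to a training run; exhaustive search is feasible here because the suggested grid is small.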
1 change: 0 additions & 1 deletion example/named_entity_recognition/README.md
@@ -11,7 +11,6 @@ To reproduce the preprocessed training data:

1. Download and unzip the data: https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus/downloads/ner_dataset.csv
2. Move ner_dataset.csv into `./data`
- 3. create `./preprocessed_data` directory
3. `$ cd src && python preprocess.py`

To train the model:
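The download/move/preprocess steps in that README can be sketched as a small Python helper. This is hypothetical glue code, not part of the repo: `prepare` is an invented name, the Kaggle download is assumed to have already happened, and `run=False` skips the actual `preprocess.py` call:

```python
import os
import subprocess

def prepare(csv_path="ner_dataset.csv", data_dir="data", run=True):
    """Mirror the README steps: move the csv into ./data, then run preprocess.py."""
    os.makedirs(data_dir, exist_ok=True)
    dest = os.path.join(data_dir, os.path.basename(csv_path))
    os.replace(csv_path, dest)  # step 2: move ner_dataset.csv into ./data
    if run:
        # step 3: $ cd src && python preprocess.py
        subprocess.run(["python", "preprocess.py"], cwd="src", check=True)
    return dest
```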
2 changes: 1 addition & 1 deletion example/named_entity_recognition/src/metrics.py
@@ -27,7 +27,7 @@ def load_obj(name):
    with open(name + '.pkl', 'rb') as f:
        return pickle.load(f)

- tag_dict = load_obj("../preprocessed_data/tag_to_index")
+ tag_dict = load_obj("../data/tag_to_index")
not_entity_index = tag_dict["O"]

def classifer_metrics(label, pred):
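The `not_entity_index = tag_dict["O"]` line suggests metrics that mask out non-entity tokens before scoring. A minimal sketch of that idea, assuming numpy; this is illustrative only, not the repo's `classifer_metrics` implementation:

```python
import numpy as np

def entity_accuracy(label, pred, not_entity_index):
    """Accuracy over entity tokens only: positions tagged "O" are masked out."""
    label = np.asarray(label)
    pred = np.asarray(pred)
    mask = label != not_entity_index  # keep entity tokens only
    if not mask.any():
        return 0.0
    return float((label[mask] == pred[mask]).mean())

# Toy example with "O" mapped to index 0: 2 of 3 entity tokens match.
print(entity_accuracy([0, 1, 2, 1], [0, 1, 2, 2], not_entity_index=0))
```

Masking "O" matters because most tokens in NER corpora are not entities, so unmasked accuracy is dominated by the easy negative class.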
2 changes: 1 addition & 1 deletion example/named_entity_recognition/src/ner.py
@@ -34,7 +34,7 @@

parser = argparse.ArgumentParser(description="Deep neural network for multivariate time series forecasting",
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument('--data-dir', type=str, default='../preprocessed_data',
+ parser.add_argument('--data-dir', type=str, default='../data',
                    help='relative path to input data')
parser.add_argument('--output-dir', type=str, default='../results',
                    help='directory to save model files to')
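The `--data-dir` default change above can be exercised with a self-contained argparse sketch. The two arguments and the updated `'../data'` default mirror the diff; the description string is shortened here and the rest is illustrative:

```python
import argparse

# Sketch of the script's argument setup with the PR's updated default.
parser = argparse.ArgumentParser(
    description="NER training script (illustrative)",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--data-dir', type=str, default='../data',
                    help='relative path to input data')
parser.add_argument('--output-dir', type=str, default='../results',
                    help='directory to save model files to')

args = parser.parse_args([])  # no CLI overrides: defaults apply
print(args.data_dir)          # -> ../data
```

`ArgumentDefaultsHelpFormatter` makes `--help` print each default, which is why getting the default right in code (rather than only in the README) matters.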
2 changes: 1 addition & 1 deletion example/nce-loss/README.md
@@ -29,7 +29,7 @@ The dataset used in the following examples is [text8](http://mattmahoney.net/dc/
* word2vec.py: a CBOW word2vec example using nce loss. You need to [download the text8 dataset](#dataset-download) before running this script. Command to start training on CPU (pass -g for training on GPU):

```
- python word2vec.py
+ python wordvec.py

```

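NCE training for word2vec hinges on drawing negative samples from a smoothed unigram distribution. A hedged numpy sketch of that sampling step — the 3/4 exponent is word2vec's customary smoothing choice, and none of this code is taken from `wordvec.py`:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_distribution(counts):
    """Unigram frequencies raised to the 3/4 power, renormalized."""
    p = np.asarray(counts, dtype=float) ** 0.75
    return p / p.sum()

def sample_negatives(counts, k):
    """Draw k noise-word indices for one positive target."""
    return rng.choice(len(counts), size=k, p=noise_distribution(counts))

counts = [100, 50, 10, 1]  # toy word frequencies
negs = sample_negatives(counts, k=5)
print(negs.shape)          # (5,)
```

The smoothing flattens the distribution so rare words are sampled more often than their raw frequency would allow, which is what makes the noise contrast informative.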
84 changes: 0 additions & 84 deletions example/numpy-ops/numpy_softmax.py

This file was deleted.

86 changes: 0 additions & 86 deletions example/onnx/super_resolution.py

This file was deleted.

37 changes: 0 additions & 37 deletions example/python-howto/README.md

This file was deleted.

76 changes: 0 additions & 76 deletions example/python-howto/data_iter.py

This file was deleted.

39 changes: 0 additions & 39 deletions example/python-howto/debug_conv.py

This file was deleted.
