
Add an inference script providing both accuracy and benchmark result for original wide_n_deep example #13895

Merged: 7 commits into apache:master on Feb 16, 2019

Conversation

@juliusshufan (Contributor) commented Jan 16, 2019

Description

Add a script for inference based on the saved model and parameter files produced during training.
The script can provide either accuracy or benchmark results, depending on the specified parameters.
@TaoLv @pengzhao-intel

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

A new script file, example/sparse/wide_deep/inference.py, is added.
The README is also slightly revised.

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@juliusshufan changed the title from "Add a inference script can provide both accuracy and benchmark result" to "Add a inference script providing both accuracy and benchmark result for original wide_n_deep example" on Jan 16, 2019
help='the model prefix')

# Related to feature engineering, please see preprocess in data.py
ADULT = {

Contributor:
This is common with the training script. Could we put it in some util/model file?

Contributor Author:
Moved to config.py.
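
A minimal sketch of the shared config.py this reply refers to, imported by both train.py and inference.py; only 'hidden_units' is taken from the hunks in this PR, the other entries are assumed placeholders:

# config.py: feature-engineering constants shared by train.py and inference.py
ADULT = {
    'num_linear_features': 3000,   # assumed placeholder
    'num_embed_features': 26,      # assumed placeholder
    'hidden_units': [8, 50, 100],  # from the diff shown in this PR
}

Both scripts can then use "from config import ADULT" instead of duplicating the dictionary.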

help='loading the params of the corresponding training epoch.')
parser.add_argument('--batch-size', type=int, default=100,
help='number of examples per batch')
parser.add_argument('--accuracy', action='store_true', default=False,

Member:
I would like to have accuracy=True by default. Use --benchmark to do performance benchmarking, as we do in other scripts.

Contributor Author:
Thanks for the comments; modified accordingly.

parser.add_argument('--verbose', action='store_true', default=False,
help='accuracy for each batch will be logged if set')
parser.add_argument('--cuda', action='store_true', default=False,
help='Train on GPU with CUDA')

Member:
Train? BTW, we use --gpu in other examples.

Contributor Author:
Thanks, '--gpu' is now used; the same change is also applied to train.py.
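
A minimal sketch of the renamed flag and the device selection it would drive; the help text and context logic below are assumptions rather than the PR's exact code:

import argparse
import mxnet as mx

parser = argparse.ArgumentParser(description='wide and deep inference')
parser.add_argument('--gpu', action='store_true', default=False,
                    help='run inference on GPU with CUDA')
args = parser.parse_args()

# mx.gpu() and mx.cpu() return MXNet device contexts; pick one from the flag.
ctx = mx.gpu() if args.gpu else mx.cpu()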

@stu1130 (Contributor) commented Jan 16, 2019

@mxnet-label-bot add [pr-awaiting-review]
Thanks for your contribution @juliusshufan

@marcoabreu added the pr-awaiting-review label (PR is waiting for code review) on Jan 16, 2019
@@ -0,0 +1,121 @@
# Licensed to the Apache Software Foundation (ASF) under one

Contributor:
Can we add a python3 shebang?

Contributor Author (@juliusshufan, Jan 23, 2019):
Hi Larroy, thanks for the review. I think this script can also work with Python 2, so it might be fine without a python3 shebang? Thanks.
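
For reference, a version-agnostic shebang (a possible compromise, not necessarily what the PR adopted) would be:

#!/usr/bin/env python
# Resolves to whichever 'python' is first on PATH, so the script stays
# launchable under both Python 2 and Python 3.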

'hidden_units': [8, 50, 100],
}

if __name__ == '__main__':

Contributor:
Could we wrap it into a main function, as in:

if __name__ == '__main__':
    sys.exit(main())

Contributor Author (@juliusshufan, Jan 30, 2019):
Hi Larroy, it seems sys is only supported by Python 3, which may introduce incompatibility with Python 2; meanwhile, I just followed the "structure" of the existing train.py to implement inference.py. Thanks.
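
For reference, a minimal sketch of the pattern the reviewer suggests; sys is part of the standard library in both Python 2 and Python 3, and the body of main() below is only an illustrative placeholder:

import sys

def main():
    # Placeholder for the real work: parse arguments, load the saved model,
    # run inference, and return 0 on success so the shell sees a clean exit code.
    return 0

if __name__ == '__main__':
    sys.exit(main())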

@juliusshufan (Contributor Author) commented:
@szha @TaoLv @pengzhao-intel @larroy may I have your approval or comments on this PR? Thanks.

@juliusshufan changed the title from "Add a inference script providing both accuracy and benchmark result for original wide_n_deep example" to "Add an inference script providing both accuracy and benchmark result for original wide_n_deep example" on Jan 30, 2019
help='number of examples per batch')
parser.add_argument('--accuracy', action='store_true', default=True,
help='run the script for inference accuracy, not set for benchmark.')
parser.add_argument('--benchmark', action='store_true', default=False,

Member:
How about only having --benchmark here and removing --accuracy? If --benchmark is given on the command line, it will do benchmarking with dummy data; otherwise the script will run with real data and print the final accuracy.

Contributor Author:
Thanks for the review; agreed and revised accordingly.
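
A minimal sketch of the flag layout that results from this suggestion; run_benchmark and evaluate_accuracy are hypothetical stand-ins, not functions from the PR:

import argparse

def run_benchmark(batch_size):
    # Stand-in for the dummy-data benchmarking loop in the real script.
    print('benchmarking with dummy data, batch size %d' % batch_size)

def evaluate_accuracy(batch_size):
    # Stand-in for evaluation on the real test set.
    print('evaluating accuracy on real data, batch size %d' % batch_size)

parser = argparse.ArgumentParser(description='wide and deep inference')
parser.add_argument('--batch-size', type=int, default=100,
                    help='number of examples per batch')
parser.add_argument('--benchmark', action='store_true', default=False,
                    help='benchmark with dummy data; otherwise report accuracy on real data')
args = parser.parse_args()

if args.benchmark:
    run_benchmark(args.batch_size)
else:
    evaluate_accuracy(args.batch_size)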

@TaoLv (Member) commented Jan 30, 2019

@ZiyueHuang Could you help to review the change?

@vandanavk (Contributor) left a comment:
Please run pylint on these files using ci/other/pylintrc in the incubator-mxnet folder, something like:
pylint --rcfile=ci/other/pylintrc --ignore-patterns=".*\.so$,.*\.dll$,.*\.dylib$" example/sparse/wide_deep/*.py

data_iter = iter(eval_data)
if benchmark:
logging.info('Inference benchmark started ...')
nbatch = 0

Contributor:
nit: nbatch = 0 can be moved before the if...else

Contributor Author:
Thanks, revised accordingly
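
A minimal sketch of the suggested placement, with run_inference and the toy iterable as illustrative stand-ins; only the position of nbatch = 0 reflects the review comment:

import logging

logging.basicConfig(level=logging.INFO)

def run_inference(eval_data, benchmark=False):
    # Counter used by both branches, so it is initialised once, before the if...else.
    nbatch = 0
    if benchmark:
        logging.info('Inference benchmark started ...')
        for _ in eval_data:
            # stand-in for timing a forward pass on a dummy batch
            nbatch += 1
    else:
        for _ in eval_data:
            # stand-in for accumulating accuracy on a real batch
            nbatch += 1
    return nbatch

print(run_inference(range(5), benchmark=True))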

@pengzhao-intel (Contributor) left a comment:
LGTM

@juliusshufan please resolve other reviewers' comments.

@TaoLv (Member) left a comment:
@juliusshufan Thank you. LGTM now.

@TaoLv (Member) commented Feb 13, 2019

@vandanavk @larroy Please confirm if your comments are addressed.

@ankkhedia (Contributor) commented:
@vandanavk @larroy Could you please check if your comments are resolved?

@ankkhedia (Contributor) left a comment:
LGTM

@TaoLv (Member) commented Feb 16, 2019

It seems there are no other comments. Merging now.

@TaoLv TaoLv merged commit 8bfbb7d into apache:master Feb 16, 2019
stephenrawls pushed a commit to stephenrawls/incubator-mxnet that referenced this pull request Feb 16, 2019
…for original wide_n_deep example (apache#13895)

* Add a inference script can provide both accuracy and benchmark result

* minor changes

* minor fix to use keep similar coding style as other examples

* fix typo

* remove code redundance and other minor changes

* Addressing review comments and minor pylint fix

* remove parameter 'accuracy' to make logic simple
jessr92 pushed a commit to jessr92/incubator-mxnet that referenced this pull request Feb 19, 2019
drivanov pushed a commit to drivanov/incubator-mxnet that referenced this pull request Mar 4, 2019
vdantu pushed a commit to vdantu/incubator-mxnet that referenced this pull request Mar 31, 2019
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
Labels
pr-awaiting-review PR is waiting for code review

8 participants