Merge changes
anirudh2290 committed Jul 4, 2019
1 parent 6215eef commit 93837aa
Showing 1 changed file with 0 additions and 4 deletions.
4 changes: 0 additions & 4 deletions docs/tutorials/amp/amp_tutorial.md
@@ -253,14 +253,10 @@ We got 60% speed increase from 3 additional lines of code!

## Inference with AMP

-<<<<<<< HEAD
To run inference in mixed precision with a model trained in FP32, use the conversion APIs: `amp.convert_model` for symbolic models and `amp.convert_hybrid_block` for Gluon models. Each takes the FP32 model as input and returns a mixed precision model that can be used to run inference.
Below, we demonstrate, for both a Gluon model and a symbolic model:
- Conversion from FP32 model to mixed precision model.
- Run inference on the mixed precision model.
-=======
-To do inference with mixed precision for a trained model in FP32, you can use the conversion APIs: `amp.convert_model` for symbolic model and `amp.convert_hybrid_block` for gluon models. The conversion APIs will take the FP32 model as input and will return a mixed precision model, which can be used to run inference. Below, we demonstrate for a gluon model and a symbolic model: 1. Conversion from FP32 model to mixed precision model 2. Run inference on the mixed precision model.
->>>>>>> faccc59bc0ed7e22933c1f86f3aabac6f13fe1a9

```python
with mx.Context(mx.gpu(0)):
    # ... (remainder of the file is collapsed in the diff view)
```
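
Because the tutorial's own demonstration code is collapsed in this diff view, the following is a minimal sketch of the Gluon path under stated assumptions: a CUDA build of MXNet 1.5 or later, a single GPU, and `resnet50_v1` from the Gluon model zoo as a placeholder pretrained network (the tutorial may use a different model).

```python
import mxnet as mx
from mxnet.contrib import amp
from mxnet.gluon.model_zoo import vision

# Minimal sketch, not the tutorial's exact code.
# Assumes a CUDA build of MXNet >= 1.5 and a single GPU.
ctx = mx.gpu(0)

# Any pretrained HybridBlock works; resnet50_v1 is only a placeholder here.
net = vision.resnet50_v1(pretrained=True, ctx=ctx)
net.hybridize()

# convert_hybrid_block rewrites the cached symbolic graph, so run one
# forward pass before converting.
data = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)
net(data).wait_to_read()

# Convert the FP32 block to a mixed precision block and run inference.
net_mp = amp.convert_hybrid_block(net, ctx=ctx)
out = net_mp(data)
print(out.shape, out.dtype)
```

For a symbolic model, `amp.convert_model` plays the analogous role: it takes the FP32 symbol together with its `arg_params` and `aux_params` and returns the converted triple, which can then be bound (for example, through `mx.mod.Module`) to run inference.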
