
5. Evaluating a Skill


To preview the generated utterances for a skill, first build it (or run the gendef task), then run:

niles dev:preview yourskillname

To evaluate the model's accuracy and confidence on the generated testing data, run:

niles dev:eval yourskillname
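
For example, for a hypothetical skill named weather (the name is illustrative, not part of the tool), the two commands would be:

niles dev:preview weather
niles dev:eval weather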

You'll find the following evaluation artefacts in the evaluation directory:

  • A confusion matrix in PNG format;
  • A histogram in PNG format;
  • A JSON file containing all errors identified when running the training data through the model.
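
To inspect the identified errors programmatically, a short script can help. The sketch below is a minimal Python example; the file name errors.json and the assumption that it holds a flat list of error records are guesses about the output layout, not documented behavior, so adjust the path and parsing to match what you actually find in the evaluation directory.

    import json
    from pathlib import Path

    # Hypothetical path: adjust to your project's actual evaluation
    # directory and error file name.
    errors_path = Path("evaluation") / "errors.json"

    with errors_path.open() as f:
        errors = json.load(f)

    # Quick summary of how many errors the evaluation identified,
    # assuming the file contains a list of error records.
    print(f"{len(errors)} errors identified during evaluation")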