Outputs

Once you press Run to start training, you can interactively watch how the neural network gets trained over the iterations. This is shown in the Outputs section, pictured below:

*(screenshot of the Outputs pane)*

This pane shows the positive and negative training and test data in a scatter plot, as blue and orange points respectively. A checkbox labeled Show test data highlights which points belong to the test set by drawing them darker and thicker. The background colors of the plot show how the network classifies new inputs it has not seen before: the plot covers a grid of points uniformly spaced over [-6, +6] in both dimensions, and the color at each grid point indicates how that point is classified. Orange denotes negative outputs, blue denotes positive outputs, and white marks the decision boundary, where the network's output is close to 0. Ideally, the coloring should place all blue points inside a blue region and all orange points inside an orange region.
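The background coloring can be thought of as evaluating the trained network on every point of that grid and mapping the sign of its output to a color. Below is a minimal Python sketch of the idea; the `predict` function is a hypothetical stand-in for the trained network's output in [-1, +1], not the actual implementation.

```python
import numpy as np

def predict(x1, x2):
    # Hypothetical stand-in for the trained network's output in [-1, +1].
    return np.tanh(x1 * x2)

# Uniformly spaced grid over [-6, +6] in both dimensions.
xs = np.linspace(-6, 6, 100)
ys = np.linspace(-6, 6, 100)
grid_x, grid_y = np.meshgrid(xs, ys)

outputs = predict(grid_x, grid_y)

# Map each grid point to a color: blue for positive outputs, orange for
# negative outputs, white where the output is close to 0 (decision boundary).
colors = np.where(outputs > 0.05, "blue",
         np.where(outputs < -0.05, "orange", "white"))
```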

Performance curves are displayed at the top of this pane, plotting the training and test loss at each iteration of training. The loss is defined as the mean of the squared differences between the predicted and actual outputs, computed separately for the training and test data at a given iteration. The training loss curve is drawn in light gray and the test loss curve in dark gray. Numerical readouts above the curves give the training and test loss at the current iteration.
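As a concrete illustration of that definition, the sketch below computes such a loss for a small batch of examples. The data and names are made up for illustration; the same quantity is evaluated over the training set and the test set to produce the two curves.

```python
import numpy as np

def loss(predicted, actual):
    # Mean of the squared differences between predicted and actual outputs.
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.mean((predicted - actual) ** 2)

# Example: labels are +1 / -1, predictions are continuous network outputs.
train_loss = loss([0.8, -0.4, 0.9], [1, -1, 1])
test_loss  = loss([0.6, -0.1],      [1, -1])
print(train_loss, test_loss)
```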

You may also check Discretize output, which thresholds the output at zero (keeping only its sign) to give a hard decision on whether each input is classified as positive or negative, rather than a continuous range of positive and negative values.
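In effect, discretizing replaces the continuous output with its sign, as in the small illustrative Python sketch below (not the tool's own code).

```python
import numpy as np

def discretize(output):
    # Hard decision: threshold the continuous output at zero and keep only
    # its sign, so every point is classified as strictly positive or negative.
    return np.where(np.asarray(output) >= 0, 1, -1)

print(discretize([0.7, -0.2, 0.0, -0.9]))  # -> [ 1 -1  1 -1]
```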
