58 changes: 37 additions & 21 deletions doc/MLPClassifier.rst
@@ -1,69 +1,85 @@
:digest: Classification with a multi-layer perceptron
:species: data
:sc-categories: Machine learning
:sc-related: Classes/FluidMLPRegressor, Classes/FluidDataSet
:sc-related: Classes/FluidMLPRegressor, Classes/FluidDataSet, Classes/FluidLabelSet
:see-also:
:description: Perform classification between a :fluid-obj:`DataSet` and a :fluid-obj:`LabelSet` using a Multi-Layer Perception neural network.
:description:

Perform classification between a :fluid-obj:`DataSet` and a :fluid-obj:`LabelSet` using a Multi-Layer Perceptron neural network.

:control hidden:
:discussion:

An ``Classes/Array`` that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
For a thorough explanation of how this object works and more information on the parameters, visit the pages on **MLP Training** (https://learn.flucoma.org/learn/mlp-training) and **MLP Parameters** (https://learn.flucoma.org/learn/mlp-parameters).

:control hiddenLayers:

An array of numbers that specifies the internal structure of the neural network. Each number in the array represents one hidden layer; its value is the number of neurons in that layer. Changing this will reset the neural network, clearing any learning that has happened.
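
For example, a minimal SuperCollider sketch of shaping the network this way (assuming a booted server ``s``; the variable name is illustrative)::

  // a network with two hidden layers: 6 neurons feeding into 4 neurons
  ~mlp = FluidMLPClassifier(s, hiddenLayers: [6, 4]);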

:control activation:

The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1).
An integer indicating which activation function each neuron in the hidden layer(s) will use. Changing this will reset the neural network, clearing any learning that has happened. The options are:

:enum:

:0:
**identity** (the output range can be any value)

:1:
**sigmoid** (the output will always be greater than 0 and less than 1)

:2:
**relu** (the output will always be greater than or equal to 0)

:3:
**tanh** (the output will always be greater than -1 and less than 1)
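
As a hedged sketch, the option can be passed as its integer index at creation time (variable names are illustrative)::

  // option 1, sigmoid: hidden-unit outputs stay strictly between 0 and 1
  ~mlp = FluidMLPClassifier(s, hiddenLayers: [6, 4], activation: 1);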

:control maxIter:

The maximum number of iterations to use in training.
The number of epochs to train for when ``fit`` is called on the object. An epoch consists of training on all the data points one time.

:control learnRate:

The learning rate of the network. Start small, increase slowly.
A scalar that indicates how much the neural network should adjust its internal parameters during training. This is the most important parameter to adjust while training a neural network.

:control momentum:

The training momentum, default 0.9
A scalar that applies a portion of the previous adjustment to the current adjustment being made by the neural network during training.

:control batchSize:

The training batch size.
The number of data points to use in between adjustments of the MLP's internal parameters during training. For example, with 200 data points and a ``batchSize`` of 50, each epoch will make four adjustments.

:control validation:

The fraction of the DataSet size to hold back during training to validate the network against.

A percentage (represented as a decimal) of the data points to randomly select, set aside, and not use for training (this "validation set" is reselected on each ``fit``). These points will be used after each epoch to check how the neural network is performing. If the network is found to be no longer improving, training will stop, even if a ``fit`` has not reached its ``maxIter`` number of epochs.
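
As an illustrative sketch (the data set size is invented): with 100 points and ``validation: 0.2``, each ``fit`` trains on a random 80 points and checks progress on the remaining 20 after every epoch::

  // hold back 20% of the points to watch for overfitting
  ~mlp = FluidMLPClassifier(s, validation: 0.2);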

:message fit:

:arg sourceDataSet: Source data

:arg targetLabelSet: Target data

:arg action: Function to run when training is complete
:arg targetLabelSet: Target labels

Train the network to map between a source :fluid-obj:`DataSet` and a target :fluid-obj:`LabelSet`
:arg action: Function to run when complete. This function will be passed the current error as its only argument.

Train the network to map between a source :fluid-obj:`DataSet` and target :fluid-obj:`LabelSet`
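
A minimal training sketch in SuperCollider, assuming ``~ds`` and ``~labels`` are an already-populated :fluid-obj:`DataSet` and :fluid-obj:`LabelSet` whose identifiers correspond::

  ~mlp = FluidMLPClassifier(s, hiddenLayers: [6], maxIter: 100, learnRate: 0.1);
  // the action receives the current error; call fit again to keep training
  ~mlp.fit(~ds, ~labels, { |error| "loss: %".format(error).postln });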

:message predict:

:arg sourceDataSet: Input data

:arg targetLabelSet: Output data
:arg targetLabelSet: :fluid-obj:`LabelSet` to write the predicted labels into

:arg action: Function to run when complete

Apply the learned mapping to a :fluid-obj:`DataSet` (given a trained network)
Predict labels for a :fluid-obj:`DataSet` (given a trained network)
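
For example (``~testData`` is a hypothetical :fluid-obj:`DataSet` of unlabelled points; the predictions are written into ``~predicted``)::

  ~predicted = FluidLabelSet(s);
  ~mlp.predict(~testData, ~predicted, { "prediction done".postln });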

:message predictPoint:

:arg sourceBuffer: Input point

:arg targetBuffer: Output point

:arg action: A function to run when complete
:arg action: A function to run when complete. This function will be passed the predicted label.

Apply the learned mapping to a single data point in a |buffer|
Predict a label for a single data point in a |buffer|
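
A sketch built from the arguments listed above (buffer contents are invented; the input buffer needs one value per input dimension of the trained network)::

  ~in = Buffer.loadCollection(s, [0.1, 0.5, 0.9, 0.2]);
  ~out = Buffer(s);
  ~mlp.predictPoint(~in, ~out, { |label| "predicted: %".format(label).postln });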

:message clear:

48 changes: 33 additions & 15 deletions doc/MLPRegressor.rst
@@ -3,58 +3,76 @@
:sc-categories: Machine learning
:sc-related: Classes/FluidMLPClassifier, Classes/FluidDataSet
:see-also:
:description: Perform regression between :fluid-obj:`DataSet`\s using a Multi-Layer Perception neural network.
:description:

Perform regression between :fluid-obj:`DataSet`\s using a Multi-Layer Perceptron neural network.

:control hidden:
:discussion:

An ``Classes/Array`` that gives the sizes of any hidden layers in the network (default is two hidden layers of three units each).
For a thorough explanation of how this object works and more information on the parameters, visit the pages on **MLP Training** (https://learn.flucoma.org/learn/mlp-training) and **MLP Parameters** (https://learn.flucoma.org/learn/mlp-parameters).

:control hiddenLayers:

An array of numbers that specifies the internal structure of the neural network. Each number in the array represents one hidden layer; its value is the number of neurons in that layer. Changing this will reset the neural network, clearing any learning that has happened.

:control activation:

The activation function to use for the hidden layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1)
An integer indicating which activation function each neuron in the hidden layer(s) will use. Changing this will reset the neural network, clearing any learning that has happened. The options are:

:enum:

:0:
**identity** (the output range can be any value)

:1:
**sigmoid** (the output will always be greater than 0 and less than 1)

:2:
**relu** (the output will always be greater than or equal to 0)

:3:
**tanh** (the output will always be greater than -1 and less than 1)

:control outputActivation:

The activation function to use for the output layer units. Beware of the permitted ranges of each: relu (0->inf), sigmoid (0->1), tanh (-1,1)
An integer indicating which activation function each neuron in the output layer will use. Options are the same as ``activation``. Changing this will reset the neural network, clearing any learning that has happened.
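
As an illustrative sketch, the output activation is usually chosen to match the range of the target data (the values here are hypothetical)::

  // targets are scaled 0 to 1, so constrain the outputs with sigmoid (option 1)
  ~mlp = FluidMLPRegressor(s, hiddenLayers: [6, 4], activation: 3, outputActivation: 1);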

:control tapIn:

The layer whose input is used to predict and predictPoint. It is 0 counting, where the default of 0 is the input layer, and 1 would be the first hidden layer, and so on.
The index of the layer to use as input to the neural network for ``predict`` and ``predictPoint`` (counting from zero). The default of 0 is the first layer (the original input layer), 1 is the first hidden layer, etc. This can be used to access different parts of a trained neural network such as the encoder or decoder of an autoencoder (https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726).

:control tapOut:

The layer whose output to return. It is counting from 0 as the input layer, and 1 would be the first hidden layer, and so on. The default of -1 is the last layer of the whole network.
The index of the layer to use as output of the neural network for ``predict`` and ``predictPoint`` (counting from zero). The default of -1 is the last layer (the original output layer). This can be used to access different parts of a trained neural network such as the encoder or decoder of an autoencoder (https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726).
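
For example, after fitting a network shaped like an autoencoder, the two taps can isolate the encoder half. A hedged SuperCollider sketch (setter syntax and variable names assumed for illustration)::

  // hidden layers 8 -> 2 -> 8: layer 2, counting the input layer as 0,
  // is the 2-neuron bottleneck
  ~ae = FluidMLPRegressor(s, hiddenLayers: [8, 2, 8]);
  // ...fit with the same DataSet as source and target, then:
  ~ae.tapIn = 0;  // feed points in at the input layer
  ~ae.tapOut = 2; // read the 2-dimensional code out of the bottleneck
  ~encoded = FluidDataSet(s);
  ~ae.predict(~ds, ~encoded, { "encoded".postln });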

:control maxIter:

The maximum number of iterations to use in training.
The number of epochs to train for when ``fit`` is called on the object. An epoch consists of training on all the data points one time.

:control learnRate:

The learning rate of the network. Start small, increase slowly.
A scalar that indicates how much the neural network should adjust its internal parameters during training. This is the most important parameter to adjust while training a neural network.

:control momentum:

The training momentum, default 0.9
A scalar that applies a portion of the previous adjustment to the current adjustment being made by the neural network during training.

:control batchSize:

The training batch size.
The number of data points to use in between adjustments of the MLP's internal parameters during training.

:control validation:

The fraction of the DataSet size to hold back during training to validate the network against.

A percentage (represented as a decimal) of the data points to randomly select, set aside, and not use for training (this "validation set" is reselected on each ``fit``). These points will be used after each epoch to check how the neural network is performing. If the network is found to be no longer improving, training will stop, even if a ``fit`` has not reached its ``maxIter`` number of epochs.

:message fit:

:arg sourceDataSet: Source data

:arg targetDataSet: Target data

:arg action: Function to run when training is complete

:arg action: Function to run when complete. This function will be passed the current error as its only argument.

Train the network to map between a source and target :fluid-obj:`DataSet`
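
Since training is incremental, ``fit`` can be called repeatedly to continue from the network's current state; a sketch (``~in`` and ``~out`` are hypothetical, already-populated :fluid-obj:`DataSet`\s with corresponding identifiers)::

  ~mlp.fit(~in, ~out, { |error| error.postln });
  // run the line above again to train for another maxIter epochs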

:message predict: