| Procedure | Location | Procedure Type | Description |
|---|---|---|---|
| accuracy | mod_network | Function | Given input x and expected output y, compares the position of the maximum value of the network output against that of y, and returns the fraction of matches over the dataset. |
| activation_function | mod_activation | Interface | Interface for activation functions, used to declare procedure pointers to any of the activation functions below. |
| array1d | mod_layer | Interface | Generic interface that resolves to array1d_constructor. |
| array1d_constructor | mod_layer | Function | Overloads the default type constructor. |
| array2d | mod_layer | Interface | Generic interface that resolves to array2d_constructor. |
| array2d_constructor | mod_layer | Function | Overloads the default type constructor. |
| backprop | mod_network | Subroutine | Applies a backward propagation through the network and returns the weight and bias gradients. |
| constructor | mod_layer | Function | Layer class constructor. this_size is the number of neurons in the layer. next_size is the number of neurons in the next layer, used to allocate the weights. |
| db_co_sum | mod_layer | Subroutine | Performs a collective sum of bias tendencies. |
| db_init | mod_layer | Subroutine | Initialises biases structure. |
| digits | mod_mnist | Function | Returns an array of 10 reals, with a one at the position corresponding to the input digit and zeros elsewhere, for example: digits(0) = [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.] digits(1) = [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.] digits(6) = [0., 0., 0., 0., 0., 0., 1., 0., 0., 0.] |
| dw_co_sum | mod_layer | Subroutine | Performs a collective sum of weights tendencies. |
| dw_init | mod_layer | Subroutine | Initialises weights structure. |
| fwdprop | mod_network | Subroutine | Performs the forward propagation and stores arguments to activation functions and activations themselves for use in backprop. |
| gaussian | mod_activation | Function | Gaussian activation function. |
| gaussian_prime | mod_activation | Function | First derivative of the Gaussian activation function. |
| init | mod_network | Subroutine | Allocates and initializes the layers with given dimensions dims. |
| label_digits | mod_mnist | Function | Converts an array of MNIST labels into a form that can be input to the network_type instance. |
| layer_type | mod_layer | Interface | Generic interface that resolves to the layer_type constructor. |
| load | mod_network | Subroutine | Loads the network from file. |
| load_mnist | mod_mnist | Subroutine | Loads the MNIST dataset into arrays. |
| loss | mod_network | Function | Given input x and expected output y, returns the loss of the network. |
| net_constructor | mod_network | Function | Network class constructor. The size of the input array dims indicates the total number of layers (input + hidden + output), and the value of each of its elements corresponds to the size of that layer. |
| network_type | mod_network | Interface | Generic interface that resolves to net_constructor. |
| output_batch | mod_network | Function | Use forward propagation to compute the output of the network. This specific procedure is for a batch of 1-d input data. |
| output_single | mod_network | Function | Use forward propagation to compute the output of the network. This specific procedure is for a single sample of 1-d input data. |
| print_image | mod_mnist | Subroutine | Prints a single image and label to screen. |
| randn | mod_random | Interface | Generic interface that resolves to randn1d and randn2d. |
| randn1d | mod_random | Function | Generates n random numbers with a normal distribution. |
| randn2d | mod_random | Function | Generates m x n random numbers with a normal distribution. |
| read_binary_file | mod_io | Interface | Generic interface that resolves to read_binary_file_1d and read_binary_file_2d. |
| read_binary_file_1d | mod_io | Subroutine | Reads a 1-d array from a binary file. |
| read_binary_file_2d | mod_io | Subroutine | Reads a 2-d array from a binary file. |
| relu | mod_activation | Function | Rectified Linear Unit (ReLU) activation function. |
| relu_prime | mod_activation | Function | First derivative of the Rectified Linear Unit (ReLU) activation function. |
| save | mod_network | Subroutine | Saves the network to a file. |
| set_activation | mod_layer | Subroutine | Sets the activation function. The input string must match one of the provided activation functions; otherwise it defaults to sigmoid. |
| set_activation_equal | mod_network | Subroutine | A thin wrapper around layer % set_activation(). This method can be used to set an activation function for all layers at once. |
| set_activation_layers | mod_network | Subroutine | A thin wrapper around layer % set_activation(). This method can be used to set different activation functions for each layer separately. |
| sigmoid | mod_activation | Function | Sigmoid activation function. |
| sigmoid_prime | mod_activation | Function | First derivative of the sigmoid activation function. |
| step | mod_activation | Function | Step activation function. |
| step_prime | mod_activation | Function | First derivative of the step activation function. |
| sync | mod_network | Subroutine | Broadcasts network weights and biases from specified image to all others. |
| tanh_prime | mod_activation | Function | First derivative of the tanh activation function. |
| tanhf | mod_activation | Function | Hyperbolic tangent activation function. Same as the intrinsic tanh, but defined here so that it can be used with a procedure pointer. |
| tile_indices | mod_parallel | Function | Given the global array size, returns the start and end indices of the parallel 1-d tile that corresponds to this image. Tiles are assumed to be of equal size; any remainder is distributed to the tiles at the end. |
| train_batch | mod_network | Subroutine | Trains a network using a mini-batch of input data x and output data y, and learning rate eta. The learning rate is normalized by the size of the data batch. |
| train_epochs | mod_network | Subroutine | Trains the network for num_epochs epochs with mini-batches of size batch_size. |
| train_single | mod_network | Subroutine | Trains a network using a single set of input data x and output data y, and learning rate eta. |
| update | mod_network | Subroutine | Updates network weights and biases with gradients dw and db, scaled by learning rate eta. |
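
Taken together, the mod_network procedures support a workflow like the one sketched below. This is a hedged sketch, not verbatim library code: it assumes generic binding names train and output that resolve to the train_* and output_* procedures listed above, and that set_activation accepts the activation-function names from mod_activation.

```fortran
program network_sketch
  use mod_network, only: network_type
  implicit none
  type(network_type) :: net
  real :: result(2)

  ! Construct a network: 3 input neurons, 5 hidden, 2 output
  ! (dims has one element per layer, as described for net_constructor)
  net = network_type([3, 5, 2])

  ! Use tanh in all layers instead of the default sigmoid
  call net % set_activation('tanh')

  ! Train on a single sample with learning rate eta
  call net % train([0.1, 0.2, 0.3], [1.0, 0.0], eta=1.0)

  ! Forward pass for a single 1-d input sample
  result = net % output([0.1, 0.2, 0.3])

  ! Persist the network to a file and restore it
  call net % save('net.txt')
  call net % load('net.txt')
end program network_sketch
```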
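
The tiling rule described for tile_indices (equal-size tiles, with any remainder distributed to the tiles at the end) can be sketched in Python as follows. The function name and argument names are illustrative, not the library's actual interface; tile_id is 1-based to mirror coarray image numbering.

```python
def tile_indices(global_size: int, num_tiles: int, tile_id: int) -> tuple[int, int]:
    """Return 1-based (start, end) indices of tile `tile_id` out of `num_tiles`.

    Tiles have equal base size; the remainder is given to the tiles
    at the end, one extra element each.
    """
    base = global_size // num_tiles
    rem = global_size % num_tiles
    # The first (num_tiles - rem) tiles have size `base`;
    # the last `rem` tiles have size `base + 1`.
    extra_before = max(0, tile_id - 1 - (num_tiles - rem))
    start = (tile_id - 1) * base + extra_before + 1
    size = base + (1 if tile_id > num_tiles - rem else 0)
    return start, start + size - 1
```

For example, splitting 10 elements over 3 tiles gives spans (1, 3), (4, 6), and (7, 10): the remainder of one element goes to the last tile.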