Commit
Damir Dobric committed May 31, 2021
2 parents 450e1dd + 334a084 commit ce245dc
Showing 8 changed files with 85 additions and 23 deletions.
21 changes: 10 additions & 11 deletions NeoCortexApi/Documentation/experiments.md
@@ -23,23 +23,23 @@ The ability to recognize and predict temporal sequences of sensory inputs is vit

[Download student paper here](./Experiments/ML-19-20_20-5.4_CellsPerColumnExperiment_Paper.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/CortexNetworkTests/CellsPerColumnExperimentTest.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/CellsPerColumnExperimentTest.cs)

#### **HTM Sparsity**

The ability to identify and predict temporal sequences of sensory input is necessary for survival in natural environments. Based on numerous common properties of cortical neurons, hierarchical temporal memory (HTM) was recently proposed as a theoretical framework for sequence learning in the neocortex. In this paper, we analyze how the sequence learning behavior of the spatial pooler and the temporal memory layer depends on HTM sparsity. We found the value of HTM sparsity that yields optimal learning for the given input sequence. We also showed the effect of changing Width (W) and Input Bits (N) on learning while keeping HTM sparsity constant, and we devised a relation between HTM sparsity and Max for optimal learning of the given sequence.
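As a rough illustration of the relation studied above, HTM sparsity can be treated as the ratio of active encoder bits (Width, W) to total Input Bits (N). The helper below is a minimal sketch under that assumed definition, not part of the experiment code; see the student paper for the exact formulation:

```python
def htm_sparsity(w: int, n: int) -> float:
    """Ratio of active encoder bits (W) to total input bits (N).

    Assumed definition for illustration only.
    """
    if n <= 0 or w > n:
        raise ValueError("need 0 < W <= N")
    return w / n

# Scaling W and N together keeps sparsity, and thus learning behavior, comparable.
print(htm_sparsity(10, 100))   # 0.1
print(htm_sparsity(20, 200))   # 0.1
```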

[Download student paper here](./Experiments/ML-19-20_20-5.4_HtmSparsityExperiments_Paper.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/CortexNetworkTests/HtmSparsityTest.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/HtmSparsityTest.cs)

#### **Parameter Change Experiment**

Hierarchical temporal memory (HTM) is based on the supposition that the world has structure and is therefore predictable. The development of HTM for artificial neural networks has advanced the field of artificial intelligence and is leading computing intelligence into a new age. In this paper, we studied learning parameters such as Width (W), Input Bits (N), the Max and Min values, and the number of columns, which contribute most to optimizing the sequence learning behavior of the spatial pooler and the temporal memory layer. We also performed an experiment to obtain a stable spatial pooler output by tuning the boost and duty cycles. We evaluated each of these parameters against the theoretical and practical framework and summarized the results in graphical diagrams.
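The boost and duty-cycle tuning mentioned above rests on a moving-average update of each column's duty cycle. The sketch below is a simplified illustration of that idea; the function name and shape are illustrative, not the NeoCortexApi API:

```python
def update_duty_cycle(duty_cycle: float, new_value: float, period: int) -> float:
    """Moving-average duty-cycle update: recent activity weighted by 1/period."""
    return (duty_cycle * (period - 1) + new_value) / period

# A column that is active on every cycle sees its duty cycle climb toward 1.0.
dc = 0.0
for _ in range(100):
    dc = update_duty_cycle(dc, 1.0, period=100)
```

A larger `period` (cf. `DutyCyclePeriod = 100` in the diff below) makes the average slower to react, which stabilizes boosting decisions.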

[Download student paper here](./Experiments/ML-19-20_20-5.4_ParameterChangeExperiment_Paper.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/CortexNetworkTests/InputBitsExperimentTest.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/InputBitsExperimentTest.cs)

## Performance Spatial Pooler between Global and Local Inhibition

@@ -59,7 +59,7 @@ Each region in the cortex receives input through millions of axons from sensory

[Download student paper here](./Experiments/ML-19-20_20-5.7_PerformanceSpatialPooler-between-Global-and-Local-Inhibition.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/SpatialPoolerInhibitionExperimentalTests.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SpatialPoolerInhibitionExperimentalTests.cs)

## Investigation of Hierarchical Temporal Memory Spatial Pooler's Noise Robustness against Gaussian noise

@@ -79,7 +79,7 @@ The Thousand Brains Theory of Intelligence is a new and rising approach to under

[Download student paper here](./Experiments/ML-19-20_20-5.12_SpatialPooler_NoiseRobustness.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/CortexNetworkTests\GaussianNoiseExperiment.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/GaussianNoiseExperiment.cs)

## Validate Memorizing capabilities of SpatialPooler

@@ -100,7 +100,7 @@ The main objective of the project is to describe memorizing capabilites as the a

[Download student paper here](./Experiments/ML-19-20_20-5.10_ValdatingMemorizingCapabilitesOfSpatialPooler.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/SpatialPoolerMemorizingExperiment84.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SpatialPoolerMemorizingExperiment84.cs)

## ML19/20-5.2. Improving of implementation of the Scalar encoder in HTM

@@ -137,7 +137,7 @@ The image classification is a classical problem of image processing; machine lea

[Download student paper here](./Experiments/ML-19-20_20-5.11_SchemaImageClassification.pdf)

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/SchemaImageClassificationExperiment.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SchemaImageClassificationExperiment.cs)

## Sequence Learning - Music Notes Experiment

@@ -152,7 +152,7 @@ To demonstrate learning of sequences, I have originally developed an experiment
Every music note is represented as a scalar value appearing in the sequence of notes. For example, the notes C, D, E, F, G and H can be associated with the scalar values C-0, D-1, E-2, F-3, G-4, H-5. Following that rule, the notes of a song were encoded. The very first experiment used the song _twinkle, twinkle little star_: [here](https://www.bethsnotesplus.com/2013/08/twinkle-twinkle-little-star.html).
Over time, the experiment has grown, but we have kept the original name '_Music Notes Experiment_'. In this experiment various outputs are generated, which trace the state of active columns and active cells during the learning process. Today, we use this experiment to learn how HTM learns sequences.
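The note-to-scalar mapping described above can be sketched as follows (illustrative Python; the experiment itself is implemented in C# in the repository):

```python
# Mapping from the text above: C-0, D-1, E-2, F-3, G-4, H-5.
NOTE_TO_SCALAR = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4, "H": 5}

def encode_melody(notes):
    """Map a sequence of note names to the scalar sequence fed to the encoder."""
    return [NOTE_TO_SCALAR[n] for n in notes]

print(encode_melody(["C", "C", "G", "G"]))  # [0, 0, 4, 4]
```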

-[Check out implementation here](../../NeoCortexApi/UnitTestsProject/SequenceLearningExperiments/MuscNotesExperiment.cs)
+[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SequenceLearningExperiments/MuscNotesExperiment.cs)

## On the Relationship Between Input Sparsity and Noise Robustness in SP (Paper)

@@ -234,7 +234,6 @@ Issue 70

The HTM feedforward network is a multilayer artificial neural architecture and a biologically inspired model of a single cortical column of the neocortex, the six-layered portion of the mammalian cerebral cortex from which higher cognitive functioning is understood to originate. Previous findings in the neurosciences show that there are two feedforward networks in a single cortical column of the human brain; among them, the L4-L2 feedforward network plays the active role in learning new things from the environment. In the L4-L2 arrangement, the lower layer L4 takes sensory data directly as input from the environment and passes the processed data to layer L2, which performs the cognitive predicting and learning functions of the brain. In this paper, an implementation of the layer L4-L2 HTM feedforward network is demonstrated using the most recent version of the NeocortexApi package, an open-source implementation of the Hierarchical Temporal Memory Cortical Learning Algorithm. It is also examined how the implemented L4-L2 feedforward network behaves at the upper layer L2 for sequence learning and prediction using the HTM classifier. Aside from that, Numenta's findings and guidelines are reviewed as well. The results show that the proposed L4-L2 HTM feedforward network built with NeocortexApi can learn and predict sequential data patterns with precision in the upper layer region.

-[Download student paper here](./Experiments/ML-20-21_20-5.2_HTM%20FeedForward_Network.pdf)

[Download student paper here](./Experiments/ML-20-21_20-5.2_HTM FeedForward_Network.pdf)

[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/FeedForwardExperiment_L4L2.cs)
@@ -20,7 +20,7 @@
using System.Threading.Tasks;
using NeoCortexApi.Classifiers;

-namespace NeoCortexApi.Experiments.SequenceLearningExperiments
+namespace NeoCortexApi.Experiments
{
[TestClass]
public class MuscNotesExperiment
@@ -48,10 +48,10 @@ public void SpatialSimilarityExperimentTest()
DutyCyclePeriod = 100,
MinPctOverlapDutyCycles = minOctOverlapCycles,
StimulusThreshold = 5,
-GlobalInhibition = true,
+GlobalInhibition = false,
NumActiveColumnsPerInhArea = 0.02 * numColumns,
PotentialRadius = (int)(0.15 * inputBits),
-LocalAreaDensity = -1,//0.5,
+LocalAreaDensity = 0.5,
ActivationThreshold = 10,
MaxSynapsesPerSegment = (int)(0.01 * numColumns),
Random = new ThreadSafeRandom(42)
4 changes: 3 additions & 1 deletion NeoCortexApi/NeoCortexApi/SpatialPooler.cs
@@ -947,7 +947,9 @@ public virtual int[] InhibitColumnsLocalOriginal(Connections mem, double[] overl
int numActive = (int)(0.5 + density * neighborhood.Length);

//
-// Column is added as a winner one if the number of higher overlapped columns than the actual column
+// numActive is the number of maximal active columns in the neighborhood.
+// numHigherOverlap is the number of columns in the neighborhood that have higher overlap than the referencing column.
+// Column is added as a winner one if the number of higher overlapped columns
// is less than number of active columns defined by density and radius.
if (numHigherOverlap < numActive)
{
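The winner test in `InhibitColumnsLocalOriginal` shown above can be sketched in isolation: a column wins locally when fewer than `numActive` columns in its neighborhood have a higher overlap. This is an illustrative Python rendering of that rule, not the C# implementation:

```python
def is_local_winner(overlaps, column, neighborhood, density):
    """Mirror of the winner test: numHigherOverlap < numActive."""
    num_active = int(0.5 + density * len(neighborhood))
    num_higher = sum(1 for c in neighborhood if overlaps[c] > overlaps[column])
    return num_higher < num_active

overlaps = {0: 5, 1: 9, 2: 7, 3: 3}
# With density 0.5 over a 4-column neighborhood, numActive = 2,
# so only the two strongest columns survive inhibition.
print([c for c in overlaps if is_local_winner(overlaps, c, list(overlaps), 0.5)])  # [1, 2]
```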
43 changes: 36 additions & 7 deletions NeoCortexApi/NeoCortexApi/TemporalMemory.cs
@@ -90,6 +90,26 @@ public void Init(Connections conn)
/// <returns></returns>
/// <remarks>Note: PredictiveCells are not calculated here. They are calculated on demand from active segments.</remarks>
public ComputeCycle Compute(int[] activeColumns, bool learn)
+{
+return Compute(activeColumns, learn, null, null);
+}
+
+
+/// <summary>
+/// Performs the whole calculation of Temporal memory algorithm.
+/// Calculation takes two parts:
+/// <list type="number">
+/// <item>Calculation of the cells, which become active in the current cycle.</item>
+/// <item>Calculation of dendrite segments which become active in the current cycle.</item>
+/// </list>
+/// </summary>
+/// <param name="activeColumns"></param>
+/// <param name="learn"></param>
+/// <param name="externalPredictiveInputsActive">Experimental.</param>
+/// <param name="externalPredictiveInputsWinners">Experimental.</param>
+/// <returns></returns>
+/// <remarks>Note: PredictiveCells are not calculated here. They are calculated on demand from active segments.</remarks>
+public ComputeCycle Compute(int[] activeColumns, bool learn, int[] externalPredictiveInputsActive = null, int[] externalPredictiveInputsWinners = null)
{
Stopwatch sw = new Stopwatch();
sw.Start();
@@ -101,7 +121,7 @@ public ComputeCycle Compute(int[] activeColumns, bool learn)

sw.Restart();

-ActivateDendrites(this.connections, cycle, learn);
+ActivateDendrites(this.connections, cycle, learn, externalPredictiveInputsActive, externalPredictiveInputsWinners);

sw.Stop();

@@ -257,8 +277,15 @@ protected virtual ComputeCycle ActivateCells(Connections conn, int[] activeColum
/// <param name="conn">the Connectivity</param>
/// <param name="cycle">Stores current compute cycle results</param>
/// <param name="learn">If true, segment activations will be recorded. This information is used during segment cleanup.</param>
-protected void ActivateDendrites(Connections conn, ComputeCycle cycle, bool learn)
+/// <seealso cref="">https://github.com/htm-community/htm.core/blob/master/src/htm/algorithms/TemporalMemory.cpp</seealso>
+protected void ActivateDendrites(Connections conn, ComputeCycle cycle, bool learn, int[] externalPredictiveInputsActive = null, int[] externalPredictiveInputsWinners = null)
{
+//if (externalPredictiveInputsActive != null)
+//    cycle.ActiveCells.AddRange(externalPredictiveInputsActive);
+
+//if (externalPredictiveInputsWinners != null)
+//    cycle.WinnerCells.AddRange(externalPredictiveInputsActive);

SegmentActivity activity = conn.ComputeActivity(cycle.ActiveCells, conn.HtmConfig.ConnectedPermanence);

var activeSegments = new List<DistalDendrite>();
@@ -355,8 +382,8 @@ public void Reset(Connections connections)


/// <summary>
-/// TM acitivates segments on the column in the previous cycle. This method locates such segments and
-/// adapts them.
+/// TM activated segments on the column in the previous cycle. This method locates such segments and
+/// adapts them and return owner cells of active segments.
/// </summary>
/// <param name="conn"></param>
/// <param name="columnActiveSegments">Active segments as calculated (activated) in the previous step.</param>
@@ -371,10 +398,10 @@ protected List<Cell> ActivatePredictedColumn(Connections conn, List<DistalDendri
List<DistalDendrite> matchingSegments, ICollection<Cell> prevActiveCells, ICollection<Cell> prevWinnerCells,
double permanenceIncrement, double permanenceDecrement, bool learn, IList<Synapse> activeSynapses)
{
+// List of cells that owns active segments. These cells will be activated in this cycle.
+// In previous cycle they are depolarized.
List<Cell> cellsOwnersOfActiveSegments = new List<Cell>();
-//Cell previousCell = null;
-//Cell segmOwnerCell;


foreach (DistalDendrite segment in columnActiveSegments)
{
if (!cellsOwnersOfActiveSegments.Contains(segment.ParentCell))
@@ -413,6 +440,8 @@ protected List<Cell> ActivatePredictedColumn(Connections conn, List<DistalDendri
{
AdaptSegment(conn, segment, prevActiveCells, permanenceIncrement, permanenceDecrement);

+//
+// Even if the segment is active, new synapses can be added that connect previously active cells with the segment.
int numActive = conn.LastActivity.PotentialSynapses[segment.SegmentIndex];
int nGrowDesired = conn.HtmConfig.MaxNewSynapseCount - numActive;

2 changes: 1 addition & 1 deletion NeoCortexApi/NeoCortexEntities/Entities/Connections.cs
@@ -838,7 +838,7 @@ public SegmentActivity ComputeActivity(ICollection<Cell> activeCellsInCurrentCyc
// This cell is the active in the current cycle.
// We step through all receptor synapses and check the permanence value of related synapses.
// Receptor synapses are synapses whose source cell (pre-synaptic cell) is the given cell.
-// Receptor synapses connect their axons to distal dendrite segments of other cells.
+// Receptor synapses connect the cell's axons to distal dendrite segments of other cells.
// The permanence value of this connection indicates the the cell owner of connected distal dendrite is expected
// to be activated in the next cycle.
// The segment owner cell in other column pointed by synapse sourced by this 'cell' is depolirized (in predicting state).
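The receptor-synapse walk described in the `ComputeActivity` comment above can be sketched as follows. The segment and synapse representations are simplified stand-ins for the NeoCortexEntities types, used only to show the counting logic:

```python
def compute_activity(active_cells, receptor_synapses, connected_permanence):
    """Count per-segment connected and potential synapses driven by active cells.

    receptor_synapses maps a pre-synaptic cell to (segment, permanence) pairs;
    a synapse counts as connected when its permanence reaches the threshold.
    """
    connected, potential = {}, {}
    for cell in active_cells:
        for segment, permanence in receptor_synapses.get(cell, []):
            potential[segment] = potential.get(segment, 0) + 1
            if permanence >= connected_permanence:
                connected[segment] = connected.get(segment, 0) + 1
    return connected, potential

synapses = {1: [("seg1", 0.6), ("seg2", 0.1)]}
print(compute_activity([1], synapses, 0.5))  # ({'seg1': 1}, {'seg1': 1, 'seg2': 1})
```

Segments whose connected count crosses the activation threshold become active, putting their owner cells into the predicting (depolarized) state for the next cycle.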
1 change: 1 addition & 0 deletions Python/ColumnActivityDiagram/draw_figure.py
@@ -10,6 +10,7 @@

import os

+# py -m ensurepip
# pip install plotly
# Without pip: py -m pip install plotly
# py draw_figure.py -fn "C:\dev\devops-daenet\NeoCortexApi\NeoCortexApi\UnitTestsProject\bin\Debug\netcoreapp3.1\exp.csv" -gn Digit -mc 1000 -ht 15 -yt "column indicies" -xt cycle -st "Predicted/Expected/Predictive Cells" -fign CortialColumn