NeoCortexApi/Documentation/experiments.md
[Download student paper here](./Experiments/ML-19-20_20-5.4_CellsPerColumnExperiment_Paper.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/CellsPerColumnExperimentTest.cs)
#### **HTM Sparsity**
The ability to identify and predict temporal sequences of sensory input is necessary for survival in a natural environment. Hierarchical temporal memory (HTM), a theoretical framework for sequence learning in the neocortex, was recently proposed based on numerous properties shared by cortical neurons. In this paper, we analyze how the sequence learning behavior of the spatial pooler and the temporal memory layer depends on HTM Sparsity. We found the value of HTM Sparsity that yields optimal learning for the given input sequence. We also showed the effect of changing Width and Input Bits on learning while keeping HTM Sparsity constant, and devised a relation between HTM Sparsity and Max for optimal learning of the given sequence.
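The relation between Width and Input Bits described above can be illustrated numerically. A minimal sketch (in Python, not the C# experiment code), assuming HTM Sparsity is the ratio of active encoder bits (Width, W) to total input bits (N):

```python
# Illustrative sketch only; assumes HTM Sparsity = W / N, i.e. the
# fraction of active encoder bits among all input bits.

def htm_sparsity(w: int, n: int) -> float:
    """Ratio of active encoder bits (Width) to total input bits (N)."""
    return w / n

# Scaling W and N together keeps the sparsity constant, which is the
# kind of controlled variation the paper describes.
pairs = [(5, 100), (10, 200), (20, 400)]
ratios = [htm_sparsity(w, n) for w, n in pairs]
print(ratios)  # [0.05, 0.05, 0.05]
```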
[Download student paper here](./Experiments/ML-19-20_20-5.4_HtmSparsityExperiments_Paper.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/HtmSparsityTest.cs)
#### **Parameter Change Experiment**
Hierarchical Temporal Memory (HTM) is based on the supposition that the world has structure and is therefore predictable. The development of HTM for artificial neural networks has advanced the field of artificial intelligence and is leading computing intelligence into a new age. In this paper, we studied learning parameters such as Width (W), Input Bits (N), the Max and Min values, and the number of columns, which contribute most to optimizing the sequence learning behavior of the spatial pooler and the temporal memory layer. We also performed an experiment to obtain a stable Spatial Pooler output by tuning the boost and duty cycles. We evaluated each of these parameters against the theoretical and practical framework and summarized the results in graphical diagrams.
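A parameter study like the one described above can be sketched as a simple grid sweep. The sketch below is hypothetical Python (the actual experiment is C#), and the parameter names merely mirror the paper's terminology, not the real NeoCortexApi configuration class:

```python
# Hypothetical parameter-sweep skeleton. Names (W, N, Max, Min,
# num_columns) follow the paper's wording, not NeoCortexApi's API.
from itertools import product

def run_experiment(params: dict) -> float:
    """Placeholder: train the SP/TM pipeline and score learning quality."""
    ...

grid = {
    "W": [5, 10],              # active encoder bits
    "N": [100, 200],           # total input bits
    "Max": [100.0],            # upper bound of the scalar encoder range
    "Min": [0.0],              # lower bound of the scalar encoder range
    "num_columns": [1024, 2048],
}

# Every combination of the listed values becomes one experiment config.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 8
```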
[Download student paper here](./Experiments/ML-19-20_20-5.4_ParameterChangeExperiment_Paper.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/InputBitsExperimentTest.cs)
## Performance Spatial Pooler between Global and Local Inhibition
[Download student paper here](./Experiments/ML-19-20_20-5.7_PerformanceSpatialPooler-between-Global-and-Local-Inhibition.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SpatialPoolerInhibitionExperimentalTests.cs)
## Investigation of Hierarchical Temporal Memory Spatial Pooler's Noise Robustness against Gaussian noise
[Download student paper here](./Experiments/ML-19-20_20-5.12_SpatialPooler_NoiseRobustness.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/CortexNetworkTests/GaussianNoiseExperiment.cs)
## Validate Memorizing capabilities of SpatialPooler
[Download student paper here](./Experiments/ML-19-20_20-5.10_ValdatingMemorizingCapabilitesOfSpatialPooler.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SpatialPoolerMemorizingExperiment84.cs)
## ML19/20-5.2. Improving of implementation of the Scalar encoder in HTM
[Download student paper here](./Experiments/ML-19-20_20-5.11_SchemaImageClassification.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SchemaImageClassificationExperiment.cs)
## Sequence Learning - Music Notes Experiment
Every music note is represented as a scalar value, and the notes appear in a sequence. For example, the notes C, D, E, F, G, and H can be associated with the scalar values C-0, D-1, E-2, F-3, G-4, H-5. By following that rule, the notes of a song are encoded as a sequence of scalar values. The very first experiment used the song _twinkle, twinkle little star_: [here](https://www.bethsnotesplus.com/2013/08/twinkle-twinkle-little-star.html).
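The note-to-scalar rule above can be sketched in a few lines. This is an illustrative Python sketch, not the C# experiment code:

```python
# Note-to-scalar mapping as described in the text: C-0, D-1, E-2,
# F-3, G-4, H-5 (illustrative; the experiment itself is in C#).
NOTE_TO_SCALAR = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4, "H": 5}

def encode(notes: list[str]) -> list[int]:
    """Map each note in a sequence to its scalar value."""
    return [NOTE_TO_SCALAR[n] for n in notes]

print(encode(["C", "D", "G", "E"]))  # [0, 1, 4, 2]
```

The resulting scalar sequence is what a scalar encoder turns into SDR input for the HTM layers.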
Over time, the experiment has grown, but we have kept the original name '_Music Notes Experiment_'. In this experiment various outputs are generated, which trace the state of active columns and active cells during the learning process. Today, we use this experiment to learn how HTM learns sequences.
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/SequenceLearningExperiments/MuscNotesExperiment.cs)
## On the Relationship Between Input Sparsity and Noise Robustness in SP (Paper)
Issue 70
The HTM feed-forward network is a multilayer artificial neural network that is a biologically inspired model of a single cortical column of the neocortex, the six-layered portion of the mammalian cerebral cortex where higher cognitive functioning is understood to originate. Previous findings in neuroscience show that there are two feed-forward networks in a single cortical column of the human brain; among them, the L4-L2 feed-forward network plays the active role in learning new things from the environment. Within the L4-L2 feed-forward network, the lower layer L4 takes sensory data directly as input from the environment and passes the processed data to layer L2, which performs the brain's cognitive predicting and learning functions. In this paper, we demonstrate an implementation of the L4-L2 HTM feed-forward network using the most recent version of the NeoCortexApi package, an open-source implementation of the Hierarchical Temporal Memory Cortical Learning Algorithm. We also examine how the implemented L4-L2 feed-forward network behaves at the upper layer L2 for sequence learning and prediction using the HTM Classifier. Aside from that, Numenta's findings and guidelines are investigated as well. The results show that the proposed L4-L2 HTM feed-forward network with NeoCortexApi can learn and predict sequential data patterns with precision in the upper layer region.
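The L4-to-L2 data flow described above can be sketched schematically. Class and method names below are hypothetical Python stand-ins; the real implementation is the C# `FeedForwardExperiment_L4L2.cs`:

```python
# Schematic sketch of the L4 -> L2 feed-forward arrangement: L4
# receives sensory input directly, L2 consumes L4's output. The
# per-layer transform is a placeholder, not real SP/TM processing.

class Layer:
    """Stand-in for one HTM layer (spatial pooler + temporal memory)."""
    def __init__(self, name: str):
        self.name = name

    def compute(self, sdr: set[int]) -> set[int]:
        # Placeholder transform; a real layer would emit the SDR of
        # active cells after spatial pooling and temporal memory.
        return {bit + 1 for bit in sdr}

def feed_forward(sensory_input: set[int]) -> set[int]:
    l4 = Layer("L4")  # takes sensory data directly from the environment
    l2 = Layer("L2")  # learns and predicts on L4's processed output
    return l2.compute(l4.compute(sensory_input))

print(sorted(feed_forward({1, 2, 3})))  # [3, 4, 5]
```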
[Download student paper here](./Experiments/ML-20-21_20-5.2_HTM%20FeedForward_Network.pdf)
[Check out implementation here](../../NeoCortexApi/NeoCortexApi.Experiments/FeedForwardExperiment_L4L2.cs)