An optional note: to increase the number of inputs/connectivity for each neuron 
(e.g. A), each sequential input (e.g. X1, X2) could detect some static 
combination of inputs (Numenta uses the term "pattern" here) from the 
previous layer (e.g. x1-1, x1-2, x1-3, x1-4; x2-1, x2-2, x2-3, x2-4, x2-5). Each 
of these combinations (e.g. a summation) could be passed through an activation 
function (to reach some detection threshold). Sequence order would then only be 
enforced between the amalgamated inputs (X1, X2), as in the sketch below.
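
A minimal Python sketch of that idea, assuming step activations; the group
indices, sizes, and threshold are placeholders of mine, not from any Numenta
implementation:

```python
import numpy as np

# Hypothetical pattern groups: each amalgamated input (X1, X2) pools a
# static subset of previous-layer units. Indices, group sizes, and the
# threshold are illustrative placeholders.
PATTERNS = {
    "X1": [0, 1, 2, 3],     # x1-1 .. x1-4 in the previous layer
    "X2": [4, 5, 6, 7, 8],  # x2-1 .. x2-5
}
THRESHOLD = 2.0  # detection threshold for the step activation

def detect(prev_layer, indices, threshold=THRESHOLD):
    """Sum a static combination of previous-layer outputs and apply a
    step activation: the 'pattern' is detected only above the threshold."""
    return float(np.sum(prev_layer[indices]) > threshold)

def neuron_a_fires(history):
    """Sequence order is enforced only between the amalgamated inputs:
    X1's pattern must be detected at step t, X2's at step t+1."""
    x1 = [detect(frame, PATTERNS["X1"]) for frame in history]
    x2 = [detect(frame, PATTERNS["X2"]) for frame in history]
    return any(a == 1.0 and b == 1.0 for a, b in zip(x1, x2[1:]))

# X1's group is active at t=0, X2's group at t=1 -> neuron A fires.
history = [np.array([1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float),
           np.array([0, 0, 0, 0, 1, 1, 1, 0, 0], dtype=float)]
print(neuron_a_fires(history))  # True
```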

Standard artificial neural networks apply some function (e.g. W*X + b) to the 
input of each neuron, along with some non-linear activation function (e.g. 
sigmoid/ReLU). They do not discriminate with respect to the order in which the 
input (previous-layer) neurons fire. Given that we know neurons (through 
distal connections) can become biased towards firing based on some previous 
network state, it would make sense to test such a condition in an existing 
(known-functional) architecture/learning algorithm.
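
For instance, one rough sketch of what that test could look like: a standard
layer next to a variant where the previous hidden state shifts each neuron's
effective bias. The distal weights and their scale here are my own placeholder
assumptions, not a published mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Standard feed-forward step: W @ x + b sees only the current input
# vector, so which previous-layer neurons fired "first" is invisible.
W = rng.normal(size=(4, 9))
b = np.zeros(4)

def standard_step(x):
    return sigmoid(W @ x + b)

# Placeholder distal mechanism: the previous hidden state shifts each
# neuron's effective bias, so the same input yields different outputs
# depending on prior network state. W_distal and its 0.5 scale are
# illustrative assumptions.
W_distal = 0.5 * rng.normal(size=(4, 4))

def biased_step(x, h_prev):
    distal_bias = W_distal @ h_prev  # depolarization from prior state
    return sigmoid(W @ x + b + distal_bias)

x = rng.random(9)
h0 = np.zeros(4)           # "cold" prior state
h1 = standard_step(x)      # prior state after seeing x once
print(biased_step(x, h0))  # same input...
print(biased_step(x, h1))  # ...different output under a different prior state
```

Training the biased layer with backprop on sequence data, against the standard
layer as a control, would be one way to run that test.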

Thanks, I will check out the paper.
