Regarding ML implementations of neuroscience findings: assuming some sequential input (e.g. a 1D numeric/binary stream), has your team considered simply creating a standard ANN (backprop-trained, if differentiable) in which each neuron must receive its inputs in a predefined sequence (e.g. input X1 must fire before input X2)? Multiple neurons may connect to exactly the same previous-layer neurons but impose different constraints on their firing order (higher-layer neuron A requires X1;X2, higher-layer neuron B requires X2;X1). This ordering constraint would introduce a non-linearity into the network in and of itself. The network would have to be sparsely connected, because the number of possible input orderings for a neuron grows factorially with its number of inputs (k! orderings for k inputs).
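A minimal sketch of what I mean, assuming a discrete-time setting where each input's spike time is recorded (the function name and dict-based interface here are purely illustrative, not an existing API):

```python
def order_gated_activation(spike_times, required_order):
    """Fire (return 1.0) only if the inputs spiked in the given order.

    spike_times: dict mapping input index -> time step at which that
                 input fired (absent if it never fired).
    required_order: tuple of input indices; e.g. (0, 1) means input 0
                    must fire strictly before input 1.
    """
    times = [spike_times.get(i) for i in required_order]
    if any(t is None for t in times):
        return 0.0  # a required input never fired
    # Strictly increasing spike times = required order respected.
    return 1.0 if all(a < b for a, b in zip(times, times[1:])) else 0.0

# Two higher-layer neurons wired to the SAME inputs (0 and 1),
# differing only in their required firing order:
spikes = {0: 2, 1: 5}                       # input 0 fires at t=2, input 1 at t=5
a = order_gated_activation(spikes, (0, 1))  # neuron A: X1 before X2
b = order_gated_activation(spikes, (1, 0))  # neuron B: X2 before X1
```

Since neurons A and B see identical inputs yet produce different outputs, the ordering constraint alone already acts as a non-linear (indeed discontinuous) operation; making it backprop-trainable would presumably require a soft relaxation, e.g. a sigmoid over spike-time differences.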
--- [Visit Topic](https://discourse.numenta.org/t/standard-ann-with-sequentially-activated-neuronal-input/7021/1) to respond.