Recently it has become clearer to me that step activation functions in
single-layer neural networks give the best performance in terms of both
learning speed and separation of similar inputs.
I use the sign (signum) function:

fn(x) =  1, x >= 0
fn(x) = -1, x < 0
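
In C, a minimal sketch of the hard step might look like this (the function
name is mine, not from the post):

```c
/* Hard step (signum-like) activation: maps any input to +1 or -1. */
float step_activation(float x) {
    return (x >= 0.0f) ? 1.0f : -1.0f;
}
```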
Or a soft version, the signed square root:

fn(x) =  sqrt(x), x >= 0
fn(x) = -sqrt(-x), x < 0
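
A corresponding sketch of the soft version, assuming the signed square root
reading above:

```c
#include <math.h>

/* Soft step: signed square root. Keeps the sign of x but compresses
   magnitude, with a steep slope near zero like a smoothed step. */
float soft_step_activation(float x) {
    return (x >= 0.0f) ? sqrtf(x) : -sqrtf(-x);
}
```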
There are very fast bit-hack approximations of the square root function if
you need them.
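
One well-known example (my choice here, not named in the post) is the
Quake III style fast inverse square root, from which sqrt(x) can be recovered
as x * rsqrt(x):

```c
#include <stdint.h>
#include <string.h>

/* Classic "fast inverse square root" bit hack. Approximates 1/sqrt(x)
   for x > 0 using a magic-constant initial guess plus one Newton step. */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* reinterpret float bits as int */
    i = 0x5f3759df - (i >> 1);       /* magic-constant initial guess  */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y);   /* one Newton-Raphson refinement */
    return y;
}

/* sqrt(x) = x * (1/sqrt(x)); valid for x >= 0. */
float fast_sqrt(float x) {
    return x * fast_rsqrt(x);
}
```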
Anyway, this paper provides some justification:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3921404/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3921404/)