You are required to provide an activation-function code for every active
layer, i.e. the number of hidden layers plus one (for the output layer).
Your network has one hidden layer, so actfns must contain two codes; in
your case actfns=c(2,2) would suffice.
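As a rough sketch, your original example would then look like the following. This assumes, per the package documentation you quoted, that code 2 denotes the hyperbolic tangent; I have not run this against every version of the package, so treat it as illustrative:

```r
# Sketch only: requires the CRAN package "neural".
library(neural)

trainInput   <- matrix(rnorm(10))
trainAnswers <- ifelse(trainInput < 0, -1, 1)   # targets in {-1, 1}

# One hidden layer with one neuron means TWO activation functions:
# one for the hidden layer and one for the output layer.
# Code 2 = hyperbolic tangent (per the package documentation).
trainingData <- mlptrain(inp = trainInput, neurons = 1,
                         out = trainAnswers, it = 1000,
                         actfns = c(2, 2))

# tanh ranges over (-1, 1), so the network can now approach the
# -1 target for negative inputs.
testInput <- matrix(-2:2)
mlp(testInput, trainingData$weight, trainingData$dist,
    trainingData$neurons, trainingData$actfns)
```

Passing a single value (actfns=2) fails because the package checks that the length of actfns equals the number of active layers, hence the "Different activation function and active layer number" error you saw.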


BKMooney wrote:
> 
> Hi,
> 
> I am trying to build a VERY basic neural network as practice before
> hopefully increasing my scope.  To do so, I have been using the package
> "neural" and the MLP-related functions (mlp and mlptrain) it provides.
> 
> So far, I have created a basic network, but I have been unable to change
> the default activation function.  If someone has a suggestion, please
> advise.
> 
> The goal of the network is to properly classify a number as positive or
> negative.  Simple 1-layer network with a single neuron in each layer.
> 
> R code:
> trainInput <- matrix(rnorm(10))
> trainAnswers <- ifelse(trainInput < 0, -1, 1)
> 
> trainNeurons <- 1
> 
> trainingData <- mlptrain(inp=trainInput, neurons=trainNeurons,
>                          out=trainAnswers, it=1000)
> 
> ##  To call this network, we can see how it works on a set of known
> ##  positive and negative values
> 
> testInput <- matrix(-2:2)
> mlp(testInput, trainingData$weight, trainingData$dist,
>     trainingData$neurons, trainingData$actfns)
> 
> The output will vary, but on my computer it was:
>             [,1]
> [1,] 0.001043291
> [2,] 0.001045842
> [3,] 0.072451270
> [4,] 0.950744548
> [5,] 0.950931168
> 
> So it is instead classifying the negatives as 0 and the positives as 1
> (approximately, anyhow; increasing the number of iterations, e.g. it=5000,
> makes that clearer).  This suggests a network with the activation function
> 1/(1+exp(-x)), which can never produce the -1 values that the answers
> contain.
> 
> The documentation for package neural specifies the parameter "actfns",
> which should be a list containing the numeric codes for the activation
> functions of each layer.  However, any time I try to supply a value for
> "actfns" (such as actfns=2 for the hyperbolic tangent), I get the error:
> 
> "Different activation function and active layer number"
> 
> If anyone can shed light on what I'm doing wrong here with the activation
> functions, or on how to change them, I'd really appreciate it.
> 
> Thanks!
> 
> 
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Activation-Functions-in-Package-Neural-tp24629050p25140965.html
Sent from the R help mailing list archive at Nabble.com.

