Hi,

Using the convention below:

y(t) = A x(t) + B u(t) + eps(t)    # observation equation
x(t) = C x(t-1) + D u(t) + eta(t)  # state equation
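For concreteness, here is a toy simulation of this convention (scalar state and observation, made-up parameter values, purely for sanity-checking):

```r
set.seed(1)
N <- 100
A <- 1; B <- 0.5; C <- 0.9; D <- 0.2   # toy system "matrices" (scalars here)
u <- rnorm(N)                          # exogenous input u(t)
x <- numeric(N); y <- numeric(N)
x[1] <- 0
for (t in 2:N)
  x[t] <- C * x[t-1] + D * u[t] + rnorm(1, sd = 0.1)  # state equation
y <- A * x + B * u + rnorm(N, sd = 0.1)               # observation equation
```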

I modified the routine below (copied from
http://www.stat.pitt.edu/stoffer/tsa2/Rcode/Kall.R) to accommodate u(t), an
exogenous input to the system.

for (i in 2:N){
  xp[[i]] = C%*%xf[[i-1]] + D%*%u[i,]            # state prediction; u stored as an N x q matrix
  Pp[[i]] = C%*%Pf[[i-1]]%*%t(C) + Q
  sigtmp  = A[[i]]%*%Pp[[i]]%*%t(A[[i]]) + R
  sig[[i]] = (t(sigtmp)+sigtmp)/2                # make sure sig is symmetric
  siginv  = solve(sig[[i]])                      # siginv is sig[[i]]^{-1}
  K = Pp[[i]]%*%t(A[[i]])%*%siginv               # Kalman gain
  innov[[i]] = as.matrix(yobs[i,]) - A[[i]]%*%xp[[i]] - B%*%u[i,]  # observation eq. includes B u(t)
  xf[[i]] = xp[[i]] + K%*%innov[[i]]
  Pf[[i]] = Pp[[i]] - K%*%A[[i]]%*%Pp[[i]]
  like = like + log(det(sig[[i]])) + t(innov[[i]])%*%siginv%*%innov[[i]]
}
like = 0.5*like    # note: 'like' is the NEGATIVE log-likelihood, up to an additive constant
list(xp=xp, Pp=Pp, xf=xf, Pf=Pf, like=like, innov=innov, sig=sig, Kn=K)
}
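One step I have been experimenting with (not part of Stoffer's original code): since sig is symmetric positive definite, a Cholesky factorization can replace the separate solve() and det() calls, which is both cheaper and numerically stabler. A sketch on a toy 2x2 covariance:

```r
# For symmetric positive-definite sig, chol() returns an upper-triangular cS
# with t(cS) %*% cS == sig. Then chol2inv(cS) is sig^{-1}, and
# 2*sum(log(diag(cS))) equals log(det(sig)) -- one factorization serves
# both the inverse and the log-determinant.
sig    <- matrix(c(2, 0.5, 0.5, 1), 2, 2)  # example innovation covariance
cS     <- chol(sig)
siginv <- chol2inv(cS)
logdet <- 2 * sum(log(diag(cS)))
```

In the loop above, siginv and log(det(sig[[i]])) would both come from one chol() call per step.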

When I fit my problem, I get a positive log-likelihood, mainly because the log
of the determinant of my innovation covariance matrix is strongly negative.
I did not expect that; I thought these log-determinants should be positive.
Has anyone experienced this kind of instability?
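As a side check (toy numbers, nothing to do with my actual model), I note that the log-determinant of a perfectly valid covariance matrix is negative whenever its determinant is below 1:

```r
# A valid (symmetric positive-definite) covariance with det < 1:
S <- matrix(c(0.5, 0.1, 0.1, 0.4), 2, 2)
det(S)                                # 0.19: positive, so S is a legitimate covariance
log(det(S))                           # about -1.66: negative even though S is fine
stopifnot(all(eigen(S)$values > 0))   # confirm positive definiteness
```

So perhaps the sign alone is not the problem, but I would still like to understand the behaviour.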

Also, I have about 800 sample points, and the routine above becomes very slow
when plugged into optim. Could anyone share a faster way to compute the
Kalman filter?
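One thing I have tried (a sketch only, scalar local-level model, all names mine): preallocating plain numeric vectors instead of growing lists, which keeps the loop body cheap:

```r
# Negative log-likelihood (up to a constant) for a scalar model
#   y(t) = x(t) + eps(t),  x(t) = phi*x(t-1) + eta(t),
# with var(eta) = q, var(eps) = r. Vectors are preallocated up front.
fast_kf_loglik <- function(y, phi, q, r, x0 = 0, p0 = 1e3) {
  N  <- length(y)
  xf <- numeric(N); pf <- numeric(N)   # preallocated, no list growth
  like  <- 0
  xprev <- x0; pprev <- p0
  for (i in 1:N) {
    xp <- phi * xprev                  # state prediction
    pp <- phi^2 * pprev + q
    s  <- pp + r                       # innovation variance
    k  <- pp / s                       # Kalman gain
    e  <- y[i] - xp                    # innovation
    xf[i] <- xp + k * e                # filtered state
    pf[i] <- (1 - k) * pp
    like  <- like + log(s) + e^2 / s
    xprev <- xf[i]; pprev <- pf[i]
  }
  0.5 * like                           # negative log-likelihood up to a constant
}
```

The same preallocation idea should carry over to the matrix-valued filter, storing states in an N x p matrix and covariances in a p x p x N array instead of lists.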

Finally, optim does not converge on my defined feasible space. I have about 20
parameters to identify by maximum likelihood. Is there anything else I could
try? I have already tried most of the methods available in optim, and none of
them converge. Thank you.

- adschai

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.