On Fri, 28 Apr 2017, T.Riedle wrote:

Dear all,

I am trying to run an ADF test using the adf.test() function in the tseries package and the ur.df() function in the urca package. The results contrast sharply: whilst adf.test() indicates stationarity, which is in line with the corresponding graph, ur.df() indicates non-stationarity.

Why does this happen?

This is most likely due to different settings for the deterministic part of the test regression and/or the number of lags. In particular, the defaults of ur.df() (type = "none", i.e., no intercept and no trend, with lags = 1) are often not suitable for practical applications and may lead to spurious results.
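
For illustration, here is a minimal sketch with a simulated stationary AR(1) series (the series x and all parameters are made up for this example):

library(tseries)
library(urca)

set.seed(1)
x <- 50 + arima.sim(list(ar = 0.5), n = 250)  # stationary AR(1) around 50

adf.test(x)        # test regression includes intercept and trend
summary(ur.df(x))  # defaults: type = "none", lags = 1

Because the simulated series has a non-zero mean, the type = "none" regression in the second call is misspecified and can easily fail to reject the unit root even though the series is stationary.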

Could anybody explain the adf.test() function in more detail? How does adf.test() select the number of lags (is it AIC or BIC?), and how does it take an intercept and/or a trend into account?

adf.test() always includes both an intercept and a deterministic trend in the test regression, and the default number of lags is not selected by AIC or BIC but by the heuristic trunc((length(x) - 1)^(1/3)). (ur.df(), in contrast, can select lags by information criteria via its selectlags argument.)
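
Concretely, the following sketch reproduces adf.test()'s defaults and matches ur.df() to them (continuing with the simulated series x from above):

k <- trunc((length(x) - 1)^(1/3))            # adf.test()'s default lag order
adf.test(x, k = k)                           # identical to adf.test(x)
summary(ur.df(x, type = "trend", lags = k))  # same test regression in urca

Once the deterministic part and the lag order coincide, both functions fit the same test regression, so the Dickey-Fuller test statistics should agree.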

At

https://stats.stackexchange.com/questions/168332/r-augmented-dickey-fuller-adf-test/168355#168355

I've posted an overview that I had originally written for my students. It might be helpful for you as well.

hth,
Z

