Hello. This is not a question about a bug or even best practices; rather, I'm trying to understand the philosophy or theory behind why certain portions of the R codebase are written as they are. If this question is better posed elsewhere, please point me in the proper direction.
In the thread about the issues with the Tukey line, Martin said [1]:

> when this topic came up last (for me) in Dec. 2014, I did spend about 2 days'
> work (or more?) to get the FORTRAN code from the 1981 book (which is
> abbreviated the "ABC of EDA") from a somewhat useful OCR scan into compilable
> Fortran code and then f2c'ed, wrote an R interface function, found problems ...

I have seen this in the R source code and elsewhere: native Fortran is converted to C via f2c and then run as C within R. This is notwithstanding R's ability to use Fortran directly, either through .Fortran() [2] or via .Call() using simple C helper wrappers [3] (a minimal sketch of the .Fortran() route appears below, after the references). I'm curious as to the reason. Is it because much of the code was written before Fortran 90 compilers were freely available? Does it help with maintenance or make debugging easier? Is it faster or more likely to compile cleanly?

Thank you,
Avi

[1] https://stat.ethz.ch/pipermail/r-devel/2017-May/074363.html
[2] Such as kmeans() does for the Hartigan-Wong method in the stats package
[3] Such as the mvtnorm package does
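
For concreteness, here is a minimal sketch of the direct .Fortran() route mentioned above. The subroutine name "dadd" and the file "dadd.f" are hypothetical, chosen only for illustration; they are not taken from any R package, and a real package would register its native routines rather than calling dyn.load() by hand.

## Hypothetical Fortran source, dadd.f (compiled with:  R CMD SHLIB dadd.f)
##       subroutine dadd(n, x, y, z)
##       integer n
##       double precision x(n), y(n), z(n)
##       do 10 i = 1, n
##          z(i) = x(i) + y(i)
##  10   continue
##       end

## Load the compiled shared object (extension is platform-dependent)
dyn.load(paste0("dadd", .Platform$dynlib.ext))

x <- c(1, 2, 3)
y <- c(4, 5, 6)

## .Fortran() passes each argument by reference and returns a list of the
## (possibly modified) arguments after the subroutine has run.
out <- .Fortran("dadd",
                n = as.integer(length(x)),
                x = as.double(x),
                y = as.double(y),
                z = double(length(x)))

out$z   # the result vector filled in by the Fortran code

The .Call() alternative referenced in [3] would wrap the same compiled Fortran routine in a small C function instead, trading the convenience of .Fortran() for finer control over copying and R object handling.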