[Rd] attach and object masking
The library() function prints an information message when objects are masked on the search path by (or from) a newly attached package. The attach() function, however, does not. Is there a good reason for this? I have adapted the machinery for checking conflicts used in the library() function to attach(). If there are no objections, I will check this into R-devel. I have seen new users get into terrible trouble with attach(), e.g. 1) attaching the same database multiple times, or 2) masking a vector in an attached data frame with a transformed version in the global environment and *then* re-running a script that transforms the transformed variable. For example, run this snippet twice and you might get a surprise:

foo <- data.frame(x=1:10)
attach(foo)
x <- x/2

A warning would certainly help. Incidentally, (1) occurs in the example for the beavers data set. I didn't find any other problems. __ [EMAIL PROTECTED] mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
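A minimal sketch of the trap in (2), spelling out why running the script a second time gives a surprise (variable names as in the snippet above):

```r
## On the first run, x on the right-hand side resolves to foo$x via the
## attached database; the result lands in the global environment and
## masks foo$x. On the "second run" the already-halved global copy is
## transformed again.
foo <- data.frame(x = 1:10)
attach(foo)
x <- x/2          # first run: RHS x is foo$x
firstRun <- x
x <- x/2          # re-run: RHS x is now the global, already-halved copy
secondRun <- x
detach(foo)
```

Here firstRun is foo$x/2 as intended, while secondRun is foo$x/4 -- which is exactly why a masking message from attach() would help.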
Re: [Rd] memory allocation problem under linux
On Mon, 2005-06-13 at 12:12 +0200, [EMAIL PROTECTED] wrote: > Scrive Prof Brian Ripley : > > You keep on sending similar messages -- this is at least the third. You > > need to find out where the segfault is occurring using gdb, and you have > > not told us. > Sorry for the repeated post (in 2 different mailing lists). > Tnx for your suggestion. Now I think I've found the problem. Try this:
>
> file foo.c:
>
> #include <R.h>
> int **box;
> void foo(){
>     int i;
>     box = (int**) R_alloc(1, sizeof(int *));
> }
>
> Compiled with R CMD SHLIB foo.c
> In R:
> > dyn.load("foo.so")
> > .C("foo")
> *Segmentation fault*
> The problem disappears when the declaration of 'box' comes inside the function foo... Is this a bug?

It gives you a hint about what the problem is. Your global variable "box" is conflicting with another symbol. I tracked this down to the ncurses library, to which R is linked under Linux, but not Windows.

[EMAIL PROTECTED] ~]$ nm /usr/lib/libncurses.so | grep box
07a6d036 T box

This explains why your problem is platform-specific. You should declare "box" to be static.

M. __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
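The suggested fix is a one-word change: `static` gives the variable internal linkage, so it is not exported from the shared object and can no longer be bound to the `box` symbol that libncurses exports. A standalone sketch, with malloc standing in for R_alloc so it compiles outside R:

```c
#include <stdlib.h>

/* 'static' = internal linkage: the dynamic linker never sees this
 * symbol, so there is no clash with ncurses' box().  malloc is a
 * stand-in for R_alloc here so the sketch is self-contained. */
static int **box;

void foo(void)
{
    box = (int **) malloc(1 * sizeof(int *));
}
```
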
Re: [Rd] R 2.1.1 slated for June 20
On Tue, 2005-06-14 at 17:07 -0500, Marc Schwartz wrote: > On Tue, 2005-06-14 at 23:52 +0200, Peter Dalgaard wrote: > > Marc Schwartz <[EMAIL PROTECTED]> writes: > > > > > On Fri, 2005-06-10 at 14:57 +0100, Prof Brian Ripley wrote: > > > > On Fri, 10 Jun 2005, Peter Dalgaard wrote: > > > > > > > > > The next version of R will be released (barring force majeure) on June > > > > > 20th, with beta versions available starting Monday. > > > > > > > > > > Please do check them on your system *before* the release this time... > > > > > > > > Some things which it would be particularly helpful to have tested: > > > > > > > > > > - Bleeding-edge OSes, e.g. anyone running Fedora Core 4 test 3? (These > > > >often show up problems with bugs in the pre-release versions of > > > >components such as X11 and compilers.) > > > > > > > > > Just as a quick heads up, I installed FC4 Release ("Stentz") late > > > yesterday. > > > > > > R (Version 2.1.1 beta (2005-06-14)) compiles fine using: > > > > > > $ gcc --version > > > gcc (GCC) 4.0.0 20050519 (Red Hat 4.0.0-8) > > > > > > and make check-all passes with no problems. > > > > > > I have also installed all CRAN packages that do not require other 3rd > > > party drivers, etc. and there were no observed errors in those cases. > > > > > > So far, so good. > > > > > > If anything comes up, I will post a follow up. > > > > > > Best regards, > > > > > > Marc Schwartz > > > > Yep. Just tried the same on AMD64 (I had a bit of a fight converting > > my SuSE setup -- FC4 is quite unhappy about ReiserFS for some reason). > > A couple of f95 warnings whooshed by during the compile, that was all. > > > > By the way, I noticed that you can now "yum install R R-devel" and get > > everything straight from Fedora Extras. > > Yep. Tom "Spot" Callaway is the FE maintainer for R. I had a look at his RPM last night. It includes a patch for gcc4, which fails to build R with the fairly aggressive optimizations used by rpmbuild. 
("-O2 -D_FORTIFY_SOURCE=2" will reproduce the bug, IIRC, but I'm not upgrading my work PC just yet, so I can't be sure). I folded this into R-patched. It's a shame he didn't send a bug report or, if he did, I missed it. I also note he is using the patch that sets LANG=C, which is obsolete now that R supports utf-8 locales. I'll write to him (cc Marc) to let him know about these changes. The RedHat RPMS also use the shared library version of R. I've been thinking about making this change myself, despite the substantial speed penalty, since I've seen a growing number of people recompiling to get the shared library. The Red Hat choice forces my hand though: I don't want people upgrading from their R 2.1.0 to my R 2.1.1 and finding their installed packages don't work anymore. The $64,000 question is how many people are going to care about that 15-20% decrease in speed. Speak up now if it concerns you. > A lot of things for FC 4 have been moved to Extras and there are of > course new things (like R) there as well. The restrictions on non-GPL > components is still there, so things like MP3 functionality is available > via third party repos such as FreshRPMS, Livna, etc. > > This was a "balancing" act between trying to reduce (manage) the size of > the main distro and reducing real or perceived redundancies in packages. > > So, for example: > > 1. Include OO.org in Core, but move Gnumeric to Extras > > 2. Include Emacs in Core, but move XEmacs to Extras > > Needless to say, that resulted in "emotional" discussions. This underscores the fact Fedora, despite claiming to be a community project, is essentially Red Hat's rolling beta programme, and so must be more focused and less inclusive than Debian. (Just an observation. I don't want to start a discussion on FOSS politics) Martyn --- This message and its attachments are strictly confidential. If you are not the intended recipient of this message, please immediately notify the sender and delete it. 
Since its integrity cannot be guaranteed, its content cannot involve the sender's responsibility. Any misuse, any disclosure or publication of its content, either whole or partial, is prohibited, exception made of formally approved use __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 2.1.1 slated for June 20
On Thu, 2005-06-16 at 12:41 +0100, Prof Brian Ripley wrote: > On Thu, 16 Jun 2005, Martyn Plummer wrote: > > > On Tue, 2005-06-14 at 17:07 -0500, Marc Schwartz wrote: > >> On Tue, 2005-06-14 at 23:52 +0200, Peter Dalgaard wrote: > >>> Marc Schwartz <[EMAIL PROTECTED]> writes: > >>> > >>>> On Fri, 2005-06-10 at 14:57 +0100, Prof Brian Ripley wrote: > >>>>> On Fri, 10 Jun 2005, Peter Dalgaard wrote: > >>>>> > >>>>>> The next version of R will be released (barring force majeure) on June > >>>>>> 20th, with beta versions available starting Monday. > >>>>>> > >>>>>> Please do check them on your system *before* the release this time... > >>>>> > >>>>> Some things which it would be particularly helpful to have tested: > >>>> > >>>> > >>>>> - Bleeding-edge OSes, e.g. anyone running Fedora Core 4 test 3? (These > >>>>>often show up problems with bugs in the pre-release versions of > >>>>>components such as X11 and compilers.) > >>>> > >>>> > >>>> Just as a quick heads up, I installed FC4 Release ("Stentz") late > >>>> yesterday. > >>>> > >>>> R (Version 2.1.1 beta (2005-06-14)) compiles fine using: > >>>> > >>>> $ gcc --version > >>>> gcc (GCC) 4.0.0 20050519 (Red Hat 4.0.0-8) > >>>> > >>>> and make check-all passes with no problems. > >>>> > >>>> I have also installed all CRAN packages that do not require other 3rd > >>>> party drivers, etc. and there were no observed errors in those cases. > >>>> > >>>> So far, so good. > >>>> > >>>> If anything comes up, I will post a follow up. > >>>> > >>>> Best regards, > >>>> > >>>> Marc Schwartz > >>> > >>> Yep. Just tried the same on AMD64 (I had a bit of a fight converting > >>> my SuSE setup -- FC4 is quite unhappy about ReiserFS for some reason). > >>> A couple of f95 warnings whooshed by during the compile, that was all. > >>> > >>> By the way, I noticed that you can now "yum install R R-devel" and get > >>> everything straight from Fedora Extras. > >> > >> Yep. Tom "Spot" Callaway is the FE maintainer for R. 
> > > > I had a look at his RPM last night. It includes a patch for gcc4, which > > fails to build R with the fairly aggressive optimizations used by > > rpmbuild. ("-O2 -D_FORTIFY_SOURCE=2" will reproduce the bug, IIRC, but > > I'm not upgrading my work PC just yet, so I can't be sure). I folded > > this into R-patched. It's a shame he didn't send a bug report or, if he > > did, I missed it.
>
> Looks to me that this is a bug in gcc4, not in R. (It's not actually an > optimization.) I've resisted making any such changes until gcc 4.0.1 is > released - and that is held up on some outstanding bug fixes.
>
> (BTW, it is a really good idea to put a comment in the file as to why > unnecessary parentheses have been added.)

Mea culpa. I have asked Tom Callaway whether this was intended as a bug-fix for R or a workaround.

> It's a shame FC4 does not contain a well-tested high-quality compiler like > 3.4.4 or 3.3.6, especially a well-tested high-quality Fortran compiler.

That's not what Fedora is for, as I was discussing with Marc. Fedora users are willing (although perhaps unthinking) participants in Red Hat's beta testing cycle. By the time the bleeding-edge technology in Fedora gets to Red Hat's paying customers, it is well-tested and high-quality.

> > I also note he is using the patch that sets LANG=C, which is obsolete > > now that R supports utf-8 locales. I'll write to him (cc Marc) to let > > him know about these changes.
> >
> > The RedHat RPMS also use the shared library version of R. I've been > > thinking about making this change myself, despite the substantial speed > > penalty, since I've seen a growing number of people recompiling to get > > the shared library. The Red Hat choice forces my hand though: I don't > > want people upgrading from their R 2.1.0 to my R 2.1.1 and finding their > > installed packages don't work anymore. The $64,000 question is how many > > people are going to care about that 15-20% decrease in speed. Speak up > > now if it concerns you.
Re: [Rd] a bug in glm.fit() (PR#7947)
On Thu, 2005-06-16 at 07:06 +0100, Prof Brian Ripley wrote: > What is the bug? > > This is the same model: the `intercept' term affects the null model, not > the actual model. Just look at all the output.

I think the documentation is misleading (on a related issue, it still refers to the defunct glm.fit.null() function). I'll fix it. Luke, you should be using glm() instead of glm.fit().

> On Thu, 16 Jun 2005 [EMAIL PROTECTED] wrote:
> > glm.fit() gave me the same AIC's regardless of TRUE or FALSE intercept option.
> >
> >> myX <- as.matrix(1:10)
> >> myY <- 3+5*myX
> >> foo <- glm.fit(x=myX, y=myY, family = gaussian(link = "identity"), intercept=TRUE)
> >> foo$aic
> > [1] 38.94657
> >> foo <- glm.fit(x=myX, y=myY, family = gaussian(link = "identity"), intercept=FALSE)
> >> foo$aic
> > [1] 38.94657
> >> AIC(lm(myY~0+myX, data=data.frame(myY,myX)))
> > [1] 38.94657
> >> AIC(lm(myY~1+myX, data=data.frame(myY,myX)))
> > [1] -650.9808

__ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
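The distinction shows up as soon as the data are fitted with glm(), where the intercept enters the model formula rather than only the null model. A sketch reusing the data from the report:

```r
## With glm(), dropping or keeping the intercept changes the fitted
## model, so the AICs genuinely differ (the data are exactly linear
## with intercept 3 and slope 5, so the intercept model fits perfectly).
myX <- 1:10
myY <- 3 + 5 * myX
fit0 <- glm(myY ~ 0 + myX, family = gaussian)  # no intercept
fit1 <- glm(myY ~ 1 + myX, family = gaussian)  # with intercept
AIC(fit0) > AIC(fit1)  # TRUE
```
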
Re: [Rd] Failed "make check" under Fedora Core 4 (PR#7979)
On Wed, 2005-06-29 at 19:55 +0200, [EMAIL PROTECTED] wrote: > I downloaded R v2.1.1 earlier this morning to compile under Fedora Core > 4. It compiled without incident, but 'make check' failed. Below is the > relevant part of its report. Is this a known problem?

It is known that gcc 4.0.0 is buggy. The current recommendation, from Brian Ripley, is to use gcc 3.4.4 instead. We are awaiting the bug-fix release gcc 4.0.1. R compiled with the gcc 4.0.0 provided on FC4 does pass "make check-all". My guess is that your locally compiled version does not have the patches applied by Red Hat.

Martyn

> I used a locally compiled version of GCC v4.0.0 that reports
>
> [EMAIL PROTECTED] R-2.1.1]$ gcc -v
> Using built-in specs.
> Target: i686-pc-linux-gnu
> Configured with: ../gcc-4.0.0/configure --enable-languages=c,c++,f95,java
> Thread model: posix
> gcc version 4.0.0
> [EMAIL PROTECTED] R-2.1.1]$
>
> Kent Holsinger
> [EMAIL PROTECTED]
>
> make[3]: Entering directory `/home/kent/source-arc/R-2.1.1/tests'
> running code in 'eval-etc.R' ... OK
> comparing 'eval-etc.Rout' to './eval-etc.Rout.save' ...2a3,157
> > > eval / parse / deparse / substitute etc > > > > > > ##- From: Peter Dalgaard BSA <[EMAIL PROTECTED]> > > > ##- Subject: Re: source() / eval() bug ???
(PR#96) > > > ##- Date: 20 Jan 1999 14:56:24 +0100 > > > e1 <- parse(text='c(F=(f <- .3), "Tail area" = 2 * if(f < 1) 30 > else 90)')[[1]] > > > e1 > > c(F = (f <- 0.3), "Tail area" = 2 * if (f < 1) 30 else 90) > > > str(eval(e1)) > > Named num [1:2] 0.3 60 > > - attr(*, "names")= chr [1:2] "F" "Tail area" > > > mode(e1) > > [1] "call" > > > > > > ( e2 <- quote(c(a=1,b=2)) ) > > c(a = 1, b = 2) > > > names(e2)[2] <- "a b c" > > > e2 > > c("a b c" = 1, b = 2) > > > parse(text=deparse(e2)) > > expression(c("a b c" = 1, b = 2)) > > > > > > ##- From: Peter Dalgaard BSA <[EMAIL PROTECTED]> > > > ##- Date: 22 Jan 1999 11:47 > > > > > > ( e3 <- quote(c(F=1,"tail area"=pf(1,1,1))) ) > > c(F = 1, "tail area" = pf(1, 1, 1)) > > > eval(e3) > > F tail area > > 1.0 0.5 > > > names(e3) > > [1] "" "F" "tail area" > > > > > > names(e3)[2] <- "Variance ratio" > > > e3 > > c("Variance ratio" = 1, "tail area" = pf(1, 1, 1)) > > > eval(e3) > > Variance ratio tail area > >1.00.5 > > > > > > > > > ##- From: Peter Dalgaard BSA <[EMAIL PROTECTED]> > > > ##- Date: 2 Sep 1999 > > > > > > ## The first failed in 0.65.0 : > > > attach(list(x=1)) > > > evalq(dim(x) <- 1,as.environment(2)) > > > dput(get("x", env=as.environment(2)), control="all") > > structure(1, .Dim = as.integer(1)) > > > > > > e <- local({x <- 1;environment()}) > > > evalq(dim(x) <- 1,e) > > > dput(get("x",env=e), control="all") > > structure(1, .Dim = as.integer(1)) > > > > > > ### Substitute, Eval, Parse, etc > > > > > > ## PR#3 : "..." matching > > > ## Revised March 7 2001 -pd > > > A <- function(x, y, ...) { > > + B <- function(a, b, ...) { match.call() } > > + B(x+y, ...) 
> > + } > > > (aa <- A(1,2,3)) > > B(a = x + y, b = 3) > > > all.equal(as.list(aa), > > + list(as.name("B"), a = expression(x+y)[[1]], b = 3)) > > [1] TRUE > > > (a2 <- A(1,2, named = 3)) #A(1,2, named = 3) > > B(a = x + y, named = 3) > > > all.equal(as.list(a2), > > + list(as.name("B"), a = expression(x+y)[[1]], named = 3)) > > [1] TRUE > > > > > > CC <- function(...) match.call() > > > DD <- function(...) CC(...) > > > a3 <- DD(1,2,3) > > > all.equal(as.list(a3), > > + list(as.name("CC"), 1, 2, 3)) > > [1] TRUE > > > > > > ## More dots issues: March 19 2001 -pd > > > ## Didn't work up to and including 1.2.2 > > > > > > f <- function(...) { > > + val <- match.call(expand.dots=F)$... > > + x <- val[[1]] > > + eval.parent(substitute(missing(x))) > > + } > > > g <- function(...) h(f(...)) > > > h <- function(...) list(...) > > > k <- function(...) g(...) > > > X <- k(a=) > > > all.equal(X, list(TRUE)) > > [1] TRUE > > > > > > ## Bug PR#24 > > > f <- function(x,...) substitute(list(x,...)) > > > deparse(f(a, b)) == "list(a, b)" && > > + deparse(f(b, a)) == "list(b, a)" && > > + deparse(f(x, y)) == "list(x, y)" && > > + deparse(f(y, x)) == "list(y, x)" > > [1] TRUE > > > > > > tt <- function(x) { is.vector(x); deparse(substitute(x)) } > > > a <- list(b=3); tt(a$b) == "a$b" # tends to break when ... > > [1] TRUE > > > > > > > > > ## Parser: > > > 1 < > > + 2 > > [1] TRUE > > > 2 <= > > + 3 > > [1] TRUE > > > 4 >= > > + 3 > > [1] TRUE > > > 3 > > > + 2 > > [1] TRUE > > > 2 == > > + 2 > > [1] TRUE > > > ## bug till ... > > > 1 != > > + 3 > > [1] TRUE > > > > > > all(NULL == NULL) > > [1] TRUE > > > > > > ## PR #656 (related) > > > u <- runif(1); length(find(".Random.seed")) == 1 > > [1] TRUE > > > > > > MyVaR <<- "val";length(find("MyVaR")) == 1 > > [1]
Re: [Rd] looks in liblapack.a not liblapack.so
On Sat, 2005-09-17 at 17:19 -0500, Charles Geyer wrote: > I can't compile R-alpha on AMD 64. Rather than include a 1400 line script > I have put it on the web
>
> http://www.stat.umn.edu/~charlie/typescript.txt
>
> way down near the bottom it fails building lapack.so
>
> gcc -shared -L/usr/local/lib64 -o lapack.so Lapack.lo -llapack -lblas -lg2c -lm -lgcc_s
>
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld:
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../lib64/liblapack.a(dgecon.i):
> relocation R_X86_64_32 against `a local symbol' can not be used when making
> a shared object; recompile with -fPIC
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../lib64/liblapack.a:
> could not read symbols: Bad value
>
> The 'recompile with -fPIC' is bullsh*t. The problem is that it is looking
> in /usr/lib64/liblapack.a rather than /usr/lib64/liblapack.so.3, both of which
> exist. Some searching for this error message on Google shows a lot of
> questions about this problem but no solution that I found other than
>
> rm /usr/lib64/liblapack.a
>
> which I don't consider a solution. It will link with the .so as the bottom
> of the script shows
>
> snowbank$ cd src/modules/lapack
> snowbank$ gcc -shared -o lapack.so Lapack.lo -llapack -lblas -lg2c -lm -lgcc_s
>
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../x86_64-suse-linux/bin/ld:
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../lib64/liblapack.a(dgecon.i):
> relocation R_X86_64_32 against `a local symbol' can not be used when making
> a shared object; recompile with -fPIC
> /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../lib64/liblapack.a:
> could not read symbols: Bad value
> collect2: ld returned 1 exit status
>
> snowbank$ gcc -shared -o lapack.so Lapack.lo /usr/lib64/liblapack.so.3 -lblas -lg2c -lm -lgcc_s
>
> No problems with the second link.
>
> So what do I do? liblapack.so is there. I've linked other (non-R) programs
> to it.
So it SHOULD work with R. > > Either I can't read (possible) or the solution to this isn't in the gcc info > pages. > > System (more info in typescript). > >AMD 64 >SuSE linux 9.3 >GCC 3.3.5 > > I also observed the same problem with R-2.1.1 but didn't get around to > debugging it until today. > > It occurred to me that /usr/local/lib/liblapack.so.3 which is 32 bit > (because right now we are running only one R on both 32 and 64 bit and > that's where the 32 bit R finds it's shared libraries), but I don't > think that's the problem. Well maybe it is. How do I tell configure > NOT to add /user/local ? You would need to modify the LDFLAGS and CPPFLAGS environment variables, as these default to -L/usr/local/lib and -I/usr/local/include respectively. See Appendix B.3.3 of the R Installation and Administration manual, which gives a warning about 64-bit systems. You can also use the --with-readline configure flag to specify the exact location of the readline library you wish to use. I hope this helps. Martyn --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] looks in liblapack.a not liblapack.so
On Mon, 2005-09-19 at 17:10 +0200, Peter Dalgaard wrote: > Martyn Plummer <[EMAIL PROTECTED]> writes: > > > > The 'recompile with -fPIC' is bullsh*t. The problem is that is is looking > > > in /usr/lib64/liblapack.a rather than /usr/lib64/liblapack.so.3 both of > > > which > > > exist. Some searching for this error message on Google shows a lot of > > > questions about this problem but no solution that I found other than > > > > > > rm /usr/lib64/liblapack.a > > > > > > which I don't consider a solution. It will link with the .so as the > > > bottom > > > of the script shows > > > You would need to modify the LDFLAGS and CPPFLAGS environment variables, > > as these default to -L/usr/local/lib and -I/usr/local/include > > respectively. See Appendix B.3.3 of the R Installation and > > Administration manual, which gives a warning about 64-bit systems. > > > > You can also use the --with-readline configure flag to specify the exact > > location of the readline library you wish to use. > > How did _readline_ get into this? I meant --with-lapack. My fingers have their very own autocomplete feature, which is a little buggy. > As a curiosity, I tried looking at what Fedora Core 4 does with this. > So I looked for liblapack.a with locate, and it found one in /usr/lib > (on a 32bit system). Then I went to have a closer look at the library > and it turned out not to be there -- apparently the recent update to > lapack had wiped it out, but the locate database was not yet > rebuilt... Fedora have just split off a separate lapack-devel package containing the static library and the symlink liblapack.so. (Mandrake/Mandriva has been doing this for some time. I don't know about SuSE). The up2date service will recognize that it needs to update lapack, but I guess that it won't install lapack-devel, as it doesn't know you need it. It might have been better to do this in the next release, rather than as an update to FC4, but there you go. Better install lapack-devel manually. 
> This sort of suggests to me that removing the .a file might actually > be a sensible thing to do on SuSE as well. M. --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
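Pulling the thread's advice together, a sketch (paths and flags illustrative, not a verified recipe) of a configure invocation that keeps the build away from the stray 32-bit libraries under /usr/local and names the shared LAPACK explicitly:

```shell
# Override the -L/usr/local/lib and -I/usr/local/include defaults, and
# use --with-lapack (not --with-readline, as mistyped earlier in the
# thread) to point R at the shared LAPACK rather than liblapack.a.
LDFLAGS="-L/usr/lib64" CPPFLAGS="-I/usr/include" \
    ./configure --with-lapack="/usr/lib64/liblapack.so.3"
```
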
Re: [Rd] How do I install rbugs ? (PR#8183)
On Thu, 2005-10-06 at 11:33 +0200, Roger Bivand wrote: > On Thu, 6 Oct 2005 [EMAIL PROTECTED] wrote:
> > Dear rbugs,
> >
> > 6 months ago the homepage of R showed how I could install and do a test run of rbugs. This information is not available anymore. I would appreciate if you could help me installing rbugs and making it run properly.
>
> Please NEVER use the R-bugs reporting system unless you are absolutely sure that you are reporting a major failure in the software - do read the guidance notes! Especially on release day, reporting non-bugs does raise pressure unnecessarily.
>
> Google on "rbugs" gives:
>
> http://cran.r-project.org/src/contrib/Descriptions/rbugs.html
>
> Please also read http://www.r-project.org/posting-guide.html carefully.

This looks like a semantic error. The address R-bugs is not a mailing list for the discussion of R and BUGS (Bayesian Inference using Gibbs Sampling), but for reporting "bugs" (i.e. programming errors) in R. A moment of research would have revealed this. Ulf is probably thinking of the article published in R News by Sun Yan, co-author of the rbugs package. It is still available:

http://cran.r-project.org/doc/Rnews/

Martyn --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Post processing need for installing packages in rpms.
On Tue, 2005-10-11 at 13:11 +0100, Prof Brian Ripley wrote: > On Tue, 11 Oct 2005, José Matos wrote:
> > Hello, I maintain some packages in Fedora Extras for R related modules.
> > Until R 2.2.0 I used for post processing (both after installing and removing the package) the following lines:
> >
> > %{_bindir}/R CMD perl %{_libdir}/R/share/perl/build-help.pl --htmllists
> > cat %{_libdir}/R/library/*/CONTENTS > %{_libdir}/R/doc/html/search/index.txt
> >
> > Typically %{_bindir} is /usr/bin and %{_libdir} is /usr/lib or /usr/lib64.
> > The purpose of those lines is to enable access to the module documentation from R help. The first refers to the html and the second to the text help variant.
> > With R 2.2.0 build-help.pl no longer has the --htmllists option. Is there any easy replacement, or is there any other method to achieve the same results?
>
> It is no longer needed: the information is now built by R at runtime.
>
> > FWIW, I have searched through the release notes as well as through the documentation for sys admins and for package developers without any success.
>
> For an outsider it might not be obvious that the NEWS entry
>
> o R_HOME/doc/html/packages.html is now remade by R not Perl code. This may result in small changes in layout and a change in encoding (to UTF-8 where supported).
>
> refers to this. (Note that build-help.pl --htmllists was never documented as part of R's API.)
>
> I checked Martyn Plummer's R.spec on CRAN, but that is out-of-date. Please look at the R.spec in his current SRPM, which works for me.

Sorry about that. I had forgotten that I put a lone spec file in there. I have removed it, since it seems likely that I will do the same thing in the future. The perl hack used to be required because rpmbuild installs R in a build root which is different from the final installation directory. The reason you don't see an error message from the post install script is that it is redirected to /dev/null.
I'll fix this in the next version. Martyn --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 2.2.2-1 RPM build problem and solution on RH AS 4 x86_64
On Wed, 2006-01-11 at 17:26 -0500, Dan Lipsitt wrote: > I have a dual Xeon x86_64 system running Red Hat AS 4. There are no > x86_64 rpms in http://cran.us.r-project.org/bin/linux/redhat/el4/ (the > i386 ones are a point release behind anyway), and the fc4 rpms have a > whole web of dependencies I don't want to pull in. So I decided to build > http://cran.us.r-project.org/bin/linux/redhat/SRPMS/R-2.2.1-1.fc3.src.rpm
>
> When I ran rpmbuild, one of the make-check tests failed.
>
> from /BUILD/R-2.2.1/tests/p-r-random-tests.Rout.fail:
>
> > dkwtest("weibull",shape = 1)
> weibull(shape = 1) FAILED
> Error in dkwtest("weibull", shape = 1) : dkwtest failed
> Execution halted
>
> I was able to build the rpm after removing "--enable-r-shlib" from the spec file.
>
> http://cran.us.r-project.org/bin/linux/redhat/SRPMS/ReadMe says:
> "The new SRPM for R 2.1.1 builds the shared library version of R. This is, unfortunately, slower than the version without the shared library."
>
> It doesn't say why, if it's slower, it builds it that way. Can anyone shed some light on the subject?

Well, I did it because people were asking for it. You need the shared library to use embedded R or to use a GUI. I considered that most people using R on the command line would pay the speed penalty (or not notice) and that people who really need the speed could always compile their own. The penalty is not so bad (~10%) on x86_64 anyway.

Martyn --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Provide both shlib and standard versions of R?
On Mon, 2006-01-16 at 00:45 -0600, Bo Peng wrote: > > then either build your own with correct options or talk to your > > distribution's packaging team.
>
> It seems that my knowledge about this option is outdated. When I first encountered this problem two years ago, the R/rpm distribution came with no libR.so. I was told that --enable-R-shlib would lead to 10% - 20% performance loss, and I had to re-compile R if I need to embed it.
>
> So I guess performance is no longer an issue and shared libraries are provided as default on all platforms now? I certainly welcome this change and I apologize for my unfounded accusation to R.

What changed was that a sufficient number of people asked me to create an RPM with the shared library, and I changed my mind. The aim of the precompiled binaries is to satisfy most of the people most of the time, and when I get repeated requests for the same feature, I have to bear that in mind. People who require optimal performance can still compile their own.

As for the idea of compiling two distinct binary packages for R, I am not especially keen, and not just out of laziness. The problem is that R packages depend on libR.so, when it exists, so if you uninstall R with a shared library and then install R without the shared library you get a broken system. You can look at the CAPABILITIES file in the same directory as the RPM to see how it was compiled.

Martyn

> BTW, shouldn't --enable-R-shlib be yes by default during ./configure?
>
> Cheers,
> Bo

--- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Bug report - duplicate row names with as.data.frame()
On Thu, 2018-03-01 at 09:36 -0500, Ron wrote: > Hello, > > I'd like to report what I think is a bug: using as.data.frame() we can > create duplicate row names in a data frame. R version 3.4.3 (current stable > release). > > Rather than paste code in an email, please see the example formatted code > here: > https://stackoverflow.com/questions/49031523/duplicate-row-names-in-r-using-as-data-frame > > I posted to StackOverflow, and consensus was that we should proceed with > this as a bug report.

Yes, that is definitely a bug. The end of the as.data.frame.matrix method has:

attr(value, "row.names") <- row.names
class(value) <- "data.frame"
value

Changing this to:

class(value) <- "data.frame"
row.names(value) <- row.names
value

ensures that the row.names<-.data.frame method is called, with its built-in check for duplicate names. There are quite a few as.data.frame methods, so this could be a recurring problem. I will check.

Martyn

> Thanks, > Ron

__ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
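The reordering works because only the replacement function validates its argument: writing the "row.names" attribute directly bypasses all checking. A small illustrative sketch (not the patch itself) of the check the fix restores:

```r
## Setting the attribute directly silently creates duplicate row names,
## whereas the row.names<-.data.frame replacement method rejects them.
d <- data.frame(x = 1:2)
attr(d, "row.names") <- c("a", "a")         # silently accepted
hasDups <- anyDuplicated(row.names(d)) > 0  # TRUE: invariant broken
res <- try(`row.names<-`(d, c("a", "a")), silent = TRUE)
failed <- inherits(res, "try-error")        # TRUE: duplicates rejected
```
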
Re: [Rd] Makevars CXX_STD variable ignored when no *.cpp files in src/
You are not the first person to report this, but last time when I tried it myself I could not reproduce the bug. Let me try it again. Martyn On Fri, 2018-03-02 at 09:26 +0100, Radosław Piliszek wrote: > Hello! > > I might have found a bug in the way that R handles Makevars file when > building a package. > > Value of variable CXX_STD is ignored - i.e. R does not use the > correct > compiler/flags - if there are no *.cpp files directly in the src/ > directory (e.g. all *.cpp are in subdirectories, and OBJECTS variable > is set accordingly). Adding a bogus *.cpp file fixes this issue. > However, this is not very obvious (I would dare saying it is not > obvious at all) and I spent quite a time checking what went wrong > after I organized my src files better. :-) > > Kind regards, > Radosław Piliszek > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Makevars CXX_STD variable ignored when no *.cpp files in src/
Radoslaw sent me a reproducible example. I have been able to identify the problem and fix it. I have copied in Alexander Loboda who previously reported the same problem. Briefly, the tools package relied on the presence of files with extension .cpp or .cc in the src directory to determine whether the C++ compiler is required. In the absence of any such files, the code to set the C++ compiler flags correctly was never run. Martyn On Fri, 2018-03-02 at 09:42 +, Martyn Plummer wrote: > You are not the first person to report this, but last time when I tried > it myself I could not reproduce the bug. Let me try it again. > > Martyn > > On Fri, 2018-03-02 at 09:26 +0100, Radosław Piliszek wrote: > > Hello! > > > > I might have found a bug in the way that R handles Makevars file when > > building a package. > > > > Value of variable CXX_STD is ignored - i.e. R does not use the > > correct > > compiler/flags - if there are no *.cpp files directly in the src/ > > directory (e.g. all *.cpp are in subdirectories, and OBJECTS variable > > is set accordingly). Adding a bogus *.cpp file fixes this issue. > > However, this is not very obvious (I would dare saying it is not > > obvious at all) and I spent quite a time checking what went wrong > > after I organized my src files better. :-) > > > > Kind regards, > > Radosław Piliszek > > > > __ > > R-devel@r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-devel > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
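For anyone hitting this before the fix propagates, the triggering layout looks roughly like the following src/Makevars (file names hypothetical):

```make
# src/Makevars -- all C++ sources live in a subdirectory, so no *.cpp
# file sits directly in src/; before the fix, R CMD INSTALL then never
# ran the code that honours the CXX_STD request.
CXX_STD = CXX11
SOURCES = sub/impl.cpp
OBJECTS = sub/impl.o
```

The workaround mentioned in the original report is to add a bogus *.cpp file directly under src/ so the C++ compiler is detected.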
Re: [Rd] package test failed on Solaris x86 -- help needed for debugging
Dear Thomas, Is this the deSolve package? http://www.r-project.org/nosvn/R.check/r-patched-solaris-x86/deSolve-00check.html I can help you with that. It does pass R CMD check on my OpenSolaris installation, but I am getting some compiler warnings. I will send you details. Martyn On Thu, 2010-09-16 at 11:16 +0200, Thomas Petzoldt wrote: > Dear R developers, > > we have currently a 'mysterious' test problem with one package that > successfully passed the tests on all platforms, with the only exception > of Solaris x86 where obviously one of our help examples breaks the CRAN > test. > > As we don't own such a machine I want to ask about a possibility to run > a few tests on such a system: > > r-patched-solaris-x86 > > An even more recent version of R on the same OS (Solaris 10) and with > the same compiler (Sun Studio 12u1) would help also. > > Any assistance is appreciated > > > Thomas Petzoldt > > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] GPL and R Community Policies (Rcpp)
Dear Dominick, The R community does not have a conflict resolution mechanism. We are quite used to disputes that end with one party, usually a recognized authority, saying "No, you are objectively, verifiably wrong". We cannot, as a group, deal with anything else. Everybody knows that you have an acrimonious relationship with the current developers of Rcpp (and if they don't then a cursory look at the rcpp-devel archives will confirm this). The issue of the acknowledgment that you are complaining about is merely a symptom of the further deterioration of this relationship. Appeals to authority or public opinion are not going to help you obtain satisfaction. Having your free software taken up and developed by other people is not the worst thing that can happen. For a free software developer, the worst thing that can happen is that they get run over by a proverbial bus and their software dies with them. Martyn On Wed, 2010-12-01 at 13:21 -0500, Dominick Samperi wrote: > This post asks members of the R community, users and developers, > to comment on issues related to the GNU Public License > and R community policies more generally. > > The GPL says very little about protecting the the rights of original > contributors by not disseminating misleading information about them. > Indeed, for pragmatic reasons it effectively assumes that original authors > have no rights regarding their GPL-ed software, and it implicitly leaves > it up to the community of developers and users to conduct themselves in a > fair and > reasonable manner. > > After discussing these matters with Richard Stallman I think > we more-or-less agreed that a GPL "copyright" notice is nothing > more than a way to deputise people to serve as protectors of the > principles of the Free Software Foundation (FSF). It has nothing to > do with protecting the "rights" or the "ideas" of original > contributors. 
There is no peer review, no requirement to > explain your contributions, and anybody can essentially > do as they please with the software provided they retain > the copyright/FSF deputy notice---of course, you can > always work-around this last restriction by modifying the > implementation and placing it in a new file, because > nobody is checking (GPL doesn't require it). > > The GPL is all about "freedom", not responsibility. It is entirely > focused on "deregulation", not on the protection of intellectual > property or professional reputations. It serves the useful purpose > of making great software more widely available, but it does not > dictate how people should behave and should not be used > as a moral compass. (See recent book titled > "You are not a gadget: a manifesto", a rejoinder to the > GNU manifesto.) > > As a counterbalance I think the community of developers and > users need to play a more active role in the evolution of > shared values and expectations. In this spirit I respectfully request > that the R community consider the following. > > The author line of the latest release of the R package > Rcpp (0.8.9) was revised as follows: > > From: "based on code written during 2005 and 2006 by Dominick Samperi" > > To: "a small portion of the code is based on code written during 2005 and > 2006 by Dominick Samperi" > > As it is highly unusual (and largely impossible) to quantify the relative > size of the the contribution made by each author of GPL'ed software, this > has > effectively changed an acknowledgment into a disparaging remark. It > is also misleading, because I am the original creator of the Rcpp library > and package (it was forked by Dirk Eddelbuettel and is now effectively > part of R core development). Incidentally, the README file for > Rcpp 0.6.7 shows that my contributions and influence were not > confined to the period 2005-2006. 
> > A look at the change history of Rcpp would quickly reveal that to be > fair other authors of Rcpp (and perhaps other R package authors) > should have their contributions qualified with "a small portion of the > code", > or "administered by", but this is precisely the kind of monitoring that > inspired Richard Stallman to say we must "chuck the masks" in the > GNU Manifesto. > > It is obviously a great benefit for the R community to have Rcpp actively > supported by the R core team. I am very grateful for this. What I do > have a problem with is the fact that my contributions are disparaged > by people who have benefited from my past work. > > It seems to me that there are two possible resolutions. First, if my > name is used in the Rcpp package it should be used to provide fair, > accurate, and courteous acknowledgement for my past contributions. > Second, if this is not possible, then my name should not be used at all. > If the second option is selected then the only place my name should > appear is in the copyright ("deputy") notices. > > Incidentally, the fact that the word "copyright" is profoundly misleading in > the context of GPL is not a new idea, and
Re: [Rd] Build R with MKL and ICC
On Wed, 2015-09-02 at 20:49 +0200, arnaud gaboury wrote: > On Wed, Sep 2, 2015 at 7:35 PM, arnaud gaboury > wrote: > > After a few days of reading and headache, I finally gave a try at > > building R from source with Intel MKL and ICC. Documentation and posts > > on this topic are rather incomplete, sometime fantasist et do not give > > much explanations about configure options. > > As I am not sure if mine is correct, I would appreciate some advices and > > hints. > > > > OS: Fedora 22 > > parallel_studio_xe_2016 > > Hardware : 8 Thread(s) per core: 2 Vendor ID: GenuineIntel Model name: > > Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz - Sandybridge > > R-3.2.2 > > > > Here is my build configuration: > > > > --- > > source /opt/intel/compilers_and_libraries_2016/linux/mkl/bin/mklvars.sh > > intel64 > > source /opt/intel/bin/compilervars.sh intel64 > > _mkllibpath=$MKLROOT/lib/intel64 > > _icclibpath=$MKLROOT/linux/compiler/lib > > export LD_LIBRARY_PATH=${_mkllibpath}:${_icclibpath} > > export MKL="-L${_mkllibpath} -L${_icclibpath} -lmkl_intel_lp64 > > -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" > > export CC="icc" > > export F77="ifort" > > export CXX="icpc" > > export AR="xiar" > > export LD="xild" > > export CFLAGS="-O3 -ipo -openmp -parallel -xAVX" > > export CXXFLAGS="-O3 -ipo -openmp -parallel -xAVX" > > export FFLAGS="-O3 -ipo -openmp -parallel -xAVX" > > export MAIN_LDFLAGS='-openmp' > > ./configure --with-lapack --with-blas="$MKL" --enable-R-shlib > > --enable-memory-profiling --enable-openmp --enable-BLAS-shlib > > --enable-lto F77=${F77} FC=${F77} > > > > > > After I run ./configure, it seems from config.log everything is fine: > > > > checking for dgemm_ in > > result: yes > > > > checking whether double complex BLAS can be used > > result: yes > > > > checking whether the BLAS is complete > > result: yes > > > > The only error I can see is ld complaining about not finding -lRblas > > > > > > Then run $ make with no errors. 
> > Now, with no $ make install, I get this: > > > > > > $ ldd bin/exec/R > > linux-vdso.so.1 (0x7ffe073f3000) > > libR.so => /usr/lib64/R/lib/libR.so (0x7f43939e6000) > > libRblas.so => not found > > libm.so.6 => /lib64/libm.so.6 (0x7f43936de000) > > libiomp5.so => > > /opt/intel/compilers_and_libraries_2016.0.109/linux/compiler/lib/intel64/libiomp5.so > > (0x7f439339c000) > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f4393185000) > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f4392f69000) > > libc.so.6 => /lib64/libc.so.6 (0x7f4392ba8000) > > libdl.so.2 => /lib64/libdl.so.2 (0x7f43929a4000) > > libblas.so.3 => /lib64/libblas.so.3 (0x7f439274b000) > > libgfortran.so.3 => /lib64/libgfortran.so.3 (0x7f439241f000) > > libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f43921e) > > libreadline.so.6 => /lib64/libreadline.so.6 (0x7f4391f96000) > > libtre.so.5 => /lib64/libtre.so.5 (0x7f4391d85000) > > libpcre.so.1 => /lib64/libpcre.so.1 (0x7f4391b15000) > > liblzma.so.5 => /lib64/liblzma.so.5 (0x7f43918ef000) > > libbz2.so.1 => /lib64/libbz2.so.1 (0x7f43916de000) > > libz.so.1 => /lib64/libz.so.1 (0x7f43914c8000) > > librt.so.1 => /lib64/librt.so.1 (0x7f43912c) > > libicuuc.so.54 => /lib64/libicuuc.so.54 (0x7f4390f2e000) > > libicui18n.so.54 => /lib64/libicui18n.so.54 (0x7f4390ad7000) > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f43908b5000) > > /lib64/ld-linux-x86-64.so.2 (0x5557e2243000) > > libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7f439068a000) > > libicudata.so.54 => /lib64/libicudata.so.54 (0x7f438ec5f000) > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f438e8dc000) > > --- > > > > > > Now a few questions: > > > EDIT > > 1- am I not supposed to see these libraries in the list ? > > libmkl_intel_lp64.so > > libmkl_intel_thread.so > > libmkl_core.so You are looking in the wrong place. With the option --enable-BLAS-shlib, R is linked to MKL via the library libRblas.so which you will find in the directory "lib" after building R. 
> > Or do I need to run $make install before ldd? > > > > 2- when visiting Intel MKL link advisor[0], here is what I get as > > configure and make options: > > Linking: -L${MKLROOT}/lib/intel64 -lmkl_intel_ilp64 -lmkl_core > > -lmkl_intel_thread -lpthread -lm > > Compiler options: -DMKL_ILP64 -qopenmp -I${MKLROOT}/include > > > > What is the difference between -openmp and -qopenmp? Shall I use > > indeed the above compiler options? The option -qopenmp replaces -openmp, which is deprecated. This is in the man page for icc. Martyn > > Thank you for help in this difficult topic for me. > > > > -
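A quick way to confirm Martyn's point about where the MKL linkage lives (a sketch; paths are relative to the build directory, and the grep output depends on the local MKL installation):

```shell
# After 'make', the BLAS shim built with --enable-BLAS-shlib lives in lib/,
# not in bin/exec/R, so inspect it directly:
ldd lib/libRblas.so | grep -i mkl
# Expect entries such as libmkl_intel_lp64.so and libmkl_core.so here.
```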
Re: [Rd] Build R with MKL and ICC
On Sat, 2015-09-05 at 11:53 +0200, arnaud gaboury wrote: > On Fri, Sep 4, 2015 at 5:58 PM, Martyn Plummer wrote: > > On Wed, 2015-09-02 at 20:49 +0200, arnaud gaboury wrote: > >> On Wed, Sep 2, 2015 at 7:35 PM, arnaud gaboury > >> wrote: > >> > After a few days of reading and headache, I finally gave a try at > >> > building R from source with Intel MKL and ICC. Documentation and posts > >> > on this topic are rather incomplete, sometime fantasist et do not give > >> > much explanations about configure options. > >> > As I am not sure if mine is correct, I would appreciate some advices and > >> > hints. > >> > > >> > OS: Fedora 22 > >> > parallel_studio_xe_2016 > >> > Hardware : 8 Thread(s) per core: 2 Vendor ID: GenuineIntel Model name: > >> > Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz - Sandybridge > >> > R-3.2.2 > >> > > >> > Here is my build configuration: > >> > > >> > --- > >> > source /opt/intel/compilers_and_libraries_2016/linux/mkl/bin/mklvars.sh > >> > intel64 > >> > source /opt/intel/bin/compilervars.sh intel64 > >> > _mkllibpath=$MKLROOT/lib/intel64 > >> > _icclibpath=$MKLROOT/linux/compiler/lib > >> > export LD_LIBRARY_PATH=${_mkllibpath}:${_icclibpath} > >> > export MKL="-L${_mkllibpath} -L${_icclibpath} -lmkl_intel_lp64 > >> > -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" > >> > export CC="icc" > >> > export F77="ifort" > >> > export CXX="icpc" > >> > export AR="xiar" > >> > export LD="xild" > >> > export CFLAGS="-O3 -ipo -openmp -parallel -xAVX" > >> > export CXXFLAGS="-O3 -ipo -openmp -parallel -xAVX" > >> > export FFLAGS="-O3 -ipo -openmp -parallel -xAVX" > >> > export MAIN_LDFLAGS='-openmp' > >> > ./configure --with-lapack --with-blas="$MKL" --enable-R-shlib > >> > --enable-memory-profiling --enable-openmp --enable-BLAS-shlib > >> > --enable-lto F77=${F77} FC=${F77} > >> > > >> > > >> > After I run ./configure, it seems from config.log everything is fine: > >> > > >> > checking for dgemm_ in > >> > result: yes > >> > > >> > checking whether 
double complex BLAS can be used > >> > result: yes > >> > > >> > checking whether the BLAS is complete > >> > result: yes > >> > > >> > The only error I can see is ld complaining about not finding -lRblas > >> > > >> > > >> > Then run $ make with no errors. > >> > Now, with no $ make install, I get this: > >> > > >> > > >> > $ ldd bin/exec/R > >> > linux-vdso.so.1 (0x7ffe073f3000) > >> > libR.so => /usr/lib64/R/lib/libR.so (0x7f43939e6000) > >> > libRblas.so => not found > >> > libm.so.6 => /lib64/libm.so.6 (0x7f43936de000) > >> > libiomp5.so => > >> > /opt/intel/compilers_and_libraries_2016.0.109/linux/compiler/lib/intel64/libiomp5.so > >> > (0x7f439339c000) > >> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f4393185000) > >> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f4392f69000) > >> > libc.so.6 => /lib64/libc.so.6 (0x7f4392ba8000) > >> > libdl.so.2 => /lib64/libdl.so.2 (0x7f43929a4000) > >> > libblas.so.3 => /lib64/libblas.so.3 (0x7f439274b000) > >> > libgfortran.so.3 => /lib64/libgfortran.so.3 (0x7f439241f000) > >> > libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f43921e) > >> > libreadline.so.6 => /lib64/libreadline.so.6 (0x7f4391f96000) > >> > libtre.so.5 => /lib64/libtre.so.5 (0x7f4391d85000) > >> > libpcre.so.1 => /lib64/libpcre.so.1 (0x7f4391b15000) > >> > liblzma.so.5 => /lib64/liblzma.so.5 (0x7f43918ef000) > >> > libbz2.so.1 => /lib64/libbz2.so.1 (0x7f43916de000) > >> > libz.so.1 => /lib64/libz.so.1 (0x7f43914c8000) > >> > librt.so.1 => /lib64/librt.so.1 (0x7f43912c) > >> > li
Re: [Rd] authorship and citation
On 06 Oct 2015, at 14:09, S Ellison wrote: >> The former co-author contributed, so he is still author and probably >> copyright >> holder and has to be listed among the authors, otherwise it would be a CRAN >> policy violation ... > > It's a bit of a philosophical question right now, but at some point in a > developing package's life - particularly one that starts small but is > subsequently refactored in growth - there may be no code left that was > contributed by the original developer. That is indeed the philosophical question of the ship of Theseus. Martyn > Is there a point at which the original developer should not stay on the > author list? > > S Ellison > > > > *** > This email and any attachments are confidential. Any u...{{dropped:19}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Small request of a feature improvement in the next version of R
On Mon, 2015-11-16 at 20:11 -0500, Paul Grosu wrote: > Hi Everyone, > > Sorry to bother the list with this small request, but I've run into this > issue and was wondering if it could be fixed in the next version of R. > Sorry if it was raised in a previous thread: > > So when I try the following I get an error: > > > m <- list() > > m[["A3V6HVSALQ835D"]][['profiles']] <- 3 > > m[["A3V6HVSALQ835D"]][['stars']] <- c(1, 23) > Error in m[["A3V6HVSALQ835D"]][["stars"]] <- c(1, 23) : > more elements supplied than there are to replace > > As does the following: > > > m <- list() > > m[["A3V6HVSALQ835D"]][['profiles']] <- c() > > m[["A3V6HVSALQ835D"]][['stars']] <- c() > > m[["A3V6HVSALQ835D"]][['profiles']] <- 3 > > m[["A3V6HVSALQ835D"]][['stars']] <- c(1, 23) > Error in m[["A3V6HVSALQ835D"]][["stars"]] <- c(1, 23) : > more elements supplied than there are to replace > > But when I reverse the order, I don't: > > > m <- list() > > m[["A3V6HVSALQ835D"]][['stars']] <- c(1, 23) > > m[["A3V6HVSALQ835D"]][['profiles']] <- 3 > > As doesn't the following, with the order reversed for the assignment: > > > m <- list() > > m[["A3V6HVSALQ835D"]][['profiles']] <- c() > > m[["A3V6HVSALQ835D"]][['stars']] <- c() > > m[["A3V6HVSALQ835D"]][['stars']] <- c(1, 23) > > m[["A3V6HVSALQ835D"]][['profiles']] <- 3 > > And when I instantiate it in this way, it does not with the original order: > > > m <- list() > > m[["A3V6HVSALQ835D"]][['profiles']] <- c() > > m[["A3V6HVSALQ835D"]][['stars']] <- list() > > m[["A3V6HVSALQ835D"]][['profiles']] <- 3 > > m[["A3V6HVSALQ835D"]][['stars']] <- c(1, 23) > > The request is so that order-specific assignments would not throw an error, > and I am using version 3.2.2 of R. Your example combines two nested calls to the replacement function "[[<-". It is a lot easier to understand what is going wrong if you break this down into two separate function calls. 
First, the element of m that you want to modify is NULL: > m <- list() > m[["A3V"]] NULL So the expression > m[["A3V"]][['profile']] <- 3 is equivalent to: > tmp <- NULL > tmp[['profile']] <- 3 > m[["A3V"]] <- tmp Inspecting the result: > m $A3V profile 3 > class(m$A3V) [1] "numeric" So m$A3V is a numeric vector and not, as you expected, a list. This behaviour of "[[<-" when applied to NULL objects is documented on the help page: See help("[[<-") The solution is to create m[["A3V"]] as a list before modifying its elements: > m <- list() > m[["A3V"]] <- list() ... Martyn > Thank you, > Paul > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
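Putting Martyn's fix together, a minimal sketch of the corrected idiom (shortened hypothetical key names):

```r
m <- list()
m[["A3V"]] <- list()                 # create the inner container as a list first
m[["A3V"]][["profiles"]] <- 3
m[["A3V"]][["stars"]] <- c(1, 23)    # no error now: the inner object is a list,
                                     # so [[<- adds one element holding the vector
str(m)
```

The order of the assignments no longer matters once the inner object is a list rather than NULL.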
Re: [Rd] Multiple cores are used in simple for loop
On Fri, 2016-01-15 at 15:03 +0100, Daniel Kaschek wrote: > Dear all, > > I run different R versions (3.2.1, 3.2.2 and 3.2.3) on different > platforms (Arch, Ubuntu, Debian) with a different number of available > cores (24, 4, 24). The following line produces very different behavior > on the three machines: > > for(i in 1:1e6) {n <- 100; M <- matrix(rnorm(n^2), n, n); M %*% M} > > On the Ubuntu and Arch machine one core is used, but on the Debian > machine ALL cores are used with heavy "kernel time" vs. "normal time" > (red vs. green in htop). It seems that the number of cores used on > Debian is related to the size of the matrix. Reducing n from 100 to 4 > causes four cores to work. It depends on what backend R is using for linear algebra. Some will split large matrix calculations over multiple threads. On Debian, you can set the blas and lapack libraries to the implementation of your choice. https://wiki.debian.org/DebianScience/LinearAlgebraLibraries As far as I know reference blas and lapack are still single threaded. Alternatively, you may be able to control the maximum number of threads by setting and exporting an appropriate environment variable depending on what backend you are using, e.g. OPENBLAS_NUM_THREADS or MKL_NUM_THREADS. Martyn > A similar problem persists with the parallel package and mclapply(): > > library(parallel) > out <- mclapply(1:1e6, function(i) { n <- 100; M <- matrix(rnorm(n^2), > n, n); M %*% M }, mc.cores = 24) > > On Arch and Debian all 24 cores run and show a high kernel time vs. > normal time (all CPU bars in htop are 80% red). With mc.cores = 4 on > the Ubuntu system however, all four cores run at full load with almost > no kernel time but full normal time (all bars are green). > > Have you seen this problem before? Does anybody know how to fix it? 
> > Cheers, > Daniel > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
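A sketch of the environment-variable approach Martyn mentions (the variable name depends on which BLAS backend R is linked against):

```shell
# For OpenBLAS; use MKL_NUM_THREADS for MKL, or OMP_NUM_THREADS as a
# generic fallback for OpenMP-threaded backends.
export OPENBLAS_NUM_THREADS=1
R --vanilla -q -e 'n <- 100; M <- matrix(rnorm(n^2), n, n); invisible(M %*% M)'
```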
Re: [Rd] Typo in C++11 Section of Writing R Extensions
Yes, you are quite correct, and this is the right place for reporting errors in the manuals. I have fixed it. Martyn From: R-devel on behalf of Jonathan Lisic Sent: 09 February 2016 20:31 To: r-devel@r-project.org Subject: [Rd] Typo in C++11 Section of Writing R Extensions Hi, I was reading through the R extensions website and noticed that the example code at the end of the section makes references to CXX11XSTD and CXX11XFLAGS, shouldn’t these be CXX1XSTD and CXX1XFLAGS respectively? (on the second and fourth line) CXX1X=`"${R_HOME}/bin/R" CMD config CXX11X` CXX1XSTD=`"${R_HOME}/bin/R" CMD config CXX11XSTD` CXX="$(CXX1X) $(CXX1XSTD)" CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXX11XFLAGS` AC_LANG(C++) Sorry if this has been reported before, Jonathan Lisic __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
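With Jonathan's correction applied (and assuming the CXX11X on the first line was the same typo for CXX1X), the configure.ac fragment would read:

```sh
CXX1X=`"${R_HOME}/bin/R" CMD config CXX1X`
CXX1XSTD=`"${R_HOME}/bin/R" CMD config CXX1XSTD`
CXX="$(CXX1X) $(CXX1XSTD)"
CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXX1XFLAGS`
AC_LANG(C++)
```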
Re: [Rd] problem plotting "ts" in a data.frame
On Tue, 2016-02-09 at 16:56 -0600, Spencer Graves wrote: > Hello: > > >I'm having trouble plotting an object of class "ts" that is in a > data.frame. I can do it with(data.frame, plot(...)) but not with > plot(..., data.frame); see the example below. The plot function is generic so the actual function call depends on what arguments you give it: plot(y.ts) calls the plot.ts method from the stats package, whereas plot(y.ts ~ x) calls the plot.formula method from the graphics package. Only the plot.formula method has a data argument. What happens when you call plot(y1~x1, data=XY) is that first plot.formula is called and then this calls plot.ts with two arguments (XY$x1 and XY$y1) that are expected to be compatible time series. However, they are not. In fact x1 is numeric and when coerced to a time series it has no time points in common with y1 (which starts at time t=5). Hence the warning about non-intersecting series. Since non-overlapping series seems to be fatal in this context it might be a good idea to give an error at this point. Otherwise I think the function is behaving correctly. Martyn >This work around gets me past this problem. However, I thought > the R Core team might want to know about this if they don't already. > > >Thanks for all your work in making R and CRAN the great tools > that they are -- and I apologize for wasting your time if you are > already familiar with this. > > >Spencer Graves > > > > y.ts <- ts(2:4, 5) > > XY <- data.frame(x1=6:8, y1=y.ts) > > plot(y1~x1, XY) > Error in plot.window(...) 
: need finite 'xlim' values > In addition: Warning messages: > 1: In .cbind.ts(list(...), .makeNamesTs(...), dframe = dframe, union = > FALSE) : >non-intersecting series > 2: In min(x) : no non-missing arguments to min; returning Inf > 3: In max(x) : no non-missing arguments to max; returning -Inf > 4: In min(x) : no non-missing arguments to min; returning Inf > 5: In max(x) : no non-missing arguments to max; returning -Inf > > plot(y1, XY) > Error in plot(y1, XY) : object 'y1' not found > > with(XY, plot(y1)) > > sessionInfo() > R version 3.2.3 (2015-12-10) > Platform: x86_64-apple-darwin13.4.0 (64-bit) > Running under: OS X 10.11.2 (El Capitan) > > locale: > [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 > > attached base packages: > [1] stats graphics grDevices utils datasets methods base > > loaded via a namespace (and not attached): > [1] tools_3.2.3 > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
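Martyn's dispatch explanation can be checked directly with Spencer's own objects (a sketch):

```r
y.ts <- ts(2:4, 5)                    # series starting at time t = 5
XY <- data.frame(x1 = 6:8, y1 = y.ts)
class(XY$y1)        # still "ts": the time-series attributes survive in the frame
with(XY, plot(y1))  # works: calls plot.ts directly on the series
plot(y1 ~ x1, XY)   # fails: plot.formula re-dispatches to plot.ts, which coerces
                    # x1 to a time series sharing no time points with y1
```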
Re: [Rd] Problem building R-3.2.4
This was reported as a bug earlier today and has been fixed in R- patched: https://bugs.r-project.org/bugzilla/show_bug.cgi?id=16755 Martyn On Thu, 2016-03-10 at 08:51 -0800, Mick Jordan wrote: > I am trying to build R-3.2.4 on an Oracle Enterprise Linux system, > where > I have previously built R-3.1.3 and predecessors without problems. I > ran > "./configure --with-x=no" ok. The make fails in src/extra/xz with > what > looks like a Makefile problem: > > liblzma.a: $(liblzma_a_OBJECTS) > $rm -f $@ > $(AR) -cr $@ $(liblzma_a_OBJECTS) > $(RANLIB) $@ > > > What I see in the make log is: > > gcc -std=gnu99 -I./api -I. -I../../../src/include > -I../../../src/include > -I/usr/local/include -DHAVE_CONFIG_H -fopenmp -g -O2 -c x86.c -o > x86.o > m -f liblzma.a > make[4]: m: Command not found > make[4]: *** [liblzma.a] Error 127 > make[4]: Leaving directory `/tmp/R-3.2.4/src/extra/xz' > make[3]: *** [R] Error 2 > make[3]: Leaving directory `/tmp/R-3.2.4/src/extra/xz' > make[2]: *** [make.xz] Error 2 > make[2]: Leaving directory `/tmp/R-3.2.4/src/extra' > make[1]: *** [R] Error 1 > make[1]: Leaving directory `/tmp/R-3.2.4/src' > make: *** [R] Error 1 > > I'm very suspicious of the "$rm -f @a" line, which also appears in > the > Makefile.in. Seems like $r has resolved to empty leading to the > command > "m -f liblzma.a" > > Mick Jordan > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel--- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
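The suspect rule with the stray `$` removed — make expands `$r` (an empty variable) followed by the literal `m`, producing the bogus command `m -f liblzma.a`. A sketch of the likely fix (the actual R-patched change may differ):

```make
liblzma.a: $(liblzma_a_OBJECTS)
	@rm -f $@
	$(AR) -cr $@ $(liblzma_a_OBJECTS)
	$(RANLIB) $@
```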
Re: [Rd] Assignment operator and deep copy for calling C functions
This is discussed in the "Writing R Extensions" manual section 5.9.10: Named objects and copying. .Call does not copy its arguments and it is not safe to modify them, as you have found, since multiple symbols may refer to the same object. If you are going to modify an argument to .Call you should take a deep copy with "duplicate" first, and your function should return the modified copy. It is probably also worth mentioning Rcpp, which provides a friendlier interface to R than the C API. In Rcpp you use clone() to force a deep copy. Martyn On Tue, 2016-04-05 at 15:38 +, Thorsten R wrote: > Hi All, > > i have a problem in understanding what the assignment operator '<- > ' really is doing. > If i create two numeric arrays in R and copy one into the other with > '<-' and > afterwards change one array by calling a C function, both arrays are > changed! > > The problem I am facing can easily be seen in the following example: > (The following R code and the C function is attached and the C > function can be compiled with R CMD SHLIB test.c.) > > First include the c function: > dyn.load("test.so") > > Let's start with 2 arrays: > a <- rep(0, 5) > b <- rep(1, 5) > > Now print the memory addresses: > print(sprintf("a: %sb: %s", tracemem(a), tracemem(b) )) > [1] "a: <0x29d34e0>b: <0x29946e0>" > > oky, they are different! Now copy a into b and print again the > addresses: > b <- a > print(sprintf("a: %sb: %s", tracemem(a), tracemem(b) )) > [1] "a: <0x29d34e0>b: <0x29d34e0>" > > Ugh they are the same. 
If I now call my C function, which writes > 0,1,2,3,4 into an array of 5 doubles, > of course 'both' arrays are changed: > .Call("test", b) > print( cbind(a,b) ) > a b > [1,] 0 0 > [2,] 1 1 > [3,] 2 2 > [4,] 3 3 > [5,] 4 4 > > > If i just change one element of b instead of calling the c function, > then a full copy of b is made: > a <- rep(0, 5) > b <- rep(1, 5) > print(sprintf("a: %sb: %s", tracemem(a), tracemem(b) )) > [1] "a: <0x2994b58>b: <0x2912ff8>" > > b <- a > print(sprintf("a: %sb: %s", tracemem(a), tracemem(b) )) > [1] "a: <0x2994b58>b: <0x2994b58>" > > b[1] <- 5 > print(sprintf("a: %sb: %s", tracemem(a), tracemem(b) )) > "a: <0x2994b58>b: <0x29134d8>" > > print( cbind(a,b) ) > a b > [1,] 0 5 > [2,] 0 0 > [3,] 0 0 > [4,] 0 0 > [5,] 0 0 > > > So what is happening here? What is the 'right' way to ensure a deep > copy before using Call? > I am currently using a for loop and copy every single element. > > Thanks! > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel--- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
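A sketch of the safe pattern Martyn describes, using the R C API (a hypothetical replacement for the poster's test.c, not his original code):

```c
#include <R.h>
#include <Rinternals.h>

/* Return a modified *copy* of the input instead of writing into it. */
SEXP test_safe(SEXP x)
{
    SEXP y = PROTECT(duplicate(x));   /* deep copy: the caller's object is untouched */
    double *p = REAL(y);
    R_xlen_t n = XLENGTH(y);
    for (R_xlen_t i = 0; i < n; i++)
        p[i] = (double) i;            /* write 0,1,2,... into the copy */
    UNPROTECT(1);
    return y;
}
```

On the R side this is called as b <- .Call("test_safe", b), so the rebinding is explicit. In Rcpp the equivalent deep copy is NumericVector y = clone(x).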
Re: [Rd] can't build from source: error: template with C linkage
This looks like the result of including a C++ system header inside an extern "C" block. There is no evidence of this happening in the current version 2.22.1. However, it did happen in the previous version 2.22 via the chain of inclusions: MCMCglmmcc.h -> cs.h -> R.h -> various C++ system headers See Writing R Extensions P 108. I would check that the people reporting this bug are using the latest version. If not then you have already fixed this. Martyn On Fri, 2016-08-19 at 06:25 -0500, Dirk Eddelbuettel wrote: > Jarrod, > > On 19 August 2016 at 04:43, Jarrod Hadfield wrote: > > > > Hi All, > > > > Users have contacted me because they can not build MCMCglmm from > > source. All are using R 3.3.0 on various machines with different > > compilers > > > > gcc (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 > > g++ (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4 > > Mac OS X El Capitan (version/compiler unspecified) > > > > The issue seems to be with mixing C/C++ with the repeated error: > > > > /usr/include/c++/5/bits/cpp_type_traits.h:118:3: error: template > > with C linkage > > template > > > > I see a bug report has been filed for the CRAN package tgp that was > > experiencing similar problems, but it is not clear whether it has > > been resolved. > > > > Any help would be greatly appreciated. > > I am on Ubuntu 16.04 with gcc/g++ 5.4.0 and I _cannot_ reproduce > this. > > Your package installs fine [1]. > > Dirk > > > [1] For some definition of 'fine' which overlooks pages of compiler > _warnings_. > Still no errors. A log is below. Note that I a) had to turn off '- > Wall > -pedantic' which I normally use, and add two explicit 'do not warn' > switches. The rest is stock Ubuntu behaviour. > > edd@max:/tmp/mcmcglmm$ R_LIBS_USER=/tmp/RcppDepends/lib R CMD INSTALL > MCMCglmm_2.22.1.tar.gz > R_LIBS_USER=/tmp/RcppDepends/lib R CMD INSTALL > MCMCglmm_2.22.1.tar.gz > * installing to library ‘/tmp/RcppDepends/lib’ > * installing *source* package ‘MCMCglmm’ ... 
> ** package ‘MCMCglmm’ successfully unpacked and MD5 sums checked > ** libs > gcc -I/usr/share/R/include -DNDEBUG -fpic -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -O3 -Wall -pipe -pedantic -std=gnu99 -O3 -pipe -std=gnu99 -Wno-maybe-uninitialized -Wno-unused-but-set-variable -c cs_add.c -o cs_add.o > [the same gcc invocation is repeated for cs_addR.c, cs_cbind.c, cs_chol.c, cs_cholsol.c, cs_amd.c, cs_compress.c, cs_cov2cor.c, cs_counts.c and further source files; the log is truncated in the archive]
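The "template with C linkage" error arises because C++ templates cannot be given C linkage, which is what happens when a C++ system header is pulled in from inside an extern "C" block. A minimal sketch of the correct layout (the entry-point name "my_c_entry" is hypothetical, not taken from MCMCglmm): include C++ headers outside any extern "C" block, and give C linkage only to the C-callable entry point.

```cpp
// C++ system headers (which define templates) must be included OUTSIDE
// any extern "C" block; wrapping them inside one triggers
// "error: template with C linkage".
#include <vector>
#include <cstddef>

// Only the C-callable entry point gets C linkage.
extern "C" void my_c_entry(double *x, int n) {
    std::vector<double> v(x, x + n);   // template use is fine here
    for (std::size_t i = 0; i < v.size(); ++i)
        x[i] = v[i] * 2.0;             // doubles each element in place
}
```

The same ordering applies to headers such as R.h when compiled as C++: any header that transitively includes C++ system headers must not sit inside extern "C".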
Re: [Rd] Support for signing R packages with GPG
Thanks Jeroen. The R Foundation has recently formed a working group to look into package authentication. There are basically two models. One is the GPG based model you describe; the other is to use X.509 as implemented in the PKI package. It's not yet clear which way to go but we are thinking about it. Martyn On Sun, 2016-10-23 at 18:37 +0200, Jeroen Ooms wrote: > I would like to propose adding experimental support for including a > PGP signature in R source packages. This would make it possible to > verify the identity of the package author and integrity of the > package > sources. > > There are two ways to implement this. Assuming GnuPG is on the PATH, > the CMD build script could call: > > gpg --clearsign MD5 -o MD5.gpg > > Alternatively the 'gpg' R package provides a more portable method via > the gpgme C library. This method works on Windows / macOS as well. > > writeLines(gpg::gpg_sign("MD5"), "MD5.gpg") > > Attached is an example implementation of the latter (also available > at > https://git.io/vPb9G) which has been tested with several versions of > GnuPG. It exposes an optional flag for CMD build, i.e: > > R CMD build somepkg --sign > R CMD build somepkg --sign=jeroen.o...@stat.ucla.edu > > The --sign flag creates a signature for the MD5 file [1] in the > source > package and saves it as MD5.gpg (similar to a Debian 'Release.gpg' > file [2]). Obviously the package author or build server needs to have > a suitable private key in the local keyring. > > > ## Signature verification > > Once R supports signed packages, we can develop a system to take > advantage of such signatures. The verification itself can easily be > implemented via 'gpg --verify' or via gpg::gpg_verify() and could be > performed without changes in R itself. The difficult part in GPG > comes > from defining which peers should be trusted. > > But even without a 'web of trust' there are several ways one can > immediately take advantage of signatures. 
For example, when > installing a package update or dev-version of a package, we can > verify > that the signature of the update matches that of the currently > installed package. This would prevent the type of attacks where an > intermediate party pushes a fake malicious update for a popular R > package via e.g. a hacked CRAN mirror. > > Eventually, CRAN could consider allowing signatures as a secure > alternative to confirmation emails, and signing packages on the build > servers with a CRAN GPG key, similar to Debian repositories. For now, > at least establishing a format for (optionally) signing packages > would > be a great first step. > > > [1] Eventually we should add SHA256 and SHA256.sig in addition to MD5 > [2] https://cran.r-project.org/web/packages/gpg/vignettes/intro.html#debian_example > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
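The MD5 file discussed above is simply a digest manifest of the package contents, so the integrity half of the scheme can be sketched with standard tools. The file names here are illustrative, and the actual signing step (which needs a GPG private key) is shown only as a comment:

```shell
# Create a toy "package" file and an MD5 manifest for it, then verify
# the manifest with md5sum -c (prints "script.R: OK").
mkdir -p demo
printf 'x <- 1\n' > demo/script.R
( cd demo && md5sum script.R > MD5 && md5sum -c MD5 )

# Signing the manifest, as proposed, would then be:
#   gpg --clearsign -o demo/MD5.gpg demo/MD5
# and verification:
#   gpg --verify demo/MD5.gpg
```

The trust question — whose key is allowed to sign — is the hard part, as the thread notes; the digest check alone only detects tampering, not impersonation.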
Re: [Rd] strptime("1","%m") returns NA
Hi Frederik, On Mon, 2017-01-16 at 18:20 -0800, frede...@ofb.net wrote: > Hi R Devel, > > I wrote some code which depends on 'strptime' being able to parse an > incomplete date, like this: > > > > > base::strptime("2016","%Y") > [1] "2016-01-14 PST" > > The above works - although it's odd that it gives the month and day > for Sys.time(). I might expect it to set them both to zero as the GNU > libc strptime does on my system, or to use January 1 which would also > be reasonable. From the help page for strptime: "For ‘strptime’ the input string need not specify the date completely: it is assumed that unspecified seconds, minutes or hours are zero, and an unspecified year, month or day is the current one." > When I specify the month, however, I get NA: > > > > > base::strptime("2016-12","%Y-%m") > [1] NA > > > > base::strptime("1", "%m") > [1] NA > > Any reason for this to be the case? Also from the help page: "(However, if a month is specified, the day of that month has to be specified by ‘%d’ or ‘%e’ since the current day of the month need not be valid for the specified month.)" If strptime("2016-2", "%Y-%m") just filled in the current day then it would give valid output when called on the 1st to the 28th of each month, but would give either invalid output or fail when called on the 29th to the 31st of any month. This would be a nightmare to debug. The current behaviour lets you know there is a logical problem with your input. > I reported a bug here: > > https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=17212 > > but I don't think I'm getting emails from Bugzilla so maybe best to > ping me if anyone replies there instead. See the general guidance on submitting bug reports: "Code doing something unexpected is not necessarily a bug - make sure to carefully review the documentation for the function you are calling to see if the behaviour it exhibits is what it was designed to do, even if it’s not what you want." 
https://www.r-project.org/bugs.html Martyn > I've just written a simple reimplementation of 'strptime' for my own > use; I hope this bug report may be useful to others. > > Thank you, > > Frederick > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
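The documented behaviour can be summarised in a short R session (the output of the first call depends on the current date, so no literal result is shown for it):

```r
strptime("2016", "%Y")              # parses; month and day come from today
strptime("2016-12", "%Y-%m")        # NA: month given, but no %d/%e for the day
strptime("2016-12-01", "%Y-%m-%d")  # parses: the day is supplied explicitly
```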
Re: [Rd] R CMD check error
On Wed, 2017-02-08 at 15:51 -0600, Therneau, Terry M., Ph.D. wrote: > I have a local library which depends on the expm library. The expm library > is loaded into > my personal space and I have the environment variable R_LIBS_USER set > appropriately. The > command "library(expm)" works just fine from the command line, and in fact > the package > works if I do the source() and dyn.load() commands by hand. > > The following sequence works: > > tmt% R CMD build --no-build-vignettes hmm > tmt% R CMD INSTALL hmm*gz > tmt% R > > library(hmm) > > run some commands from the hmm library > > But "R CMD check hmm.gz" fails with > ERROR: dependency ‘expm’ is not available for package ‘hmm’ > * removing > ‘/people/biostat2/therneau/consult/alzheimer/hmm.Rcheck/hmm’ > > The R CMD build command fails similarly if I let it try to build the > vignettes. > > What's up? If you are setting the environment variable R_LIBS_USER in R_HOME/etc/Renviron.site or in .Renviron then this file will not be read when you run R CMD check or R CMD build, as R is then run with --vanilla which implies --no-environ. You also need to set it in these files: ~/.R/build.Renviron ~/.R/check.Renviron See R-exts section 1.3 and ?Startup. Martyn > Terry T > > > > sessionInfo() > R version 3.3.1 (2016-06-21) > Platform: x86_64-pc-linux-gnu (64-bit) > Running under: CentOS release 6.8 (Final) > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
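A minimal sketch of the suggested fix (the library path is illustrative): write the variable into the per-user files that R CMD build and R CMD check do read, even though they skip the normal startup files.

```shell
# ~/.R/check.Renviron and ~/.R/build.Renviron are read by R CMD check
# and R CMD build respectively, so R_LIBS_USER set here survives the
# implicit --vanilla / --no-environ.
mkdir -p "$HOME/.R"
echo 'R_LIBS_USER=~/Rlib' >> "$HOME/.R/check.Renviron"
echo 'R_LIBS_USER=~/Rlib' >> "$HOME/.R/build.Renviron"
```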
Re: [Rd] R CMD check error
On Thu, 2017-02-09 at 09:52 -0600, Therneau, Terry M., Ph.D. wrote: > Martin, > I am aware of --vanilla; I use it myself for some testing. In this case > R_LIBS_USER was > set externally (part of my login) and does not involve any of the R scripts. > That means > it is inherited by any subprocess. For example: > > tmt1495% R --vanilla --no-environ > > R version 3.3.1 (2016-06-21) -- "Bug in Your Hair" > Copyright (C) 2016 The R Foundation for Statistical Computing > Platform: x86_64-pc-linux-gnu (64-bit) > > > > > system("printenv | grep R_LIBS") > R_LIBS_SITE= > R_LIBS_USER=/people/biostat2/therneau/Rlib > > So, per the manual R CMD check inherits the path. The question is > why does it ignore it? Hmmm. Perhaps it is being overwritten. Does this work? $ export R_CHECK_ENVIRON= $ R CMD check hmm.gz Martyn > Terry T. > > > On 02/09/2017 02:54 AM, Martyn Plummer wrote: > > > > On Wed, 2017-02-08 at 15:51 -0600, Therneau, Terry M., Ph.D. wrote: > > > > > > I have a local library which depends on the expm library. The expm > > > library is loaded into > > > my personal space and I have the environment variable R_LIBS_USER set > > > appropriately. The > > > command "library(expm)" works just fine from the command line, and in > > > fact the package > > > works if I do the source() and dyn.load() commands by hand. > > > > > > The following sequence works: > > > > > > tmt% R CMD build --no-build-vignettes hmm > > > tmt% R CMD INSTALL hmm*gz > > > tmt% R > > > > library(hmm) > > > > run some commands from the hmm library > > > > > > But "R CMD check hmm.gz" fails with > > > ERROR: dependency ‘expm’ is not available for package ‘hmm’ > > > * removing > > > ‘/people/biostat2/therneau/consult/alzheimer/hmm.Rcheck/hmm’ > > > > > > The R CMD build command fails similarly if I let it try to build the > > > vignettes. > > > > > > What's up? 
> > > > If you are setting the environment variable R_LIBS_USER in > > R_HOME/site/Renviron.site or in .Renviron then this file will not be > > read when you run R CMD check or R CMD build, as R is then run with -- > > vanilla which implies --no-environ. > > > > You also need to set it in these files: > > > > ~/.R/build.Renviron > > ~/.R/check.Renviron > > > > See R-exts section 1.3 and ?Startup. > > > > Martyn > > > > > > > > Terry T > > > > > > > > > > sessionInfo() > > > R version 3.3.1 (2016-06-21) > > > Platform: x86_64-pc-linux-gnu (64-bit) > > > Running under: CentOS release 6.8 (Final) > > > > > > __ > > > R-devel@r-project.org mailing list > > > https://stat.ethz.ch/mailman/listinfo/r-devel > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Platform dependent native routine registration
On Tue, 2017-03-07 at 14:57 +, Gábor Csárdi wrote: > On Tue, Mar 7, 2017 at 2:51 PM, Dirk Eddelbuettel > wrote: > [...] > > > But I just found that using string literals in .Call() works just > > > fine. Hopefully > > > this will still be allowed in the long run: > > > > > > .Call("c_non_existent_function_on_this_platform", ...) > > > > So you are adjusting the literals on the fly at compilation time? > > No, I just leave them there. They are not supposed to be called on a platform > where the C function does not exist, and even if they would be, that's just an > error, which is fine. > > I could dynamically include/exclude R code at install time, but that is not so > easy, either, I would probably need to deal with the docs as well, etc. > > So I'll just leave it there You can put platform-specific R code and documentation in a subdirectory named "unix" or "windows". But there is no provision for MacOS-specific R code as far as I know. Martyn > G. > > > Dirk > > > > -- > > http://dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
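The "just leave the literal there" approach Gábor describes can be made a little safer with a run-time guard; a sketch, where the symbol and package names are hypothetical:

```r
## Only attempt the call on platforms where the routine was compiled in.
## is.loaded() checks whether the native symbol is registered/loaded.
if (.Platform$OS.type == "unix" &&
    is.loaded("c_unix_only_routine", PACKAGE = "mypkg")) {
  .Call("c_unix_only_routine", PACKAGE = "mypkg")
}
```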
Re: [Rd] Experimental CXX_STD problem in R 3.4
C++ support across different platforms is now very heterogeneous. The standard is evolving rapidly but there are also platforms in current use that do not support the recent iterations of the standard. Our goal for R 3.4.0 is to give as much flexibility as possible. The default compiler is whatever you get "out of the box" without setting the "-std=" flag. This means different things on different platforms. If you need a specific standard there are various ways to request one, as described in the R-exts manual. On unix-alikes, the capabilities of the compiler are tested at configure time and appropriate flags chosen to support each standard. On Windows, the capabilities are hard-coded and correspond to the current version of Rtools, i.e. only C++98 and C++11 are currently supported. C++17 support is experimental and was added very recently. Clang 4.0.0, which was released last week, passes the configuration tests for C++17, and so does gcc 7.0.1, the pre-release version of gcc 7.1.0 which is due out later this year. The tests for C++17 features are, however, incomplete. I have just added some code to ensure that the compilation fails with an informative error message if a specific C++ standard is requested but the corresponding compiler has not been defined. Please test this. Martyn From: R-devel on behalf of Dirk Eddelbuettel Sent: 18 March 2017 15:55 To: Jeroen Ooms Cc: r-devel Subject: Re: [Rd] Experimental CXX_STD problem in R 3.4 On 18 March 2017 at 14:21, Jeroen Ooms wrote: | R 3.4 has 'experimental' support for setting CXX_STD to CXX98 / CXX11 | / CXX14 / CXX17. R 3.1.0 introduced CXX11 support. R 3.4.0 will have CXX14 support. So I would only refer to the CXX17 part as experimental. | However on most platforms, the R configuration seems to leave the | CXX1Y and CXX1Z fields blank in "${R_HOME}/etc/Makeconf" (rather than | falling back on default CXX). 
Therefore specifying e.g CXX_STD= CXX14 | will fail build with cryptic errors (due to compiling with CXX="") That depends of course on the compiler found on the system. On my box (with g++ being g++-6.2 which _defaults_ to C++14) all is well up to CXX1Y. But I also have CXX1Z empty. | I don't think this is intended? Some examples from r-devel on Windows: | | CXX11: https://win-builder.r-project.org/R8gg703OQSq5/ | CXX98: https://win-builder.r-project.org/mpVfXxk79FaN/ | CXX14: https://win-builder.r-project.org/L3BSMgAk4cQ7/ | CXX17: https://win-builder.r-project.org/3ETZXrgkg77I/ You can't expect CXX14 and CXX17 to work with the only available compiler there, g++-4.9.3. | Similar problems appear on Linux. I think the problem is that Makeconf | contains e.g: | | CXX1Z = | CXX1ZFLAGS = | CXX1ZPICFLAGS = | CXX1ZSTD = | | When CXX_STD contains any other unsupported value (e.g. CXX24) R | simply falls back on the default CXX configuration. The same should | probably happen for e.g. CXX17 when CXX1Z is unset in Makeconf? Probably. Dirk -- http://dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
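For reference, a package requests a specific standard through a single line in src/Makevars (src/Makevars.win on Windows); the discussion above is about what happens when the corresponding compiler variable is empty in etc/Makeconf. This is a config fragment, not executable code:

```make
# src/Makevars -- request the C++14 standard for this package's
# native code (experimental in R 3.4.0; CXX98/CXX11 also accepted)
CXX_STD = CXX14
```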
Re: [Rd] Experimental CXX_STD problem in R 3.4
On Mon, 2017-03-20 at 16:38 +0100, Jeroen Ooms wrote: > On Sun, Mar 19, 2017 at 9:09 PM, Martyn Plummer > wrote: > > I have just added some code to ensure that the compilation fails > > with an informative error message if a specific C++ standard is > > requested but the corresponding compiler has not been defined. > > Please test this. > > Are you sure we shouldn't just fall back on a previous standard > instead of failing? For example if the package author has specified a > preference for CXX14 but the compiler only has CXX11, the package > might still build with -std=c++11 (given that C++14 is only a small > extension on the C++11 standard). > > The current behavior (in R 3.3) for packages with "CXX_STD=CXX11" is > to fall back on CXX when the compiler does not have CXX1X. I don't think that is true. > Will R-3.4 > start failing these packages? This would affect many users on CentOS 6 > (gcc 4.4.7). The major issue with long-term support platforms like CentOS is that the compiler is rather old. According to the GCC web site, gcc 4.4.7 has partial support for C++11 via the -std=c++0x flag (https://gcc.gnu.org/projects/cxx-status.html#cxx11). The problem is that the tests for C++11 compliance used by R's configure script have become much more stringent. If g++ 4.4.7 passed before, it is unlikely to pass now. This is an issue that I discussed here: https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17189 This creates a regression on older platforms. Some packages that used only a few C++11 features used to compile correctly but now don't, because the compiler is no longer recognized as conforming to the C++11 standard (and to be fair it never did, but the previous tests were weaker). What I suggest is that on these platforms you do a post-install patch of etc/Makeconf and set the variables for the C++11 compiler manually (CXX11, CXX11FLAGS, CXX11PICFLAGS, CXX11STD, SHLIB_CXX11LD, SHLIB_CXX11LDFLAGS). 
Martyn __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
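The suggested post-install patch of etc/Makeconf might look like the following sketch. It operates on a local copy so it can be run anywhere; on a real system MAKECONF would be "$(R RHOME)/etc/Makeconf", and the flags shown (gcc 4.4's partial-C++11 -std=c++0x) are illustrative, not prescriptive.

```shell
# Append C++11 compiler settings that configure failed to detect.
# MAKECONF is a stand-in for "$(R RHOME)/etc/Makeconf".
MAKECONF=./Makeconf.patched
touch "$MAKECONF"
cat >> "$MAKECONF" <<'EOF'
CXX11 = g++
CXX11STD = -std=c++0x
CXX11FLAGS = -g -O2
CXX11PICFLAGS = -fPIC
SHLIB_CXX11LD = $(CXX11) $(CXX11STD)
SHLIB_CXX11LDFLAGS = -shared
EOF
```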
Re: [Rd] R 3.4 has broken C++11 support
A user with the email address flying-sh...@web.de has submitted a bug report on this topic. https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17260 Assuming that you are the same person, I will address the issue here first. If you get the message “C++11 standard requested but CXX11 is not defined” then this means that there is no available C++11 compiler on your computer. The presence or absence of a working C++11 compiler is determined at configure time when R is built. The tests used by R's configure script to determine C++11 support are more stringent in R 3.4.0 than in previous versions. However, if you are using gcc you should be protected against this change. For versions of gcc prior to 4.8 (which have only partial C++11 support) the same tests as in R 3.3.x are used. This maintains the current behaviour on long-term support Linux distributions that are stuck with old gcc versions. It would help if you could share more information about your platform - i.e. the output of sessionInfo() - and specify the compiler version you are using. Martyn On Tue, 2017-04-18 at 15:11 +0200, Angerer, Philipp via R-devel wrote: > > Hi, > > This commit (I’m using the mirror to have a working link) broke C++11 > compilation. > > Before (and still now, according to the comments in the configure > script), it’s sufficient to just have “SystemRequirements: C++11” in > the DESCRIPTION file. > > > But now “R CMD install” fails with “C++11 standard requested but > CXX11 is not defined”, which is, according to the documentation, a > lie. > > I can’t even circumvent this, as setting “CXX11=$(CXX)” in the > src/Makevars file fails with “CXX definition recursive”, and > hardcoding “CXX11=g++” is a bad idea. > > Did I do sth. wrong or is the C++11 support in R just broken atm.? > > > Best, Philipp > > PS: After addressing all points in the submission of my popular > package “IRkernel”, I didn’t get any feedback, and the file just > vanished from the incoming directory in CRAN. 
I asked about it > multiple times but got no answer. What can I do now? > > > > > Helmholtz Zentrum Muenchen > > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > > Ingolstaedter Landstr. 1 > > 85764 Neuherberg > > www.helmholtz-muenchen.de > > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. > Alfons Enhsen > > Registergericht: Amtsgericht Muenchen HRB 6466 > > USt-IdNr: DE 129521671 > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 3.4 has broken C++11 support
On Wed, 2017-04-19 at 12:42 +0200, Angerer, Philipp wrote: > Hi! > > Well, my linux distribution has very recent versions > of everything, so a working C++11 compiler exists: > > $ gcc --version | head -n1 > gcc (GCC) 6.3.1 20170306 I am on Fedora 25 which also uses gcc 6.3.1. The default standard for 6.3.1 is C++14. The output from configure should contain these lines: Default C++ compiler: g++ C++98 compiler:g++ -std=gnu++98 C++11 compiler:g++ -std=gnu++11 C++14 compiler:g++ C++17 compiler: Please check. Martyn > Could wrong ./configure options be at fault here? See: > > https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=r-devel#n40 > > My sessionInfo(): > > $ R-devel --slave -e 'sessionInfo()' | head -n3 > R Under development (unstable) (2017-04-18 r72542) > Platform: x86_64-pc-linux-gnu (64-bit) > Running under: Arch Linux > > Thanks, Philipp > > > Helmholtz Zentrum Muenchen > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > Ingolstaedter Landstr. 1 > 85764 Neuherberg > www.helmholtz-muenchen.de > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. > Alfons Enhsen > Registergericht: Amtsgericht Muenchen HRB 6466 > USt-IdNr: DE 129521671 > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 3.4 has broken C++11 support
On Wed, 2017-04-19 at 13:17 +0200, Angerer, Philipp wrote: > Hmm, doesn’t look like my R was configured incorrectly: That looks fine. Can you please give a reproducible example of a package that compiles correctly on R 3.3.3 but not with R 3.4.0 or R- devel. Martyn > > > R is now configured for x86_64-pc-linux-gnu > > Source directory: . > Installation directory:/opt/r-devel > > C compiler:gcc -march=x86-64 -mtune=generic -O2 -pipe > -fstack-protector-strong --param=ssp-buffer-size=4 > Fortran 77 compiler: gfortran -g -O2 > > Default C++ compiler: g++ -march=x86-64 -mtune=generic -O2 -pipe > -fstack-protector-strong --param=ssp-buffer-size=4 > C++98 compiler:g++ -std=gnu++98 -march=x86-64 -mtune=generic > -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 > C++11 compiler:g++ -std=gnu++11 -march=x86-64 -mtune=generic > -O2 -pipe -fstack-protector-strong --param=ssp-buffer-size=4 > C++14 compiler:g++ -march=x86-64 -mtune=generic -O2 -pipe > -fstack-protector-strong --param=ssp-buffer-size=4 > C++17 compiler: > Fortran 90/95 compiler:gfortran -g -O2 > Obj-C compiler: > > Interfaces supported: X11, tcltk > External libraries:readline, BLAS(generic), LAPACK(generic), curl > Additional capabilities: PNG, JPEG, TIFF, NLS, cairo, ICU > Options enabled: shared R library, R profiling > > Capabilities skipped: > Options not enabled: shared BLAS, memory profiling > > Recommended packages: yes > > - Ursprüngliche Mail - > Von: "Angerer, Philipp" > An: "Martyn Plummer" > CC: "r-devel" > Gesendet: Mittwoch, 19. April 2017 12:42:33 > Betreff: Re: [Rd] R 3.4 has broken C++11 support > > Hi! > > Well, my linux distribution has very recent versions > of everything, so a working C++11 compiler exists: > > $ gcc --version | head -n1 > gcc (GCC) 6.3.1 20170306 > > Could wrong ./configure options be at fault here? 
See: > > https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=r-devel#n40 > > My sessionInfo(): > > $ R-devel --slave -e 'sessionInfo()' | head -n3 > R Under development (unstable) (2017-04-18 r72542) > Platform: x86_64-pc-linux-gnu (64-bit) > Running under: Arch Linux > > Thanks, Philipp > > > > Helmholtz Zentrum Muenchen > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > Ingolstaedter Landstr. 1 > 85764 Neuherberg > www.helmholtz-muenchen.de > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. > Alfons Enhsen > Registergericht: Amtsgericht Muenchen HRB 6466 > USt-IdNr: DE 129521671 > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 3.4 has broken C++11 support
On Wed, 2017-04-19 at 16:32 +0200, Angerer, Philipp wrote: > Hi Dirk and Martyn, > > > That looks fine. Can you please give a reproducible example of a > > package > > that compiles correctly on R 3.3.3 but not with R 3.4.0 or R-devel. > > here you go, it’s pretty much the simplest package possible that > needs C++11: > > https://github.com/flying-sheep/cxx11test This works for me (See below). Make sure you are not overwriting some key variables in a personal Makevars file or a site-wide Makevars.site file. [plummerm@D-160182 temp]$ ~/R-devel/r-devel/build/bin/R CMD INSTALL cxx11test_1.0.tar.gz * installing to library ‘/home/plummerm/R-devel/r-devel/build/library’ * installing *source* package ‘cxx11test’ ... ** libs g++ -std=gnu++11 -I/home/plummerm/R-devel/r-devel/build/include -DNDEBUG -I"/home/plummerm/R-devel/r-devel/build/library/Rcpp/include" -I/usr/local/include -fpic -g -O2 -c RcppExports.cpp -o RcppExports.o g++ -std=gnu++11 -I/home/plummerm/R-devel/r-devel/build/include -DNDEBUG -I"/home/plummerm/R-devel/r-devel/build/library/Rcpp/include" -I/usr/local/include -fpic -g -O2 -c test.cpp -o test.o g++ -std=gnu++11 -shared -L/usr/local/lib64 -o cxx11test.so RcppExports.o test.o installing to /home/plummerm/R-devel/r-devel/build/library/cxx11test/libs ** R ** preparing package for lazy loading ** help No man pages found in package ‘cxx11test’ *** installing help indices ** building package indices ** testing if installed package can be loaded * DONE (cxx11test) Martyn > > Maybe you can share with us how you configure the build of R-devel? 
> > Sure, in the mail you quoted, I already linked exactly that: > > https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=r-devel#n40 > > > ./configure --prefix=/opt/r-devel \ > > --libdir=/opt/r-devel/lib \ > > --sysconfdir=/etc/R-devel \ > > --datarootdir=/opt/r-devel/share \ > > rsharedir=/opt/r-devel/share/R/ \ > > rincludedir=/opt/r-devel/include/R/ \ > > rdocdir=/opt/r-devel/share/doc/R/ \ > > --with-x \ > > --enable-R-shlib \ > > --with-lapack \ > > --with-blas \ > > F77=gfortran \ > > LIBnn=lib > > > Thanks and cheers, > Philipp > > > Helmholtz Zentrum Muenchen > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > Ingolstaedter Landstr. 1 > 85764 Neuherberg > www.helmholtz-muenchen.de > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. > Alfons Enhsen > Registergericht: Amtsgericht Muenchen HRB 6466 > USt-IdNr: DE 129521671 > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Default R-3.4.0 RC CXXFLAGS without -O2 on x86_64-linux-gnu with g++-5.4.0 causes WARNING from stl_list.h
This is fixed in R-rc_2017-04-19_r72555.tar.gz If you are affected by this issue then please test the RC tarball. This is the last chance to detect problems (including those created by the last-minute patch) before the release of R 3.4.0. Martyn On Wed, 2017-04-19 at 12:19 +, Neumann, Steffen wrote: > Hi r-devel, > > a recent install of R-3.4.0 RC (2017-04-13 r72510) > on Linux (Ubuntu 16.04.1 LTS) x86_64-linux-gnu > with g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609 > (see http://bioconductor.org/checkResults/devel/bioc-LATEST/malbec2-N > odeInfo.html for more) > results in CXXFLAGS not containing "-O2" as optimisation flag, > there is only " -Wall", while CFLAGS are happy with "-g -O2 > -Wall" > > This has an influence in at least one place > https://stat.ethz.ch/pipermail/bioc-devel/2017-April/010733.html > where we have WARNINGS in R CMD check from "Found ‘abort’, > possibly from ‘abort’ (C)" in packages xcms/mzR. > The abort() call is not coming from XCMS, but rather > from the C++ code in the STL: > > [...] > # 1770 "/usr/include/c++/5/bits/stl_list.h" > void _M_check_equal_allocators(list& __x) { > if (_M_get_Node_allocator()) > __builtin_abort(); > } > > If we compile with -O2 optimisation, this getting rid of > the abort() symbol, as shown > in https://github.com/sneumann/xcms/issues/150#issuecomment-293545521 > > Martin Morgan created a minimum example that shows that > the symbol is indeed deep down in the STL (see below and in: > https://stat.ethz.ch/pipermail/bioc-devel/2017-April/010837.html ) > > This raises several questions: > > 1) is there any way to avoid the WARNING / abort() inside > the STL list implementation ? Or just live with it ? > > 2) If not, is there a reason why the Bioconductor build farm > Ubuntu machine is not using -O2 as default CXXFLAG ? > BioC admins are trying to have a vanilla R installation with > defaults. 
> According to Herve Pages, CXXFLAGS without -O2 is default > since R-3.4 beta, but I don't know enough about the package > build logic to point to a particular R commit. > > 3) I thought about cheating the system and add -O2 > in the package CXXFLAGS, but Martin Morgan > recommends packages shouldn't mess and override system build > defaults > to mask and paper over the actual issue having a nasty abort() > lurking somewhere. > But I couldn't add PKG_CXXFLAGS=-O2 in first place, since that > triggers the different WARNING that -O2 is not portable. > > => Any help and input would be highly appreciated. > > Thanks in advance, > yours, > Steffen > > > tmp.cpp by Martin Morgan (also in above linked mail thread) > --- > #include <list> > > int foo(int argc, const char *argv[]) { > std::list<int> l1, l2; > std::list<int>::iterator it; > > it = l1.begin(); > l1.splice (it, l2); // mylist1: 1 10 20 30 2 3 4 > > return 0; > } > --- > > Test with > > rm -f tmp.o && R CMD SHLIB tmp.cpp && nm tmp.o | grep abort > > with compiler settings in ~/.R/Makevars with/without -O2 > - > CXXFLAGS = -g -O0 > - > > > > > > -- > IPB Halle AG Massenspektrometrie & > Bioinformatik > Dr. Steffen Neumann http://www.IPB-Halle.DE > Weinberg 3 Tel. +49 (0) 345 5582 - 1470 > 06120 Halle +49 (0) 345 5582 - 0 > sneumann(at)IPB-Halle.DE Fax. +49 (0) 345 5582 - 1409 > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] prefixed zlib and bzip2 headers
If you are having any trouble compiling R on RHEL or its derivatives, it is worth recalling that a binary distribution of R is provided through the EPEL (Extra Packages for Enterprise Linux) repository: https://fedoraproject.org/wiki/EPEL Install the appropriate epel-release RPM to enable the repository. Then you can install R via dnf as you would any other software package. Tom Callaway from Red Hat maintains the R rpms and he ensures that they still build and install on RHEL 5 and 6. Specifically, for zlib and other system libraries that are too old on these platforms, up-to-date versions are built and statically linked into R during the RPM build process. Having said that, if you need to install R from source on RHEL 5 or 6 then you need to specify the locations of the locally-installed libraries and headers. This is done at configure time via CFLAGS and LDFLAGS (See the R-admin manual, section B3.3), e.g. CFLAGS="-g -O2 -I/path/to/my/headers" \ LDFLAGS="-L/path/to/my/libs" \ ./configure If you do this then you do not need to set LD_LIBRARY_PATH at runtime. Library locations specified via LDFLAGS are collated and automatically added to LD_LIBRARY_PATH (See R-admin, section B7). Martyn On Thu, 2017-04-27 at 14:41 +, Jones, Michael wrote: > Hello, > > I'm trying to compile R-3.3.3 or R-3.4.0 on a RHEL6 system that I've > prefixed the latest headers down a shared utility path, I've sourced > this path in LD_LIBRARY_LATH, R_LD_LIBRARY_PATH, and dropping the > headers down /src/include which appears to be the default > R_include_dir but it will not accept these any other place than > /usr/include. Is there a way to properly define a prefixed > includedir and libdir for R that I'm missing? I was able to do this > in 3.2.4-revised successfully. I see the comments that -with-system- > zlib is now default, but I see no options to override that. Can you > please point me in the correct direction? > > > Many thanks! 
> > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] bug report: nlme model-fitting crashes with R 3.4.0
On Thu, 2017-05-11 at 06:37 -0500, Dirk Eddelbuettel wrote: > On 11 May 2017 at 10:17, Erwan Le Pennec wrote: > > Dear all, > > > > I've stumbled a similar issue with the package cluster when > > compiling the 3.4.0 version with the settings of Fedora RPM specs. > > Compiling R with the default setting of configure yields a version > > that > > works for cluster... and nlme. > > > > I did not find the exact option that was the cause of this > > issue > > but I'm willing to help. > > > > Erwan > > > > PS: This is the reason why R is still at version 3.3.3 on the > > Fedora > > distribution. > > > > On 10/05/17 22:59, Langbehn, Douglas wrote: > > > lme() and gls() models from the nlme package are all crashing > > > with R.3.4.0. Identical code ran correctly, without error in R > > > 3.3.3 and earlier versions. The behavior is easily demonstrated > > > using one of the examples form the lme() help file, along with > > > two simple variants. I have commented the errors generated by > > > these calls, as well as the lines of code generating them, in the > > > code example below. > > > > > > As of today, this bug had not been reported on the R Bugzilla > > > page. I could not submit this report directly to the page because > > > I am not a member, and , as explained in the "Reporting Bugs" > > > link from the R home page, membership has now been closed due to > > > spamming problems.. > > > > > > # > > > ### > > > library(nlme) > > > #Using version 3.1-131 > > > #Windows 7 64-bit operating system > > > > > > fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1) > > > > > > # Error in array(c(rep(1, p), .C(inner_perc_table, as.double(X), > > > as.integer(unlist(grps)), : > > > # object 'inner_perc_table' not found > > That is a known issue with R 3.4.0 -- see NEWS. > > Packages using .C and .Fortran _must_ be recompiled for R 3.4.0. If > and when > you do, the example will work again. 
> > Dirk However, the issue raised by Erwan on Fedora is a real bug which affects at least two recommended packages. I know the cause of the problem and am trying to find out how many packages are affected. Martyn __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] bug report: nlme model-fitting crashes with R 3.4.0
On Thu, 2017-05-11 at 12:23 +, Martyn Plummer wrote: > On Thu, 2017-05-11 at 06:37 -0500, Dirk Eddelbuettel wrote: > > On 11 May 2017 at 10:17, Erwan Le Pennec wrote: > > > Dear all, > > > > > > I've stumbled a similar issue with the package cluster when > > > compiling the 3.4.0 version with the settings of Fedora RPM > > > specs. > > > Compiling R with the default setting of configure yields a > > > version > > > that > > > works for cluster... and nlme. > > > > > > I did not find the exact option that was the cause of this > > > issue > > > but I'm willing to help. > > > > > > Erwan > > > > > > PS: This is the reason why R is still at version 3.3.3 on the > > > Fedora > > > distribution. > > > > > > On 10/05/17 22:59, Langbehn, Douglas wrote: > > > > lme() and gls() models from the nlme package are all crashing > > > > with R.3.4.0. Identical code ran correctly, without error in R > > > > 3.3.3 and earlier versions. The behavior is easily > > > > demonstrated > > > > using one of the examples form the lme() help file, along with > > > > two simple variants. I have commented the errors generated by > > > > these calls, as well as the lines of code generating them, in > > > > the > > > > code example below. > > > > > > > > As of today, this bug had not been reported on the R Bugzilla > > > > page. I could not submit this report directly to the page > > > > because > > > > I am not a member, and , as explained in the "Reporting Bugs" > > > > link from the R home page, membership has now been closed due > > > > to > > > > spamming problems.. 
> > > > > > > > ### > > > > ## > > > > ### > > > > library(nlme) > > > > #Using version 3.1-131 > > > > #Windows 7 64-bit operating system > > > > > > > > fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ > > > > 1) > > > > > > > > # Error in array(c(rep(1, p), .C(inner_perc_table, > > > > as.double(X), > > > > as.integer(unlist(grps)), : > > > > # object 'inner_perc_table' not found > > > > That is a known issue with R 3.4.0 -- see NEWS. > > > > Packages using .C and .Fortran _must_ be recompiled for R 3.4.0. If > > and when > > you do, the example will work again. > > > > Dirk > > However, the issue raised by Erwan on Fedora is a real bug which > affects at least two recommended packages. I know the cause of the > problem and am trying to find out how many packages are affected. Sorry for the false alarm. Dirk is right and the problem is the binary incompatibility between 3.4.0 and 3.3.3. If you try building the R source RPM with R 3.4.0 *without using a chroot* then you get interference from the installed version of R (i.e. 3.3.3) when the %check section of the spec file is run. This is not a bug in the spec file because you are not supposed to build an SRPM this way. If you build the SRPM *using mock* (the chroot tool used by Red Hat) then the build fails for a completely different reason. The chroot is not set up with time zone information and this triggers one of the regression tests. I'm filing a bug report with Red Hat. So hopefully we will see RPMs of R 3.4.0 for Fedora soon. Martyn > Martyn > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] [R] R-3.4.0 fails test
> On 18 May 2017, at 14:51, peter dalgaard wrote: > > >> On 18 May 2017, at 13:47 , Joris Meys wrote: >> >> Correction: Also dlt uses the default timezone, but POSIXlt is not >> recalculated whereas POSIXct is. Reason for that is the different way values >> are stored (hours, minutes, seconds as opposed to minutes from origin, as >> explained in my previous mail) >> > > I would suspect that there is something more subtle going on, New Zealand > time is 10, 11, or 12 hours from Central European, depending on time of year > (10 in our Summer, 12 in theirs and 11 during the overlap at both ends, if > you must know), and we are talking a 1 hour difference. > > However DST transitions were both in March/April, so that's not it. Maybe a > POSIX[lc]t expert can comment? If I change the month from December to June then I see the same phenomenon in my Europe/Paris time zone. The issue seems to be that, for the date chosen for the test, Summer/daylight savings time is in force in NZ and some other parts of the southern hemisphere , but not in the northern hemisphere. Martyn > -pd > >> CHeers >> Joris >> >> On Thu, May 18, 2017 at 1:45 PM, Joris Meys wrote: >> This has to do with your own timezone. If I run that code on my computer, >> both formats are correct. If I do this after >> >> Sys.setenv(TZ = "UTC") >> >> Then: >> >>> cbind(format(dlt), format(dct)) >> [,1] [,2] >> [1,] "2016-12-06 21:45:41" "2016-12-06 20:45:41" >> [2,] "2016-12-06 21:45:42" "2016-12-06 20:45:42" >> >> The reason for that, is that dlt has a timezone set, but dct doesn't. To be >> correct, it only takes the first value "", which indicates "Use the default >> timezone of the locale". >> >>> attr(dlt, "tzone") >> [1] "" "CET" "CEST" >>> attr(dct, "tzone") >> [1] "" >> >> The thing is, in POSIXlt the timezone attribute is stored together with the >> actual values for hour, minute etc. in list format. 
Changing the timezone >> doesn't change those values, but it will change the time itself: >> >>> Sys.unsetenv("TZ") >>> dlt2 <- dlt >>> attr(dlt2,"tzone") <- "UTC" >>> dlt2 >> [1] "2016-12-06 21:45:41 UTC" "2016-12-06 21:45:42 UTC" >> [3] "2016-12-06 21:45:43 UTC" "2016-12-06 21:45:44 UTC" >> >> in POSIXct the value doesn't change either, just the attribute. But this >> value is the number of seconds since the origin. So the time itself doesn't >> change, but you'll see a different hour. >> >>> dct >> [1] "2016-12-06 21:45:41 CET" "2016-12-06 21:45:42 CET" >> ... >>> attr(dct,"tzone") <- "UTC" >>> dct >> [1] "2016-12-06 20:45:41 UTC" "2016-12-06 20:45:42 UTC" >> [3] "2016-12-06 20:45:43 UTC" "2016-12-06 20:45:44 UTC" >> >> So what you see, is simply the result of your timezone settings on your >> computer. >> >> Cheers >> Joris >> >>> On Thu, May 18, 2017 at 1:19 PM, peter dalgaard wrote: >>> >>> On 18 May 2017, at 11:00 , Patrick Connolly >>> wrote: >>> >>> On Wed, 17-May-2017 at 01:21PM +0200, Peter Dalgaard wrote: >>> >>> |> >>> |> Anyways, you might want to >>> |> >>> |> a) move the discussion to R-devel >>> |> b) include your platform (hardware, OS) and time zone info >>> >>> System:Host: MTA-V1-427894 Kernel: 3.19.0-32-generic x86_64 (64 bit >>> gcc: 4.8.2) >>> Desktop: KDE Plasma 4.14.2 (Qt 4.8.6) Distro: Linux Mint 17.3 Rosa >> >> I suppose that'll do... 
>> >> >>> Time zone: NZST >> >> >> >>> >>> |> c) run the offending code lines "by hand" and show us the values of >>> format(dlt) and format(dct) so we can see what the problem is, something >>> like >>> |> >>> |> dlt <- structure( >>> |> list(sec = 52, min = 59L, hour = 18L, mday = 6L, mon = 11L, year = >>> 116L, >>> |>wday = 2L, yday = 340L, isdst = 0L, zone = "CET", gmtoff = 3600L), >>> |>class = c("POSIXlt", "POSIXt"), tzone = c("", "CET", "CEST")) >>> |> dlt$sec <- 1 + 1:10 >>> |> dct <- as.POSIXct(dlt) >>> |> cbind(format(dlt), format(dct)) >>> cbind(format(dlt), format(dct)) >>> [,1] [,2] >>> [1,] "2016-12-06 21:45:41" "2016-12-06 22:45:41" >>> [2,] "2016-12-06 21:45:42" "2016-12-06 22:45:42" >>> [3,] "2016-12-06 21:45:43" "2016-12-06 22:45:43" >>> [4,] "2016-12-06 21:45:44" "2016-12-06 22:45:44" >>> [5,] "2016-12-06 21:45:45" "2016-12-06 22:45:45" >>> [6,] "2016-12-06 21:45:46" "2016-12-06 22:45:46" >>> [7,] "2016-12-06 21:45:47" "2016-12-06 22:45:47" >>> [8,] "2016-12-06 21:45:48" "2016-12-06 22:45:48" >>> [9,] "2016-12-06 21:45:49" "2016-12-06 22:45:49" >>> [10,] "2016-12-06 21:45:50" "2016-12-06 22:45:50" >>> >> >> >> So exactly 1 hour out of whack. Is there a Daylight Saving Times issue, >> perchance? >> >> -pd >> >> >>> >>> -- >>> ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~. >>> ___Patrick Connolly >>> {~._.~} Great minds discuss ideas >>> _( Y )_Average minds discuss events >>> (:_~*~_:) Small minds discuss p
Re: [Rd] [R] R-3.4.0 fails test
I have fixed this in R-devel and will port it over to the R release branch in due course. The underlying issue is that the conversion from POSIXlt to POSIXct uses the local time zone and not the CET time zone. I believe this is a bug, but I will take up that discussion elsewhere. Martyn From: Patrick Connolly Sent: 19 May 2017 06:13 To: Martin Maechler Cc: peter dalgaard; Martyn Plummer; Joris Meys; R-devel Subject: Re: [Rd] [R] R-3.4.0 fails test On Thu, 18-May-2017 at 05:46PM +0200, Martin Maechler wrote: |> . |> |> Being pretty "stretched" time wise currently, I'm happy for |> timezone-portable propositions to change the test. Meantime, anyone who lives where DST happpens in December who wants to get through the remaining tests can avoid this one by changing the line > stopifnot(length(fd) == 10, identical(fd, format(dct <- as.POSIXct(dlt to > stopifnot(length(fd) == 10, identical(fd, format(dct <- as.POSIXlt(dlt (which effectively isn't testing anything much) A less lazy way would be to comment out the relevant lines. |> |> Martin -- ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~. ___Patrick Connolly {~._.~} Great minds discuss ideas _( Y )_ Average minds discuss events (:_~*~_:) Small minds discuss people (_)-(_) . Eleanor Roosevelt ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~. --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Getting an R bugzilla account
Thanks for your help Nathan. I have added a bugzilla account for you. Martyn On Tue, 2017-05-23 at 21:02 +, Nathan Sosnovske via R-devel wrote: > Hi All, > > I have a fix to this bug ( https://bugs.r-project.org/bugzilla3/show_ > bug.cgi?id=16454) and would like to submit a patch to the bug report > on Bugzilla. I'd also like to start going through some of the other > Windows-specific issues and start fixing those. The bug submission > instructions indicate that I should ask here for a Bugzilla account. > Is that still the correct procedure? > > Thanks! > > Nathan > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Question about R developpment
I would describe MRO as a distribution of R, in the same way that Fedora, Debian, SUSE etc are distributions of Linux. It is not fundamentally different from the version of R that you can download from CRAN but the binary builds offer some specific features: 1) The binary build is linked to the Intel Math Kernel Library (MKL) which may increase the speed of some matrix operations 2) Packages are downloaded from MRAN, Microsoft's time-stamped copy of CRAN. This can help with reproducibility of analyses that rely on CRAN packages. As far as I know, all of the additional packages that are bundled with MRO are freely distributable and also available from CRAN. As Roy points out, both Microsoft and the R Foundation are partners in the R Consortium. So we do talk to each other as well as other stakeholders who participate in the Consortium. Martyn From: R-devel on behalf of Duncan Murdoch Sent: 11 June 2017 00:09 To: Morgan; r-devel@r-project.org Subject: Re: [Rd] Question about R developpment On 10/06/2017 2:38 PM, Morgan wrote: > Hi, > > I had a question that might not seem obvious to me. > > I was wondering why there was no patnership between microsoft the R core > team and eventually other developpers to improve R in one unified version > instead of having different teams developping their own version of R. As far as I know, there's only one version of R currently being developed. Microsoft doesn't offer anything different; they just offer a build of a slightly older version of base R, and a few packages that are not in the base version. Duncan Murdoch __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] hpc r configure not recognizing zlib
You do not need to compile R from source on RHEL 6. If you enable the EPEL repository then you can install the binary RPM via yum. See https://fedoraproject.org/wiki/EPEL Tom Callaway, who maintains the Red Hat binaries of R, statically links up-to-date versions of bzip2, xz, pcre, and curl into R. Martyn PS The R-SIG-Fedora list is a more appropriate mailing list for this topic: https://stat.ethz.ch/mailman/listinfo/r-sig-fedora On Thu, 2017-06-22 at 14:19 +, Hamidi, Bashir wrote: > System: Red Hat Enterprise Linux Server release 6.5 (Santiago) > > I’ve installed zlib 1.2.11 on the home folder of a Red Hat HPC as > part of the process for installing R base 3.4.0. > > I get this error even after successful install of zlib > checking for inflateInit2_ in -lz... no > checking whether zlib support suffices... configure: error: zlib > library and headers are required > > I’ve checked R documentation and configure file for the issue of R > requiring version newer than 1.2.6 but not lexicographically > recognizing 1.2.11 as >1.2.6, and that particular bug was patched in > 3.4. > > Any suggestion and/or input would be much appreciated. > > Regards, > > > Bashir > > > > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Duncan's retirement: who's taking over Rtools?
David, I think ideally we want to appoint a single person to take primary responsibility for the Windows builds. We are currently discussing this within R Core. We also recognize that Microsoft is a stakeholder in R for Windows. The same is true of other members of the R Consortium. Going forward, it would be useful if all stakeholders could contribute to both support and strategic discussions in this area. best Martyn On Thu, 2017-09-28 at 15:47 +, David Smith via R-devel wrote: > Likewise, a hearty THANK YOU from me and the rest of the team at > Microsoft for all the work you, Duncan, have put into making R > available for Windows users around the world over the past 15 years. > I know it wasn't easy (Windows is not without its quirks), but R > users everywhere, ourselves included, are deeply appreciative and > have benefited greatly. > > The Microsoft R team is willing and able to produce builds for R on > Windows going forward. As Duncan noted, we've been doing this already > for some time for MRAN. I'd love to hear thoughts from this community > on what that might mean, and Duncan I'll also reach out to you > directly off-list. > > Cheers, > # David > > -Original Message- > From: R-devel [mailto:r-devel-boun...@r-project.org] On Behalf Of > Joris Meys > Sent: Thursday, September 28, 2017 08:28 > To: r-devel@r-project.org > Subject: [Rd] Duncan's retirement: who's taking over Rtools? > > Dear dev team, > > I was sorry to see the announcement of Duncan about his retirement > from maintaining the R Windows build and Rtools. Duncan, thank you > incredibly much for your 15 years of devotion and your impressive > contribution to the R community as a whole. > > Thinking about the future, I wondered whether there were plans for > the succession of Duncan. Is it the intention to continue providing > Rtools and a Windows build, or are these tasks left open for anyone > (possibly Microsoft itself) to take them over? And if so, how will > the decision be made on that? 
> > Cheers > Joris > -- > --- > > Biowiskundedagen 2017-2018 > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.b > iowiskundedagen.ugent.be%2F&data=02%7C01%7Cdavidsmi%40microsoft.com%7 > C2fd515da9138451611b508d50685822b%7C72f988bf86f141af91ab2d7cd011db47% > 7C1%7C0%7C636422092884858146&sdata=BK7GESC6ladsk6cig0ima%2BbdV1sQ5Gde > ng%2BhWvtgwj4%3D&reserved=0 > > --- > > Joris Meys > Statistical consultant > > Ghent University > Faculty of Bioscience Engineering > Department of Mathematical Modelling, Statistics and Bio-Informatics > > tel : +32 (0)9 264 61 79 > joris.m...@ugent.be > --- > Disclaimer : https://na01.safelinks.protection.outlook.com/?url=http% > 3A%2F%2Fhelpdesk.ugent.be%2Fe- > maildisclaimer.php&data=02%7C01%7Cdavidsmi%40microsoft.com%7C2fd515da > 9138451611b508d50685822b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7 > C636422092884858146&sdata=PFbW9gv7%2Byi6puj42LyWHPPBqeYd83L3oQunaLTTS > nw%3D&reserved=0 > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fstat > .ethz.ch%2Fmailman%2Flistinfo%2Fr- > devel&data=02%7C01%7Cdavidsmi%40microsoft.com%7C2fd515da9138451611b50 > 8d50685822b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636422092884 > 858146&sdata=7ZzH9QJUaGLOIR8u2b72PMK6ze7r7hk0mleytyLC7pk%3D&reserved= > 0 > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Illegal Logical Values
On Fri, 2017-10-20 at 14:01 +, brodie gaslam via R-devel wrote: > I'm wondering if WRE Section 5.2 should be a little more explicit > about misuse of integer values other than NA, 0, and 1 in LGLSXPs. > I'm thinking of this passage: > > > Logical values are sent as 0 (FALSE), 1 (TRUE) or INT_MIN = > > -2147483648 (NA, but only if NAOK is true), and the compiled code > > should return one of these three values. (Non-zero values other > > than INT_MIN are mapped to TRUE.) > > The parenthetical seems to suggest that something like 'LOGICAL(x)[0] > = 2;' will be treated as TRUE, which it sometimes is, and sometimes > isn't: The title of Section 5.2 is "Interface functions .C and .Fortran" and the text above refers to those interfaces. It explains how logical vectors are mapped to C integer arrays on entry and back again on exit. This does work as advertised. Here is a simple example. File "nottrue.c" contains the text void nottrue(int *x) { x[0] = 2; } This is compiled with "R CMD SHLIB nottrue.c" to create the shared object "nottrue.so" > dyn.load("nottrue.so") > a <- .C("nottrue", x=integer(1)) > a $x [1] 2 > a <- .C("nottrue", x=logical(1)) > a $x [1] TRUE > isTRUE(a$x) [1] TRUE > as.integer(a) [1] 1 So for a logical argument, the integer value 2 is mapped back to a valid value on return. > not.true <- inline::cfunction(body=' > SEXP res = allocVector(LGLSXP, 1); > LOGICAL(res)[0] = 2; > return res;' > )() > not.true > ## [1] TRUE > not.true == TRUE > ## [1] FALSE > not.true[1] == TRUE # due to scalar subset handling > ## [1] TRUE > not.true == 2L > ## [1] TRUE In your last example, not.true is coerced to integer (as explained in the help for "==") and its integer value of 2 is recovered. > Perhaps a more explicit warning that using anything other than 0, 1, > or NA is undefined behavior is warranted? Obviously people should > know better than to expect correct behavior, but the fact that the > behavior is correct in some cases (e.g. 
printing, scalar subsetting) > might be confusing. Yes if people are tripping up on this then we could clarify that the .Call interface does not remap logical vectors on exit. Hence assignment of any value other than 0, 1 or INT_MIN to the elements of a logical vector may cause unexpected behaviour when this vector is returned to R. Martyn > > > Best, > B.rodie. > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
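The exit mapping that WRE 5.2 describes for `.C` logical arguments can be sketched in plain C. The helper below is illustrative, not R's actual implementation, though NA really is represented by INT_MIN in R's C API:

```c
#include <limits.h>

/* Sketch of the exit mapping for .C logical arguments described in
 * WRE 5.2: INT_MIN stays NA, zero stays FALSE, and any other value
 * is mapped to TRUE (1).  This is why the .C("nottrue", logical(1))
 * call above comes back as a well-formed TRUE. */
int map_logical_out(int x) {
    if (x == INT_MIN) return INT_MIN;   /* NA passes through (NAOK) */
    return x != 0;                      /* 0 -> FALSE, non-zero -> TRUE */
}
```

By contrast, `.Call` hands the LGLSXP back to R with no such remapping, so an out-of-range value like 2 survives and produces the inconsistent comparisons shown above.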
Re: [Rd] Problem following an R bug fix to integrate()
On Tue, 2013-07-16 at 13:55 +0200, Hans W Borchers wrote: > I have been told by the CRAN administrators that the following code generated > an error on 64-bit Fedora Linux (gcc, clang) and on Solaris machines (sparc, > x86), but runs well on all other systems): > > > fn <- function(x, y) ifelse(x^2 + y^2 <= 1, 1 - x^2 - y^2, 0) > > > tol <- 1.5e-8 > > fy <- function(x) integrate(function(y) fn(x, y), 0, 1, > subdivisions = 300, rel.tol = tol)$value > > Fy <- Vectorize(fy) > > > xa <- -1; xb <- 1 > > Q <- integrate(Fy, xa, xb, > subdivisions = 300, rel.tol = tol)$value > > Error in integrate(Fy, xa, xb, subdivisions = 300, rel.tol = tol) : > roundoff error was detected > > Obviously, this realizes a double integration, split up into two 1-dimensional > integrations, and the result shall be pi/4. I wonder what a 'roundoff error' > means in this situation. > > In my package, this test worked well, w/o error or warnings, since July 2011, > on Windows, Max OS X, and Ubuntu Linux. I have no chance to test it on one of > the above mentioned systems. Of course, I can simply disable these tests, but > I would not like to do so w/o good reason. > > If there is a connection to a bug fix to integrate(), with NEWS item > > "integrate() reverts to the pre-2.12.0 behaviour. (PR#15219)", > > then I do not understand what this pre-2.12.0 behavior really means. > > Thanks for any help or a hint to what shall be changed. You can see the bug report here: https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15219 It concerns the behaviour of integrate with a small error tolerance. From 2.12.0 to 3.0.1 integrate was not working correctly with small error tolerance values, in the sense that small values did not improve accuracy and the accuracy was mis-reported. The tolerance in your example (1.5e-8) is considerably smaller than the default (1.2e-4). My guess is that the rounding error always existed but was not detected due to the bug. You might try a larger tolerance. 
I have tested your example and increasing the tolerance to 1.5e-7 removes the error. Martyn > Hans W Borchers > > PS: > This kind of tricky definition in function 'fn' has caused some discussion on > this list in July 2009. I still think it should be allowed to proceed in this > way. > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
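A numeric aside on why a tolerance of 1.5e-8 sits right on the edge (this framing is mine, not from the thread): for double-precision quadrature, the practical floor for a relative error request is commonly taken to be the square root of machine epsilon, which is just below the tolerance used in the failing example.

```c
#include <float.h>
#include <math.h>

/* sqrt(DBL_EPSILON) = 2^-26, about 1.49e-8: a common practical floor
 * for relative error requests in double-precision quadrature.  The
 * rel.tol of 1.5e-8 in the failing example sits essentially on it,
 * so roundoff can plausibly dominate the requested accuracy. */
double practical_tol_floor(void) {
    return sqrt(DBL_EPSILON);
}
```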
Re: [Rd] Correct NAMESPACE approach when writing an S3 method for a generic in another package
I think rgl should be in Depends. You are providing a method for a generic function from another package. In order to use your method, you want the user to be able to call the generic function without scoping (i.e. without calling rgl::plot3d). The generic therefore needs to be on the search path, which means the package that provides it should be listed in Depends in the DESCRIPTION file. Martyn On Fri, 2013-08-23 at 22:01 -0600, Gavin Simpson wrote: > Dear List, > > In one of my packages I have an S3 method for the plot3d generic > function from package rgl. I am trying to streamline my Depends > entries but don't know how to have > > plot3d(foo) > > in the examples section for the plot3d method in my package, without > rgl being in Depends. > > Note that I importFrom(rgl, plot3d) and register my S3 method via > S3method() in the NAMESPACE. > > If rgl is not in Depends but in Imports, I see this when checking the package > > > ## 3D plot of data with curve superimposed > > plot3d(aber.pc, abernethy2) > Error: could not find function "plot3d" > > I presume this is because rgl's namespace is only loaded but the > package is not attached to the search path. > > Writing R extensions indicates that one can export from a namespace > something that was imported from another package namespace. I thought > that might help the situation, and now the code doesn't raise an > error, I get > > * checking for missing documentation entries ... WARNING > Undocumented code objects: > ‘plot3d’ > All user-level objects in a package should have documentation entries. > See the chapter ‘Writing R documentation files’ in the ‘Writing R > Extensions’ manual. > > as I don't document plot3d() itself. > > What is the recommended combination of Depends and Imports plus > NAMESPACE directives etc that one should use in this situation? Or am > I missing something else? 
> > I have a similar issue with my package including an S3 method for a > generic in the lattice package, so if possible I could get rid of both > of these from Depends if I can solve the above issue. > > Thanks in advance. > > Gavin > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
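A minimal sketch of the arrangement Martyn describes, for a hypothetical package (call it mypkg) that defines a plot3d method for class myclass — the package and class names are illustrative, not from the thread:

```
## DESCRIPTION (excerpt): put rgl in Depends so the generic plot3d()
## is attached to the user's search path along with mypkg
Depends: rgl

## NAMESPACE: import the generic and register the S3 method
importFrom(rgl, plot3d)
S3method(plot3d, myclass)
```

With this layout, library(mypkg) also attaches rgl, so a plain plot3d(foo) call dispatches to the package's method without the user writing rgl::plot3d.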
Re: [Rd] C++ debugging help needed
In C++, everything goes in the global namespace unless the programmer explicitly creates one. So when you dynamically load two dynamic shared libraries with a "Shape" object they clash. The solution here is to put namespace rgl { ... } around your class definitions in the rglm package, and using rgl::Shape at the top of any source file that refers to rgl Shape. Likewise, the igraph package should declare shape in the "igraph" namespace. Martyn On Wed, 2013-10-02 at 11:05 -0400, Duncan Murdoch wrote: > A quick addition: > > If I add > > #define Shape rglShape > > near the top of my Shape.hpp header file, the bug goes away. But I > can't believe that would be necessary. These are in separate packages, > don't they have separate namespaces in C++? How can I avoid clashes > with types declared in other packages in the future? > > Duncan Murdoch > > On 02/10/2013 10:50 AM, Duncan Murdoch wrote: > > I've had reports lately about segfaults in the rgl package. I've only > > been able to reproduce these on Linux. I am not so familiar with C++ > > details, so I have a couple of questions way down below. But first some > > background info. > > > >One recipe to recreate the crash works with a new version 5.0-1 of the > > mixOmics package: > > > > > library(mixOmics) > > > example(pca) > > > > This crashes with messages like this: > > > > Program received signal SIGSEGV, Segmentation fault. 
> > 0x728aafd9 in __exchange_and_add (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:48 > > 48{ return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } > > > > The call stack ends with this: > > > > #0 0x728aafd9 in __exchange_and_add (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:48 > > #1 __exchange_and_add_dispatch (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:81 > > #2 _M_dispose (__a=..., this=0x7f7f7f7fffe7) > > at /usr/include/c++/4.7/bits/basic_string.h:242 > > #3 ~basic_string (this=0x15f8770, __in_chrg=) > > at /usr/include/c++/4.7/bits/basic_string.h:536 > > #4 Shape::~Shape (this=0x15f8760, __in_chrg=) at > > Shape.cpp:13 > > #5 0x722df50b in ~Background (this=0x15f8760, > > __in_chrg=) at Background.hpp:15 > > #6 Background::~Background (this=0x15f8760, __in_chrg=) > > at Background.hpp:15 > > > > Up to entry #4 this all looks normal. If I go into that stack frame, I > > see this: > > > > > > (gdb) up > > #4 Shape::~Shape (this=0x15f8760, __in_chrg=) at > > Shape.cpp:13 > > warning: Source file is more recent than executable. > > 13blended(in_material.isTransparent()) > > (gdb) p this > > $9 = (Shape * const) 0x15f8760 > > (gdb) p *this > > $10 = {_vptr.Shape = 0x72d8e290, mName = 6, mType = { > > static npos = , > > _M_dataplus = {> = > > {<__gnu_cxx::new_allocator> = > > {}, }, > > _M_p = 0x7f7f7f7f > bounds>}}, > > mShapeColor = {mRed = -1.4044474254567505e+306, > > mGreen = -1.4044477603031902e+306, mBlue = 4.24399170841135e-314, > > mTransparent = 0}, mSpecularReflectivity = 0.0078125, > > mSpecularSize = 1065353216, mDiffuseReflectivity = 0.007812501848093234, > > mAmbientReflectivity = 0} > > > > The things displayed in *this are all wrong. Those field names come > > from the Shape object in the igraph package, not the Shape object in the > > rgl package. The mixOmics package uses both. 
> > > > My questions: > > > > - Has my code somehow got mixed up with the igraph code, so I really do > > have a call out to igraph's Shape::~Shape instead of rgl's > > Shape::~Shape, or is this just bad info being given to me by gdb? > > > > - If I really do have calls to the wrong destructor in there, how do I > > avoid this? > > > > Duncan Murdoch > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
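The namespace fix Martyn suggests can be seen in a few lines. This is a standalone sketch — the real Shape classes in rgl and igraph are of course larger — showing that once each package wraps its classes in its own C++ namespace, the two Shape types become distinct symbols and can coexist:

```cpp
#include <string>

// Each package wraps its classes in its own namespace, so the two
// Shape types mangle to different symbols and no longer collide
// when both shared libraries are loaded into the same process.
namespace rgl {
class Shape {
public:
    std::string name() const { return "rgl::Shape"; }
};
}

namespace igraph {
class Shape {
public:
    std::string name() const { return "igraph::Shape"; }
};
}
```

Without the namespaces, both classes mangle to the same `Shape::~Shape` symbol, and on Linux the dynamic linker is free to resolve a destructor call in one library against the definition in the other — exactly the crash seen in the stack trace.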
Re: [Rd] version comparison puzzle
It's an underflow problem. When comparing versions, "a.b.c" is converted first to the integer vector c(a,b,c) and then to the double precision value a + b/base + c/base^2 where base is 1 greater than the largest integer component of any of the versions: i.e. 9912 in this case. The last term is then smaller than the machine precision so you can't tell the difference between 1.0.4 and 1.0.5. Martyn On Wed, 2013-10-02 at 23:41 -0400, Ben Bolker wrote: > Can anyone explain what I'm missing here? > > max(pp1 <- package_version(c("0.9911.3","1.0.4","1.0.5"))) > ## [1] ‘1.0.4’ > > max(pp2 <- package_version(c("1.0.3","1.0.4","1.0.5"))) > ## [1] ‘1.0.5’ > > I've looked at ?package_version , to no avail. > > Since max() goes to .Primitive("max") > I'm having trouble figuring out where it goes from there: > I **think** this is related to ?xtfrm , which goes to > .encode_numeric_version, which is doing something I really > don't understand (it's in base/R/version.R ...) > > .encode_numeric_version(pp1) > ## [1] 1 1 1 > ## attr(,"base") > ## [1] 9912 > ## attr(,"lens") > ## [1] 3 3 3 > ## attr(,".classes") > ## [1] "package_version" "numeric_version" > > .encode_numeric_version(pp2) > ## [1] 1.083333 1.111111 1.138889 > ## attr(,"base") > ## [1] 6 > ## attr(,"lens") > ## [1] 3 3 3 > ## attr(,".classes") > ## [1] "package_version" "numeric_version" > > sessionInfo() > R Under development (unstable) (2013-09-09 r63889) > Platform: i686-pc-linux-gnu (32-bit) > > [snip] > > attached base packages: > [1] stats graphics grDevices utils datasets methods base > > loaded via a namespace (and not attached): > [1] compiler_3.1.0 tools_3.1.0 > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] C++ debugging help needed
Yes, on reflection it's an ABI problem on Linux (use of PIC code in shared libraries means that any symbol can be interposed). Using namespaces isn't really the answer because that's an API issue. I think what you really need to do is control the visibility of your classes and functions so that everything is hidden except for the entry points you call from R (Writing R Extensions section 6.15). This should stop the symbol collision because hidden functions are resolved inside the shared object instead of going through a lookup table that can be overwritten by someone else's package. Martyn On Fri, 2013-10-04 at 15:05 -0400, Duncan Murdoch wrote: > I have now got two "solutions" to this. The rgl version currently on > CRAN does a simple rename to avoid the name clash. A later version, > still only on R-forge, puts most objects into a namespace called "rgl". > (The old code had two small namespaces "gui" and "lib"; they are gone now.) > > I am not yet confident that the current version with namespaces will > compile on all platforms; it seems much more fragile this way, with > errors showing up on Linux that were not errors on Windows. (E.g. > sometimes I included a header with declarations in the rgl namespace > followed by system header files, and the latter acted differently than > they did when placed before the rgl header file, apparently declaring > the system functions to be in a new anonymous namespace.) > > rgl also includes some C code from the gl2ps project and some C++ code > from FTGL; I didn't put those into the rgl namespace. So there are > still possibilities for clashes if anyone else uses those. > > I'm still surprised that anything with plugins works on Unix-alike > systems with such bizarre linking rules. This is one of those few cases > where the Windows design seems clearly superior. > > Duncan Murdoch > > > On 02/10/2013 10:50 AM, Duncan Murdoch wrote: > > I've had reports lately about segfaults in the rgl package. 
I've only > > been able to reproduce these on Linux. I am not so familiar with C++ > > details, so I have a couple of questions way down below. But first some > > background info. > > > >One recipe to recreate the crash works with a new version 5.0-1 of the > > mixOmics package: > > > > > library(mixOmics) > > > example(pca) > > > > This crashes with messages like this: > > > > Program received signal SIGSEGV, Segmentation fault. > > 0x728aafd9 in __exchange_and_add (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:48 > > 48{ return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); } > > > > The call stack ends with this: > > > > #0 0x728aafd9 in __exchange_and_add (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:48 > > #1 __exchange_and_add_dispatch (__mem=0x7f7f7f77, > > __val=) at /usr/include/c++/4.7/ext/atomicity.h:81 > > #2 _M_dispose (__a=..., this=0x7f7f7f7fffe7) > > at /usr/include/c++/4.7/bits/basic_string.h:242 > > #3 ~basic_string (this=0x15f8770, __in_chrg=) > > at /usr/include/c++/4.7/bits/basic_string.h:536 > > #4 Shape::~Shape (this=0x15f8760, __in_chrg=) at > > Shape.cpp:13 > > #5 0x722df50b in ~Background (this=0x15f8760, > > __in_chrg=) at Background.hpp:15 > > #6 Background::~Background (this=0x15f8760, __in_chrg=) > > at Background.hpp:15 > > > > Up to entry #4 this all looks normal. If I go into that stack frame, I > > see this: > > > > > > (gdb) up > > #4 Shape::~Shape (this=0x15f8760, __in_chrg=) at > > Shape.cpp:13 > > warning: Source file is more recent than executable. 
> > 13blended(in_material.isTransparent()) > > (gdb) p this > > $9 = (Shape * const) 0x15f8760 > > (gdb) p *this > > $10 = {_vptr.Shape = 0x72d8e290, mName = 6, mType = { > > static npos = , > > _M_dataplus = {> = > > {<__gnu_cxx::new_allocator> = > > {}, }, > > _M_p = 0x7f7f7f7f > bounds>}}, > > mShapeColor = {mRed = -1.4044474254567505e+306, > > mGreen = -1.4044477603031902e+306, mBlue = 4.24399170841135e-314, > > mTransparent = 0}, mSpecularReflectivity = 0.0078125, > > mSpecularSize = 1065353216, mDiffuseReflectivity = 0.007812501848093234, > > mAmbientReflectivity = 0} > > > > The things displayed in *this are all wrong. Those field names come > > from the Shape object in the igraph package, not the Shape object in the > > rgl package. The mixOmics package uses both. > > > > My questions: > > > > - Has my code somehow got mixed up with the igraph code, so I really do > > have a call out to igraph's Shape::~Shape instead of rgl's > > Shape::~Shape, or is this just bad info being given to me by gdb? > > > > - If I really do have calls to the wrong destructor in there, how do I > > avoid this? > > > > Duncan Murdoch > > __ > R-devel@r-project.org mailing list > https://stat.et
Re: [Rd] R 3.1.0 and C++11
I don't see any harm in allowing optional C++11 support, and it is no trouble to update the documentation to acknowledge the existence of C++11 conforming compilers. However, the questions of what is possible, what is recommended, and what is required for CRAN submissions are distinct. I have a couple of comments on the macro: a) Your version implies mandatory C++11 support. One needs AX_CXX_COMPILE_STDCXX_11(noext,optional) for optional support. b) I find it unhelpful that the macro picks up the partial C++11 support in gcc 4.7 via the -std=c++0x flag, so I would edit (and rename) the macro to remove this. Martyn From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on behalf of Dirk Eddelbuettel [e...@debian.org] Sent: 07 October 2013 01:54 To: R-devel org Subject: [Rd] R 3.1.0 and C++11 I would like to bring up two issues concerning C++11. First, the R-devel manuals contain incorrect statements regarding C++11: i) R-exts.texi: Although there is a 2011 version of the C++ standard, it is not yet fully implemented (nor is it likely to be widely available for some years) and portable C++ code needs to follow the 1998 standard (and not use features from C99). ii) R-ints.texi: The type `R_xlen_t' is made available to packages in C header `Rinternals.h': this should be fine in C code since C99 is required. People do try to use R internals in C++, but C++98 compilers are not required to support these types (and there are currently no C++11 compilers). But since the summer we have g++ and clang with working C++11 implementations: iii) g++ implements C++11: http://isocpp.org/blog/2013/05/gcc-4.8.1-released-c11-feature-complete iv) llvm/clang++ implements C++11: http://isocpp.org/blog/2013/06/llvm-3.3-is-released I would suggest to change the wording prior to the release of R 3.1.0 next year as it is likely that even Microsoft will by then have a fully-conformant compiler (per Herb Sutter at a recent talk in Chicago). 
If it helped, I would be glad to provide minimal patches to the two .texi files. Moreover, the C++ Standards Group is working towards closing the delta between standards being adopted, and compilers being released. They expect corresponding compilers for C++14 (a "patch" release for C++11 expected to be ready next spring) to be available within a year---possibly during 2014. Second, the current R Policy regarding C++11 is unnecessarily strict. I would propose to treat the availability of C++11 extensions more like the availability of OpenMP: something which configure can probe at build time, and which can be deployed later via suitable #ifdef tests. As a proof of concept, I added this macro from the autoconf archive to the m4/ directory of R-devel: http://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx_11.html and made a one-line change to configure.ac (indented two spaces just for email) edd@max:~/svn/r-devel$ svn di configure.ac Index: configure.ac === --- configure.ac (revision 64031) +++ configure.ac (working copy) @@ -906,6 +906,7 @@ AC_LANG_PUSH(C++) AC_OPENMP +AX_CXX_COMPILE_STDCXX_11(noext) AC_LANG_POP(C++) ### *** ObjC compiler edd@max:~/svn/r-devel$ After running 'aclocal -Im4; autoheader; autoconf', the configure test then properly detected C++11 (or, in one case, C++0x) on four different compilers: [ g++-4.7 case, Ubuntu 13.04 ] checking whether g++ supports C++11 features by default... no checking whether g++ supports C++11 features with -std=c++11... no checking whether g++ supports C++11 features with -std=c++0x... yes [ CC=clang CXX=clang++ (3.1), Ubuntu 13.04 ] checking whether clang++ accepts -M for generating dependencies... yes checking for clang++ option to support OpenMP... unsupported checking whether clang++ supports C++11 features by default... no checking whether clang++ supports C++11 features with -std=c++11... yes [ g++-4.8 case, Debian testing ] checking whether g++ supports C++11 features by default... 
no checking whether g++ supports C++11 features with -std=c++11... yes [ CC=clang CXX=clang++ (3.2), Debian testing ] checking whether clang++ supports C++11 features by default... no checking whether clang++ supports C++11 features with -std=c++11... yes It would be easy to add another #define to config.h.in. And of course, I understand that R Core is comprised primarily of C programmers. But to those of us who lean more towards C++ than C, the step towards C++11 is a big one, and a very exciting one. More and more upstream authors are considering right now whether to switch to C++11-only. I expect such switches to become more common as time passes. C++11 provides a lot -- and preventing programmers from using these tools cannot be in our interest. I
Re: [Rd] Multivariate time series in R 3 vs R 2
This has nothing to do with changes in base R. It is due to changes in the dependent packages. These changes mean that when you call lapply it does not dispatch the right as.list method. The method you want (as.list.ts) is provided by the zoo package. It splits a multivariate time series into a list of univariate time series in the way you are expecting. Your package mar1s used to depend on zoo indirectly through the fda package. But now fda does not depend on zoo, it only suggests it. So now, when you load your package, zoo is not on the search path and you get the default as.list method, which produces the bad results. The solution is to add "Imports: zoo" to your DESCRIPTION file and "import(zoo)" to your NAMESPACE file. Martyn On Wed, 2013-10-23 at 22:56 +0400, Андрей Парамонов wrote: > Hello! > > Recently I got report that my package mar1s doesn't pass checks any more on > R 3.0.2. I started to investigate and found the following difference in > multivariate time series handling in R 3.0.2 compared to R 2 (I've checked > on 2.14.0). > > Suppose I wish to calculate seasonal component for time series. In case of > multivariate time series, I wish to process each column independently. 
Let > f be a simple (trivial) model of seasonal component: > > f <- function(x) > return(ts(rep(0, length(x)), start = 0, frequency = frequency(x))) > > In previous versions of R, I used the following compact and efficient > expression to calculate seasonal component: > > y <- do.call(cbind, lapply(x, f)) > > It worked equally good for univariate and multivariate time series: > > > R.Version()$version.string > [1] "R version 2.14.0 (2011-10-31)" > > t <- ts(1:10, start = 100, frequency = 10) > > > > x <- t > > y <- do.call(cbind, lapply(x, f)) > > y > Time Series: > Start = c(0, 1) > End = c(0, 10) > Frequency = 10 > [1] 0 0 0 0 0 0 0 0 0 0 > > > > x <- cbind(t, t) > > y <- do.call(cbind, lapply(x, f)) > > y > Time Series: > Start = c(0, 1) > End = c(0, 10) > Frequency = 10 > t t > 0.0 0 0 > 0.1 0 0 > 0.2 0 0 > 0.3 0 0 > 0.4 0 0 > 0.5 0 0 > 0.6 0 0 > 0.7 0 0 > 0.8 0 0 > 0.9 0 0 > > But in version 3, I get some frustrating results: > > > R.Version()$version.string > [1] "R version 3.0.2 (2013-09-25)" > > t <- ts(1:10, start = 100, frequency = 10) > > > > x <- t > > y <- do.call(cbind, lapply(x, f)) > > y > Time Series: > Start = 0 > End = 0 > Frequency = 1 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > > > > x <- cbind(t, t) > > y <- do.call(cbind, lapply(x, f)) > > y > Time Series: > Start = 0 > End = 0 > Frequency = 1 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > 
structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0 0 > structure(0, .Tsp = c(0, 0, 1), class = "ts") > 0
Re: [Rd] valgrind and C++
I think the server that runs the valgrind checks is still running the old version of your package (2.17) not the new one (2.18). Wait for an update. Martyn On Mon, 2014-03-17 at 17:26 +, Jarrod Hadfield wrote: > Hi, > > I am sorry if this is perceived as a C++ question rather than an R > question. After uploading an R library to CRAN (MCMCglmm) the C++ code > failed to pass the memory checks. The errors come in pairs like: > > Mismatched free() / delete / delete [] > at 0x4A077E6: free (vg_replace_malloc.c:446) > by 0x144FA28E: MCMCglmm (MCMCglmm.cc:2184) > > > Address 0x129850c0 is 0 bytes inside a block of size 4 alloc'd > at 0x4A07CE4: operator new[](unsigned long) (vg_replace_malloc.c:363) > by 0x144F12B7: MCMCglmm (MCMCglmm.cc:99) > > which is associated with lines allocating and freeing memory (nG is an > integer): > > int *keep = new int [nG]; > > and > > delete [] keep; > > To me this looks fine, and on my machine (Scientific Linux 6.4) using > gcc 4.4.7-3 and valgrind 1:3.8.1-3.2 I get no such errors. Its not > clear to me which flavour of Linux or compiler the CRAN team used, > although from MCMCglmm-Ex.Rout I can see the same version of valgrind > was used. Any insight would be very welcome. > > Kind Regards, > > Jarrod > > > > > > > > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] CXX_STD and configure.ac in packages
Hi Martin, Thanks for the patch. I have applied it. I also added CXX1X and friends to the list of approved variables for R CMD config. So you can now query the existence of C++11 support with `R CMD config CXX1X` (It is empty if C++11 support is not available) and then take appropriate action in your configure script if, in Dirk's words, you want to do the configure dance. The philosophy underlying C++ support in R is that there are only two standards - C++98 and C++11 - and that you should write to one of those standards. Nobody should be writing new code that uses TR1 extensions now: they are superseded by the new standard. The map and unordered_map classes are a corner case, as they offer the same functionality but latter has much better complexity guarantees, so it is tempting to use it when available. But from a global perspective you should think of C++98 and C++11 as two different languages. Martyn From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on behalf of Romain Francois [rom...@r-enthusiasts.com] Sent: 31 March 2014 08:22 To: Martin Morgan Cc: R-devel Subject: Re: [Rd] CXX_STD and configure.ac in packages Hi, My advice would be to use SystemRequirements: C++11 As is definitely a part of C++11, assuming this version of the standard gives it to you. Your package may not compile on platforms where a C++11 compiler is not available, but perhaps if this becomes a pattern, then such compilers will start to be available, as in the current version of OSX and recent enough versions of various linux distributions. The subset of feature that the version of gcc gives you with Rtools might be enough. Alternatively, if you use Rcpp, you can use the RCPP_UNORDERED_MAP macro which will expand to either unordered_map or tr1::unordered_map, all the condition compiling is done in Rcpp. 
Romain Le 30 mars 2014 à 21:50, Martin Morgan a écrit : > In C++ code for use in a R-3.1.0 package, my specific problem is that I would > like to use if it is available, or if > not, or if all else fails. > > I (think I) can accomplish this with configure.ac as > > AC_INIT("DESCRIPTION") > > CXX=`"${R_HOME}/bin/R" CMD config CXX` > CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS` > > AC_CONFIG_HEADERS([src/config.h]) > AC_LANG(C++) > AC_CHECK_HEADERS([unordered_map tr1/unordered_map]) > AC_OUTPUT > > Use of configure.ac does not seem to be entirely consistent with section > 1.2.4 of Writing R Extensions, where one is advised that to use C++(11? see > below) code one should > >CXX_STD = CXX11 > > in Makevars(.win). My code does not require a compiler that supports the full > C++11 feature set. In addition, I do not understand the logic of setting a > variable that influences compiler flags in Makevars -- configure.ac will see > a compiler with inaccurate flags. > > Is use of configure.ac orthogonal to setting CXX_STD=CXX11? > > Some minor typos: > > /R-3-1-branch$ svn diff > Index: doc/manual/R-exts.texi > === > --- doc/manual/R-exts.texi(revision 65339) > +++ doc/manual/R-exts.texi(working copy) > @@ -2250,7 +2250,7 @@ > @subsection Using C++11 code > > @R{} can be built without a C++ compiler although one is available > -(but not necessarily installed) or all known @R{} platforms. > +(but not necessarily installed) on all known @R{} platforms. > For full portability across platforms, all > that can be assumed is approximate support for the C++98 standard (the > widely used @command{g++} deviates considerably from the standard). > @@ -2272,7 +2272,7 @@ > support a flag @option{-std=c++0x}, but the latter only provides partial > support for the C++11 standard. 
> > -In order to use C++ code in a package, the package's @file{Makevars} > +In order to use C++11 code in a package, the package's @file{Makevars} > file (or @file{Makevars.win} on Windows) should include the line > > @example > @@ -2329,7 +2329,7 @@ > anything other than the GNU version of C++98 and GNU extensions (which > include TR1). The default compiler on Windows is GCC 4.6.x and supports > the @option{-std=c++0x} flag and some C++11 features (see > -@uref{http://gcc.gnu.org/gcc-4.6/cxx0x_status.html}. On these > +@uref{http://gcc.gnu.org/gcc-4.6/cxx0x_status.html}). On these > platforms, it is necessary to select a different compiler for C++11, as > described above, @emph{via} personal @file{Makevars} files. For > example, on OS X 10.7 or later one could select @command{clang++}. > > -- > Computational Biology / Fred Hutchinson Cancer Research Center > 1100 Fairview Ave. N. > PO Box 19024 Seattle, WA 98109 > > Location: Arnold Building M1 B861 > Phone: (206) 667-2793 > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://
Re: [Rd] CXX_STD and configure.ac in packages
On Mon, 2014-03-31 at 07:09 +, Martyn Plummer wrote: > Hi Martin, > > Thanks for the patch. I have applied it. I also added CXX1X and friends to > the list of approved variables for R CMD config. > So you can now query the existence of C++11 support with `R CMD config CXX1X` > (It is empty if C++11 support is not available) > and then take appropriate action in your configure script if, in Dirk's > words, you want to do the configure dance. > > The philosophy underlying C++ support in R is that there are only two > standards - C++98 and C++11 - and that > you should write to one of those standards. I should add a clarification. The way I wrote this makes it sound like an even-handed choice, but only C++98 has cross-platform support. If you use C++11 then many users will not currently be able to use your code. > Nobody should be writing new code that uses TR1 extensions now: they are > superseded by the new standard. > > The map and unordered_map classes are a corner case, as they offer the same > functionality but latter has much better > complexity guarantees, so it is tempting to use it when available. But from > a global perspective you should think of > C++98 and C++11 as two different languages. > > Martyn > > > > From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on behalf > of Romain Francois [rom...@r-enthusiasts.com] > Sent: 31 March 2014 08:22 > To: Martin Morgan > Cc: R-devel > Subject: Re: [Rd] CXX_STD and configure.ac in packages > > Hi, > > My advice would be to use SystemRequirements: C++11 > > As is definitely a part of C++11, assuming this version of > the standard gives it to you. Your package may not compile on platforms where > a C++11 compiler is not available, but perhaps if this becomes a pattern, > then such compilers will start to be available, as in the current version of > OSX and recent enough versions of various linux distributions. 
> > The subset of feature that the version of gcc gives you with Rtools might be > enough. > > Alternatively, if you use Rcpp, you can use the RCPP_UNORDERED_MAP macro > which will expand to either unordered_map or tr1::unordered_map, all the > condition compiling is done in Rcpp. > > Romain > > Le 30 mars 2014 à 21:50, Martin Morgan a écrit : > > > In C++ code for use in a R-3.1.0 package, my specific problem is that I > > would like to use if it is available, or > > if not, or if all else fails. > > > > I (think I) can accomplish this with configure.ac as > > > > AC_INIT("DESCRIPTION") > > > > CXX=`"${R_HOME}/bin/R" CMD config CXX` > > CXXFLAGS=`"${R_HOME}/bin/R" CMD config CXXFLAGS` > > > > AC_CONFIG_HEADERS([src/config.h]) > > AC_LANG(C++) > > AC_CHECK_HEADERS([unordered_map tr1/unordered_map]) > > AC_OUTPUT > > > > Use of configure.ac does not seem to be entirely consistent with section > > 1.2.4 of Writing R Extensions, where one is advised that to use C++(11? see > > below) code one should > > > >CXX_STD = CXX11 > > > > in Makevars(.win). My code does not require a compiler that supports the > > full C++11 feature set. In addition, I do not understand the logic of > > setting a variable that influences compiler flags in Makevars -- > > configure.ac will see a compiler with inaccurate flags. > > > > Is use of configure.ac orthogonal to setting CXX_STD=CXX11? > > > > Some minor typos: > > > > /R-3-1-branch$ svn diff > > Index: doc/manual/R-exts.texi > > === > > --- doc/manual/R-exts.texi(revision 65339) > > +++ doc/manual/R-exts.texi(working copy) > > @@ -2250,7 +2250,7 @@ > > @subsection Using C++11 code > > > > @R{} can be built without a C++ compiler although one is available > > -(but not necessarily installed) or all known @R{} platforms. > > +(but not necessarily installed) on all known @R{} platforms. 
> > For full portability across platforms, all > > that can be assumed is approximate support for the C++98 standard (the > > widely used @command{g++} deviates considerably from the standard). > > @@ -2272,7 +2272,7 @@ > > support a flag @option{-std=c++0x}, but the latter only provides partial > > support for the C++11 standard. > > > > -In order to use C++ code in a package, the package's @file{Makevars} > > +In order to use C++11 code in a package, the package's @file{Makevars} > > file (or @file{Makevars.win} on Windows) should include the line
Re: [Rd] R 3.1.0 and C++11
On Thu, 2014-04-10 at 07:22 -0500, Dirk Eddelbuettel wrote: > On 2 December 2013 at 07:04, Dirk Eddelbuettel wrote: > | > | Following up on the thread spawned a while back, I just wanted to say that I > | appreciate today's RSS serving of R-devel NEWS: > | > |CHANGES IN R-devel PACKAGE INSTALLATION > | > |There is _experimental_ support for compiling C++11 code in packages. The > |file ‘src/Makevars’ or ‘src/Makevars.win’ should define the macro > |‘USE_CXX11 = true’. Where needed, an alternative C++11 compiler can be > |specified by setting macros ‘CXX11’, ‘CXX11FLAGS’ and so on, either when > R > |is configured or in a personal ‘Makevars’ file. (The default is to use > |‘$(CXX) -std=c++11’.) > | > | Thanks for initial and incremental changes. They are appreciated. > > And now a big thanks to Martyn and anybody else in R Core who pushed this > through to the R 3.1.0 release this morning. Credit it due to Brian here. Martyn > Having Makevars to let us say CXX_STD = CXX11 (plus the other variants) is a > real step forward, and relying on the information gleaned at configuration > time for R is sensible too. > > It's really good to have this, so thanks again. > > Dirk > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 3.1.0 and C++11
On Thu, 2014-04-10 at 11:14 -0400, Gabor Grothendieck wrote: > On Tue, Oct 29, 2013 at 1:58 AM, wrote: > > Le 2013-10-29 03:01, Whit Armstrong a écrit : > > > >> I would love to see optional c++0x support added for R. > > > > > > c++0x was the name given for when this was in development. Now c++11 is a > > published standard backed by implementations by major compilers. > > people need to stop calling it c++0x > > > > > >> If there is anything I can do to help, please let me know. > > > > > > Come here https://github.com/romainfrancois/cpp11_article where I'm writing > > an article on C++11 and what would be the benefits. > > > > Unless you are willing to do it yourself currently Rtools on Windows uses > g++ 4.6.3 and that requires that one specify -std=c++0x or -std=gnu++0x . > > Ubuntu 12.04 LTS also provides g++ 4.6.3. > > g++ 4.7 is the first version of g++ that accepts -std=c++11 or -std=gnu++11 > > More info at: > http://gcc.gnu.org/projects/cxx0x.html The R configure script is permissive and will enable "C++11" support if your compiler accepts -std=c++0x. Obviously you will only get partial support for the C++11 standard (But this is also true of some compilers that accept -std=c++11). You may be OK if you just want C99 features, which were missing from the C++98 standard, and features previously introduced in the TR1 extension. But there are no guarantees. Cross-platform support for C++11 is going to remain poor for some time to come, I'm afraid. Martyn --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] large integer values
On Wed, 2014-05-14 at 18:17 +0300, Adrian Dușa wrote: > On Wed, May 14, 2014 at 5:35 PM, Simon Urbanek > wrote: > > > [...] > > > > How do you print them? It seems like you're printing 32-bit value instead > > ... (powers of 2 are simply shifts of 1). > > > > > I am simply using Rprintf(): > > long long int power[lgth]; > power[lgth - 1] = 1; > Rprintf("power: %d", power[lgth - 1]); > for (j = 1; j < lgth; j++) { > power[lgth - j - 1] = 2*power[lgth - j]; > Rprintf(", %d", power[lgth - j - 1]); > } > > > Basically, I need them in reversed order (hence the inverse indexing), but > the values are nonetheless the same. > Adrian > > PS: also tried long long int, same result... Your numbers are being coerced to int when you print them. Try the format ", %lld" instead. Martyn __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check and DESCRIPTION file with Authors@R only
You need to run "R CMD build" on your package, then run "R CMD check" on the resulting tarball, as recommended in section 1.3.1 of the "Writing R Extensions" manual. The tarball will contain a version of the DESCRIPTION file with Author and Maintainer fields built from the Authors@R field. Martyn On Wed, 2014-06-11 at 06:34 -0500, Mathew McLean wrote: > Section 1.1 of R-exts mentions that the Maintainer and Author fields can be > omitted from the DESCRIPTION file if Authors@R is supplied. However, R CMD > check does not seem to like this. > > package.skeleton("foo") > desc <- readLines("foo/DESCRIPTION") > desc[6] <- "Authors@R: person('Mathew', 'McLean', email = 'n...@example.com', > role = c('aut', 'cre'))" > desc <- desc[-7] > writeLines(desc, "foo/DESCRIPTION") > system2("R", args = c("CMD", "check", "foo")) > * using log directory ‘/home/grad/mmclean/foo.Rcheck’ > * using R version 3.0.3 (2014-03-06) > * using platform: x86_64-unknown-linux-gnu (64-bit) > * using session charset: UTF-8 > * checking for file ‘foo/DESCRIPTION’ ... ERROR > Required fields missing or empty: > ‘Author’ ‘Maintainer’ > > sessionInfo() > R version 3.0.3 (2014-03-06) > Platform: x86_64-unknown-linux-gnu (64-bit) > > locale: > [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C > [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8 > [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8 > [7] LC_PAPER=en_US.UTF-8 LC_NAME=C > [9] LC_ADDRESS=C LC_TELEPHONE=C > [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C > > attached base packages: > [1] stats graphics grDevices utils datasets methods base > > > This also happens for R Under development (unstable) (2014-04-10 r65396) > Platform: x86_64-w64-mingw32/x64 (64-bit) > > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Is it possible to make install.packages compile source code on Unix but use shipped binary on Windows?
On Wed, 2014-06-18 at 16:10 +0200, Renaud Gaujoux wrote: > > Maybe. Read the documentation and sources for yourself (see below). > > Not working, at least in my hands, as it requires `sh`. > > > Yes, *and documented* > > True. I overlooked the beginning of the NB point. > > > (including that it should not be used for Windows executables). > > Yes, that's why I use the suggested procedure that uses > src/install.libs.R to copy compiled .exe files into bin/. > > So, eventually, I guess the answer to the original question is: no, > one cannot make install.packages skip compilation of a source package, > only if on Windows, without having Rtools installed -- and in PATH. But why would you want to? I don't understand why you are making life so hard for yourself. It isn't hard to set up Rtools on Windows and you only need to do it once. Then you build a binary package on your development system to distribute to your users. Without even considering any technical details there is a purely strategic issue here. If a system has been set up that is robust and widely tested, like the R packaging system, you are much better off working with it than trying to subvert it. Martyn > Renaud > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check warning with S3 method
When you provide a method for a generic function imported from another package then the generic must be on the search path. Otherwise if a user types "filter" the dispatch to "filter.test" will never occur. What is happening here is worse because "filter" is a non-generic function in the stats package which is always on the search path. As you note, using "Depends: dplyr" works because it attaches dplyr to the search path before your test package is loaded. If you use "Imports" instead then you need to re-export the generic "filter" function from your package namespace. You will also need to document the generic function in your package. A minimal functioning help page that cross-references to the dplyr package should be sufficient (Note that S4 generics get a free pass here and do not need to be documented when re-exported, but S3 generics do). Martyn On Tue, 2014-06-17 at 14:21 -0500, Winston Chang wrote: > I'm getting an R CMD check warning with a package (call it package A) > that defines an S3 method but not the generic. The generic is defined > in another package (package B). Package A imports the S3 generic from > B. And there's one additional detail: the generic overrides a function > in the stats package. > > I've created a minimal test package which reproduces the problem: > https://github.com/wch/s3methodtest > > In this case: > - the package imports dplyr, for the dplyr::filter S3 generic > - the package defines a S3 method filter.test > - it imports dplyr, which defines a filter S3 generic > > The warning doesn't occur when package dplyr is in Depends instead of > Imports. It also doesn't occur if the method is for a generic that > does not override an existing function like stats::filter. For > example, if instead of filter.test, I define select.test > (dplyr::select is also an S3 generic), then there's no warning. > > This warning seems incorrect. Is this a bug? 
I'm interested in > submitting the package to CRAN soon, so any advice on what to do is > appreciated. > > Thanks, > -Winston > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check warning with S3 method
Export filter in the NAMESPACE file *without copying it* in the R source code. From: Winston Chang [winstoncha...@gmail.com] Sent: 19 June 2014 21:28 To: Martyn Plummer Cc: r-devel@r-project.org Subject: Re: [Rd] R CMD check warning with S3 method Oh, I forgot to mention I tried a few other variations: = Re-export dplyr::filter: filter <- dplyr::filter NAMESPACE has: S3method(filter,test) export(filter) importFrom(dplyr,filter) Then the filter.test method from s3methodtest doesn't seem to get registered properly: library(s3methodtest) filter(structure(list(), class = "test")) # Error in UseMethod("filter") : # no applicable method for 'filter' applied to an object of class "test" = Create a brand new filter generic: filter <- function (.data, ...) UseMethod("filter") NAMESPACE has: S3method(filter,test) export(filter) importFrom(dplyr,filter) Then loading dplyr will cause its dplyr::filter generic to block methods registered for: s3methodtest::filter: library(s3methodtest) filter(structure(list(), class = "test")) # OK library(dplyr) filter(structure(list(), class = "test")) # Error in UseMethod("filter") : # no applicable method for 'filter' applied to an object of class "test" So either way, it doesn't work. -Winston On Thu, Jun 19, 2014 at 12:15 PM, Martyn Plummer mailto:plumm...@iarc.fr>> wrote: When you provide a method for a generic function imported from another package then the generic must be on the search path. Otherwise if a user types "filter" the dispatch to "filter.test" will never occur. What is happening here is worse because "filter" is a non-generic function in the stats package which is always on the search path. As you note, using "Depends: dplyr" works because it attaches dplyr to the search path before your test package is loaded. If you use "Imports" instead then you need to re-export the generic "filter" function from your package namespace. You will also need to document the generic function in your package. 
A minimal functioning help page that cross-references to the dplyr package should be sufficient (Note that S4 generics get a free pass here and do not need to be documented when re-exported, but S3 generics do). Martyn On Tue, 2014-06-17 at 14:21 -0500, Winston Chang wrote: > I'm getting an R CMD check warning with a package (call it package A) > that defines an S3 method but not the generic. The generic is defined > in another package (package B). Package A imports the S3 generic from > B. And there's one additional detail: the generic overrides a function > in the stats package. > > I've created a minimal test package which reproduces the problem: > https://github.com/wch/s3methodtest > > In this case: > - the package imports dplyr, for the dplyr::filter S3 generic > - the package defines a S3 method filter.test > - it imports dplyr, which defines a filter S3 generic > > The warning doesn't occur when package dplyr is in Depends instead of > Imports. It also doesn't occur if the method is for a generic that > does not override an existing function like stats::filter. For > example, if instead of filter.test, I define select.test > (dplyr::select is also an S3 generic), then there's no warning. > > This warning seems incorrect. Is this a bug? I'm interested in > submitting the package to CRAN soon, so any advice on what to do is > appreciated. > > Thanks, > -Winston > > __ > R-devel@r-project.org<mailto:R-devel@r-project.org> mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:11}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
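The fix Martyn describes — exporting the imported generic without copying it into your own R code — amounts to a NAMESPACE fragment like the following sketch (names taken from the thread; directive order is not significant):

```
# NAMESPACE of the package providing filter.test
importFrom(dplyr, filter)   # import the generic from dplyr
export(filter)              # re-export it so it is visible on the user's search path
S3method(filter, test)      # register the method for class "test"
```

The crucial point is that there is no `filter <- dplyr::filter` (and no new `UseMethod` definition) in the R sources; the export directive alone re-exports dplyr's own generic, so method registration stays attached to the one true generic.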
Re: [Rd] R CMD check warning with S3 method
On Fri, 2014-06-20 at 01:34 -0500, Yihui Xie wrote: > but note that you will have to document it even if it is a function > imported from another package... Let's not go round in circles. I already pointed this out in my earlier reply to Winston. It has been snipped from the thread history so I'll repeat it here: "You will also need to document the generic function in your package. A minimal functioning help page that cross-references to the dplyr package should be sufficient" Duncan gives more details on the cross reference in his reply. > Perhaps help() should be intelligent > enough to link the documentation of `FUN` from package A for package B > when B imports `FUN` from A, and exports it in B's NAMESPACE. The > package name of A may be obtained from > environmentName(environment(FUN)). At the moment, since R CMD check > will warn against the missing documentation of `FUN` in B, I have to > copy FUN.Rd from A to B in this case, which seems to be inefficient > and not a best way to maintain. Did I miss anything in the R-exts > manual? > Regards, > Yihui > -- > Yihui Xie > Web: http://yihui.name > > > On Fri, Jun 20, 2014 at 12:19 AM, Winston Chang > wrote: > > On Thu, Jun 19, 2014 at 3:15 PM, Martyn Plummer wrote: > > > >> Export filter in the NAMESPACE file *without copying it* in the R source > >> code. > >> > >> > > Ah, it works! Thank you! > > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
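The "minimal functioning help page" mentioned in the quoted advice could look like this hypothetical man/filter.Rd (the exact wording is an assumption; the cross-reference to dplyr is the essential part):

```
\name{filter}
\alias{filter}
\title{Filter rows (re-exported from dplyr)}
\description{
  This generic is imported from \pkg{dplyr} and re-exported.
  See \code{\link[dplyr]{filter}} for the full documentation.
}
```

This satisfies the R CMD check requirement that every exported object be documented, while keeping the real documentation in the package that owns the generic.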
Re: [Rd] R CMD check warning with S3 method
Right. I believe that last time this topic came up (August last year) you said that your preferred solution was to export the method but not the generic. I think that's a good idea. The reason that it doesn't work with filter is that when you created the generic function filter in the dplyr package you overloaded the non-generic filter function in the stats package. Since stats is a default package that is always on the search path, this means you created an ambiguity that must always be resolved by making the generic function visible. In particular, the QC code is finding the non-generic filter function when it tries to check that the filter.test (or filter.ggvis) method is working correctly, and that is why you get the warnings. Martyn From: Hadley Wickham [h.wick...@gmail.com] Sent: 20 June 2014 11:33 To: Martyn Plummer Cc: winstoncha...@gmail.com; r-devel@r-project.org Subject: Re: [Rd] R CMD check warning with S3 method > When you provide a method for a generic function imported from another > package then the generic must be on the search path. Otherwise if a user > types "filter" the dispatch to "filter.test" will never occur. Right, and this is as desired. If dplyr is not explicitly loaded by the user, filter.ggvis will never be called. I don't understand why ggvis should need to re-export filter from dplyr - the intent is that filter will be usable with ggvis objects, but you'll only have the filter generic available if you've loaded dplyr. Hadley -- http://had.co.nz/ --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] blas test problem
I can reproduce this. It is a bug in reference BLAS. With the R 3.1.0 release, Fedora changed from using the internal BLAS that comes with R to using external BLAS. But reference BLAS does not handle missing values correctly. I expect this has been true since at least 2010, when Brian patched the R copy of BLAS, but the bug has only been revealed by the Fedora policy change. I am taking this over to R-SIG-Fedora where we can discuss the issue with Tom Callaway from Red Hat. Martyn On Fri, 2014-07-04 at 12:13 +0100, lejeczek wrote: > later I tried plain-vanilla, well.. redhats' and derivatives > default packages and they all fail: > > > ## PR#4582 %*% with NAs > > stopifnot(is.na(NA %*% 0), is.na(0 %*% NA)) > > ## depended on the BLAS in use. > > > > > > ## found from fallback test in slam 0.1-15 > > ## most likely indicates an inadequate BLAS. > > x <- matrix(c(1, 0, NA, 1), 2, 2) > > y <- matrix(c(1, 0, 0, 2, 1, 0), 3, 2) > > (z <- tcrossprod(x, y)) > [,1] [,2] [,3] > [1,] NA NA 0 > [2,] 2 1 0 > > stopifnot(identical(z, x %*% t(y))) > Error: identical(z, x %*% t(y)) is not TRUE > Execution halted > > > I've tried scientificLinux, Centos, Oracle > all versions of R => 3.0 these linux distribution provide > hardware are AMD various CPU based platform > > > On 30/06/14 10:45, peter dalgaard wrote: > > It is not clear what you mean: > > > > The quoted page lists particular AMD BLAS versions that fail R's regression > > test. > > > > Other builds of R would run the regression test during building and you can > > run them yourself if you get the source code (for good measure, use the > > current version, not one from a 2011 web posting, i.e., fetch say > > https://svn.r-project.org/R/branches/R-3-1-branch/tests/reg-BLAS.R). > > > > E.g., for me > > > > Peters-iMac:R pd$ ../BUILD/bin/R --vanilla < tests/reg-BLAS.R > > ... normal output, no errors ... > > > > There is some risk that binary builds of R on one machine will fail on > > another.
If this happens, it could be quite serious, so developers would > > want to know. However "most...seem to fail" is not enough to act upon. What > > exactly did you do, on which computing platform, and what happened that > > makes you believe that it had failed? > > > > -pd > > > > On 27 Jun 2014, at 13:38 , lejeczek wrote: > > > >> dear developers > >> > >> I myself am not a prog-devel, I found this > >> > >> http://devgurus.amd.com/message/1255852#1255852 > >> > >> Most R compilations/installations I use seem to fail this test, is this a > >> problem and if yes then how serious is it? > >> > >> regards > >> > >> __ > >> R-devel@r-project.org mailing list > >> https://stat.ethz.ch/mailman/listinfo/r-devel > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Replace isnan and lgamma in Fortran subroutine in R package
On Tue, 2014-09-23 at 07:43 +0200, Berend Hasselman wrote: > On 23-09-2014, at 00:33, Wang, Zhu wrote: > > > Hello, > > > > I submitted a package which used Fortran functions isnan and lgamma. > > However, I was told that: > > > > isnan and lgamma are not Fortran 95 functions. > > > > I was asked to write 'cross-platform portable code' and so should not be > > writing GNU extensions to Fortran. > > > > See http://cran.r-project.org/web/checks/check_results_mpath.html, which > > will shortly show installation failures under Solaris. > > > > I will appreciate advice on how to replace these two functions to avoid > > failure on some platforms. > > > > I don’t know about lgamma. > > Instead of isnan you could use Lapack’s logical function disnan to test for > NaN (it’s in lapack 3.4.2; I don’t know about earlier versions). > > Another way would be to write a C function using functions provided by R to > test for NaN. > That function should be declared with F77_NAME so that it can be called from > a Fortran routine. > > I haven’t tried this so I don’t know if this would be foolproof or is the > best way to do it.. > > Berend > As described by Berend (See also section 6.6 of Writing R Extensions) you can define Fortran-callable wrappers around ISNAN and lgammafn, both of which are provided with the C interface to R (lgammafn is declared in Rmath.h). Then declare these wrappers as external subroutines in your Fortran code and call them in place of isnan and lgamma. Martyn __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Replace isnan and lgamma in Fortran subroutine in R package
Try this patch. Martyn On Mon, 2014-09-22 at 22:33 +, Wang, Zhu wrote: > Hello, > > I submitted a package which used Fortran functions isnan and lgamma. However, > I was told that: > > isnan and lgamma are not Fortran 95 functions. > > I was asked to write 'cross-platform portable code' and so should not be > writing GNU extensions to Fortran. > > See http://cran.r-project.org/web/checks/check_results_mpath.html, which will > shortly show installation failures under Solaris. > > I will appreciate advice on how to replace these two functions to avoid > failure on some platforms. > > Thanks in advance, > > Zhu Wang > > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidential. If you are not the intended recipient of this message, please immediately notify the sender and delete it. Since its integrity cannot be guaranteed, its content cannot involve the sender's responsibility. 
Any misuse, any disclosure or publication of its content, either whole or partial, is prohibited, exception made of formally approved use --- diff -uNr mpath/src/cfuns.c mpath-fixed/src/cfuns.c --- mpath/src/cfuns.c 1970-01-01 01:00:00.0 +0100 +++ mpath-fixed/src/cfuns.c 2014-09-23 09:56:24.303347209 +0200 @@ -0,0 +1,5 @@ +#include <R.h> +#include <Rmath.h> + +int F77_SUB(cisnan)(double *x) { return ISNAN(*x); } +double F77_SUB(rlgamma)(double *x) { return lgammafn(*x); } diff -uNr mpath/src/midloop.f mpath-fixed/src/midloop.f --- mpath/src/midloop.f 2014-09-22 19:32:25.0 +0200 +++ mpath-fixed/src/midloop.f 2014-09-23 09:56:55.455497773 +0200 @@ -255,6 +255,8 @@ integer n, family, i double precision dev, y(n), mu(n), theta, weights(n),tmp + integer cisnan + external cisnan dev = 0.0D0 do 10 i=1, n @@ -282,7 +284,7 @@ dev=dev+2*(weights(i)*(y(i)*dlog(max(1.0D0,y(i))/mu(i))- + (y(i)+theta)*dlog((y(i)+theta)/(mu(i) + theta)))) endif - if(isnan(dev)) then + if(cisnan(dev).NE.0) then call intpr("i=", -1, i, 1) call dblepr("y(i)=", -1, y(i), 1) call dblepr("mu(i)=", -1, mu(i), 1) @@ -300,6 +302,8 @@ integer n, family, i, y0 double precision ll, y(n), mu(n), theta, w(n) + double precision rlgamma + external rlgamma ll = 0 do 10 i=1,n @@ -309,8 +313,8 @@ else y0=0 endif - ll=ll + w(i) * (lgamma(theta+y(i))-lgamma(theta)- - + lgamma(y(i)+1)+ + ll=ll + w(i) * (rlgamma(theta+y(i))-rlgamma(theta)- + + rlgamma(y(i)+1)+ + theta*log(theta) + y(i)*log(mu(i)+y0) - (theta + y(i))* + log(theta + mu(i))) else if(family .EQ. 1)then !gaussian @@ -320,7 +324,7 @@ ll=ll + w(i)*(y(i)*log(mu(i)/(1-mu(i)))+log(1-mu(i))) endif else if(family .EQ. 3) then !poisson -ll=ll + w(i)*(-mu(i) + y(i)*log(mu(i))-lgamma(y(i)+1)) +ll=ll + w(i)*(-mu(i) + y(i)*log(mu(i))-rlgamma(y(i)+1)) endif 10 continue return __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] "make check" fails on lapack.R and stats-Ex.R
On Thu, 2014-10-23 at 08:19 +, Pacey, Mike wrote: > As my attachment doesn't seem to have survived transit, I'm cut'n'pasting the > relevant failures here: > > Testing examples for package 'stats' > comparing 'stats-Ex.Rout' to 'stats-Ex.Rout.save' ... > 6466c6466 > < Grand Mean: 291.5937 > --- > > Grand Mean: 291.5938 I see the same thing, but it is not as bad as it looks. The actual value is 291.59375 so a small amount of numerical error can make the rounding to 4 decimal places go either way: > print(fit[[1]]$coefficients, digits=16) (Intercept) 291.593750002 Note that MKL sacrifices reproducibility (and hence precision) for speed. See more details here: https://software.intel.com/en-us/articles/run-reproducibility-with-intel-mkl-and-the-intel-compilers Martyn > 12881c12881 > < Murder -0.536 0.418 0.341 0.649 > --- > > Murder -0.536 0.418 -0.341 0.649 > 12882c12882 > < Assault -0.583 0.188 0.268 -0.743 > --- > > Assault -0.583 0.188 -0.268 -0.743 > 12883c12883 > < UrbanPop -0.278 -0.873 0.378 0.134 > --- > > UrbanPop -0.278 -0.873 -0.378 0.134 > 12884c12884 > < Rape -0.543 -0.167 -0.818 > --- > > Rape -0.543 -0.167 0.818 > 14628c14628 > < Grand Mean: 291.5937 > --- > > Grand Mean: 291.5938 > 15777c15777 > < Murder -0.54 0.42 0.34 0.65 > --- > > Murder -0.54 0.42 -0.34 0.65 > 15778c15778 > < Assault -0.58 0.27 -0.74 > --- > > Assault -0.58 -0.27 -0.74 > 15779c15779 > < UrbanPop -0.28 -0.87 0.38 > --- > > UrbanPop -0.28 -0.87 -0.38 > 15780c15780 > < Rape -0.54 -0.82 > --- > > Rape -0.54 0.82 > > > running code in 'lapack.R' ... 
OK > comparing 'lapack.Rout' to './lapack.Rout.save' ...23,31c23,31 > < [1,] -0.7245 -0.6266 -0.27350 0.08527 -0.02074 -0.004025 > < [2,] -0.4282 0.1299 0.64294 -0.55047 0.27253 0.092816 > < [3,] -0.3122 0.2804 0.33633 0.31418 -0.61632 -0.440904 > < [4,] -0.2479 0.3142 0.06931 0.44667 -0.02945 0.530120 > < [5,] -0.2064 0.3141 -0.10786 0.30242 0.35567 0.237038 > < [6,] -0.1771 0.3027 -0.22106 0.09042 0.38879 -0.260449 > < [7,] -0.1553 0.2877 -0.29281 -0.11551 0.19286 -0.420945 > < [8,] -0.1384 0.2722 -0.33784 -0.29313 -0.11633 -0.160790 > < [9,] -0.1249 0.2571 -0.36543 -0.43885 -0.46497 0.434600 > --- > > [1,] -0.7245 0.6266 0.27350 -0.08527 0.02074 -0.004025 > > [2,] -0.4282 -0.1299 -0.64294 0.55047 -0.27253 0.092816 > > [3,] -0.3122 -0.2804 -0.33633 -0.31418 0.61632 -0.440904 > > [4,] -0.2479 -0.3142 -0.06931 -0.44667 0.02945 0.530120 > > [5,] -0.2064 -0.3141 0.10786 -0.30242 -0.35567 0.237038 > > [6,] -0.1771 -0.3027 0.22106 -0.09042 -0.38879 -0.260449 > > [7,] -0.1553 -0.2877 0.29281 0.11551 -0.19286 -0.420945 > > [8,] -0.1384 -0.2722 0.33784 0.29313 0.11633 -0.160790 > > [9,] -0.1249 -0.2571 0.36543 0.43885 0.46497 0.434600 > 35,40c35,40 > < [1,] -0.7365 -0.6225 -0.2550 0.06976 -0.01328 -0.001588 > < [2,] -0.4433 0.1819 0.6867 -0.50860 0.19627 0.041117 > < [3,] -0.3275 0.3509 0.2611 0.50474 -0.61606 -0.259216 > < [4,] -0.2626 0.3922 -0.1044 0.43748 0.40834 0.638902 > < [5,] -0.2204 0.3946 -0.3510 -0.01612 0.46428 -0.675827 > < [6,] -0.1904 0.3832 -0.5111 -0.53856 -0.44664 0.257249 > --- > > [1,] -0.7365 0.6225 0.2550 -0.06976 0.01328 -0.001588 > > [2,] -0.4433 -0.1819 -0.6867 0.50860 -0.19627 0.041117 > > [3,] -0.3275 -0.3509 -0.2611 -0.50474 0.61606 -0.259216 > > [4,] -0.2626 -0.3922 0.1044 -0.43748 -0.40834 0.638902 > > [5,] -0.2204 -0.3946 0.3510 0.01612 -0.46428 -0.675827 > > [6,] -0.1904 -0.3832 0.5111 0.53856 0.44664 0.257249 > > > -Original Message- > From: r-devel-boun...@r-project.org [mailto:r-devel-boun...@r-project.org] On > Behalf Of Pacey, Mike > 
Sent: 22 October 2014 17:02 > To: r-devel@r-project.org > Subject: [Rd] "make check" fails on lapack.R and stats-Ex.R > > Hi folks, > > I suspect this is a request for a sanity check than a bug report: > > I've been successfully compiling an optimised version of R for several years > using the Intel compiler and MKL. I've just test-run the new Intel 15.0 > compiler suite, and I'm seeing a few numeric failures that I don't see using > the same build method with Intel 13.0. I've attached the output of "make > check". Build details are below. > > The most notable failures are in lapack.R, though I see from the comments in > the output that different lapack and blas libraries may produce different > signs for some outputs which can be safely ignored. My linear algebra's a bit > rusty, so I'd like a sanity check: can all the sign differences be safely > ignored in the attached output? (And a possible RFC: at least for the > purposes of make check, can the scripts output abs() values for all cases > where sign isn't an issue?) > > The other failures are in stats-Ex.R. It looks like most of the problem lines > are outputs from a PCA-like functi
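Martyn's point about the .5 rounding boundary in this thread is easy to demonstrate in R (a sketch; 291.59375 is exactly representable in binary as 291 + 19/32, so a tiny numerical perturbation on either side flips the fourth decimal):

```r
x <- 291.59375            # exact in binary: 291 + 19/32
sprintf("%.4f", x - 1e-6) # "291.5937"
sprintf("%.4f", x + 1e-6) # "291.5938"
```

So a grand mean that differs only in the last significant digit between BLAS implementations is rounding noise, not a computational error.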
Re: [Rd] 'library' or 'require' call not declared from: 'rgl'
On Thu, 2014-10-30 at 17:18 -0400, Michael Friendly wrote: > On 10/30/2014 4:19 PM, Simon Urbanek wrote: > > Did you intend rgl to be optional? If so, then you should use > Suggests: instead. When you use Imports: it will load rgl > automatically so require() doesn't make sense (since it will be always > TRUE). > > > I always had it as Suggests: rgl before. But R-devel now gave me all > those "no visible global function definition for ..." > messages. > > Achim suggested using explicitly rgl:: everywhere. That's quite ugly, > but seems to work. I think you do want "Depends" rather than "Suggests" here. "Suggests" is for when the other package does not need to be loaded for the user to use your package, but the other package might be used in an example or vignette. In your package, the default method for a generic function that your package defines calls functions from rgl. To me that means rgl should be in "Depends", and the required functions from rgl should be imported in the NAMESPACE file. Martyn --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
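The advice above can be sketched as DESCRIPTION and NAMESPACE fragments (the imported function names here are placeholders; import whatever rgl functions the default method actually calls):

```
# DESCRIPTION
Depends: rgl

# NAMESPACE
importFrom(rgl, plot3d, points3d)
```

With rgl in Depends it is attached whenever the package is attached, so the undefined-global notes disappear without sprinkling `rgl::` through the sources.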
Re: [Rd] [R] Calculation of cross-correlation in ccf
The acf and ccf functions assume that time series are stationary, but yours are not. I think that your alternative function is not well founded. You take a separate mean for each sub-series, which implicitly allows the mean of the series to vary arbitrarily with time. However, you only have one instance of each time series so you can't have a non-parametric model for the means. A parametric approach is to remove a linear trend from the series and then apply ccf: e1 <- residuals(lm(y1 ~ x)) e2 <- residuals(lm(y2 ~ x)) ccf(e1, e2) which does identify a lag of -8 as the best match, although the correlation is somewhat lower than what you found (0.87). Martyn From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on behalf of Sami Toppinen [sami.toppi...@kolumbus.fi] Sent: 04 November 2014 09:44 To: r-devel@r-project.org Subject: [Rd] [R] Calculation of cross-correlation in ccf Dear All, I am studying some process measurement time series in R and trying to identify time delays using cross-correlation function ccf. The results have however been a bit confusing. I found a couple of years old message about this issue but unfortunately wasn't able to find it again for a reference.
For example, an obvious time shift is observed between the measurements y1 and y2 when the following test data is plotted: x <- 1:121 y1 <- c(328.27, 328.27, 328.27, 328.27, 328.21, 328.14, 328.14, 328.01, 328.07, 328.01, 327.87, 328.01, 328.07, 328.27, 328.27, 328.54, 328.61, 328.74, 328.88, 329.01, 329.01, 329.21, 329.28, 329.35, 329.35, 329.42, 329.35, 329.28, 329.28, 329.15, 329.08, 329.08, 328.95, 328.95, 328.95, 328.95, 329.01, 329.15, 329.21, 329.28, 329.55, 329.62, 329.75, 329.82, 329.89, 330.09, 330.09, 330.29, 330.29, 330.36, 330.42, 330.29, 330.15, 330.22, 330.09, 329.95, 329.82, 329.75, 329.62, 329.55, 329.48, 329.55, 329.68, 329.75, 329.82, 329.89, 330.09, 330.09, 330.15, 330.22, 330.42, 330.42, 330.42, 330.36, 330.42, 330.22, 330.15, 330.09, 329.89, 329.75, 329.55, 329.35, 329.35, 329.42, 329.48, 329.55, 329.75, 329.75, 329.82, 330.09, 330.15, 330.42, 330.42, 330.62, 330.69, 330.69, 330.83, 330.83, 330.76, 330.62, 330.62, 330.56, 330.42, 330.42, 330.29, 330.29, 330.29, 330.29, 330.22, 330.49, 330.56, 330.62, 330.76, 331.03, 330.96, 331.16, 331.23, 331.50, 331.63, 332.03, 332.03) y2 <- c(329.68, 329.75, 329.82, 329.95, 330.02, 330.15, 330.22, 330.36, 330.22, 330.29, 330.29, 330.29, 330.29, 330.15, 330.22, 330.22, 330.15, 330.15, 330.15, 330.15, 330.15, 330.29, 330.49, 330.49, 330.62, 330.89, 331.03, 331.09, 331.16, 331.30, 331.30, 331.36, 331.43, 331.43, 331.43, 331.36, 331.36, 331.36, 331.36, 331.23, 331.23, 331.16, 331.16, 331.23, 331.30, 331.23, 331.36, 331.56, 331.70, 331.83, 331.97, 331.97, 332.10, 332.30, 332.44, 332.44, 332.51, 332.51, 332.57, 332.57, 332.51, 332.37, 332.24, 332.24, 332.10, 331.97, 331.97, 331.90, 331.83, 331.97, 331.97, 331.97, 332.03, 332.24, 332.30, 332.30, 332.37, 332.57, 332.57, 332.57, 332.57, 332.57, 332.51, 332.37, 332.30, 332.17, 331.97, 331.83, 331.70, 331.70, 331.63, 331.63, 331.70, 331.83, 331.90, 332.10, 332.24, 332.30, 332.44, 332.57, 332.71, 332.84, 332.77, 332.91, 332.84, 332.84, 332.91, 332.84, 332.77, 332.77, 
332.64, 332.64, 332.57, 332.57, 332.57, 332.57, 332.57, 332.71, 332.98, 333.24, 333.58) matplot(cbind(y1, y2)) However, the cross-correlation function surprisingly gives the maximum correlation 0.83 at zero lag: ccf(y1, y2) Plotting of variables against each other with zero lag plot(y1, y2) shows that the correlation is not that good. Instead, a very nice correlation is obtained with a plot with shifted variables: plot(y1[1:113], y2[1:113 + 8]) As a comparison I defined my own cross correlation function: cross.cor <- function(x, y, k) { n <- length(x) - abs(k) if (k >= 0) cor(x[1:n + k], y[1:n]) else cor(x[1:n], y[1:n - k]) } The new function cross.cor gives maximum correlation of 0.99 at lag -8, and the shape of the correlation function is very different from the one obtained with ccf (both methods give same value at zero lag): plot(-15:15, sapply(-15:15, function(lag) cross.cor(y1, y2, lag)), ylim = c(0.3, 1)) points(-15:15, ccf(y1, y2, 15, plot = FALSE)$acf, col = "red") In order to understand the reason for the differences, I looked at the program codes of ccf function. When the variables are compared with a nonzero lag, some the data points must be left out from the tails of the time series. It appears to me that ccf calculates (partly in R code, partly in C code) first the means and the standard deviations using the whole data, and then uses those values as constants in further calculations. The time series are truncated only in the summations for covariances. That approach naturally speeds up the computations, but is it really correct? Is the approach used in ccf somehow statistically more correct? I suppose th
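One readability note on the cross.cor function quoted above: the subscript `y[1:n - k]` relies on `:` binding more tightly than binary `-`, so it parses as `y[(1:n) - k]` — what the author intended, but easy to misread. The same function with explicit parentheses:

```r
cross.cor <- function(x, y, k) {
  # correlation between x and y, with x shifted by k; |k| points are
  # dropped from the overlapping ends of the two series
  n <- length(x) - abs(k)
  if (k >= 0) cor(x[(1:n) + k], y[1:n]) else cor(x[1:n], y[(1:n) - k])
}
```

Unlike ccf, this re-centres and re-scales each truncated sub-series via cor(), which is why the two estimators can disagree so sharply on non-stationary data.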
Re: [Rd] Cursor not behaving properly
I can't reproduce this on Fedora 20, so I think it is an Ubuntu bug. If anyone not on Ubuntu can reproduce this then please add a comment in the bug repository. https://bugs.r-project.org/bugzilla/show_bug.cgi?id=16077 If not then I'll close it. Martyn On Thu, 2014-11-20 at 10:27 +0100, Larissa Hauer wrote: > Hi, > > regarding earlier versions: I can reproduce this bug with R 3.1.1 on > Ubuntu 14.04, Gnome Terminal. > > I could imagine this is an Ubuntu problem since I've had "ghost mice" > and strange cursor behavior since I switched to 14.04. > > Larissa > > PS: If any Ubuntu user is interested, I was able to fix the problems > (except for the R bug of course) by turning off the second screen as > described here: > > http://askubuntu.com/questions/365798/mouse-arrow-flash-and-ghost-in-ubuntu-13-10 > > On 19.11.2014 23:20, Henrik Bengtsson wrote: > > FYI, it might be useful to check if the bug also appears on R-devel as > > well as on earlier versions of R. That might narrow down whether it > > was introduced in a particular R version or not, which in turn would > > be useful to whoever might try to tackle this problem. It might not > > even be an R problem in the end. > > > > /Henrik > > > > On Wed, Nov 19, 2014 at 1:14 PM, Scott Kostyshak > > wrote: > >> On Tue, Nov 18, 2014 at 9:50 PM, Scott Kostyshak > >> wrote: > >>> On Mon, Nov 10, 2014 at 10:52 AM, Kaiyin Zhong (Victor Chung) > >>> wrote: > I found a strange bug in R recently (version 3.1.2): > > As you can see from the screenshots attached, when the cursor passes the > right edge of the console, instead of start on a new line, it goes back > to > the beginning of the same line, and overwrites everything after it. > > This happens every time the size of the terminal is changed, for example, > if you fit the terminal to the right half of the screen, start an R > session, exec some commands, maximize the terminal, and type a long > command > into the session, then you will find the bug reproduced. 
> > I am on Ubuntu 14.04, and I have tested this in konsole, guake and > gnome-terminal. > >>> > >>> I can reproduce this, also on Ubuntu 14.04, with gnome-terminal and > >>> xterm. If you don't get any response here, please file a bug report at > >>> bugs.r-project.org. > >> > >> For archival purposes, the OP reported the bug here: > >> https://bugs.r-project.org/bugzilla/show_bug.cgi?id=16077 > >> > >> Scott > >> > >> > >> -- > >> Scott Kostyshak > >> Economics PhD Candidate > >> Princeton University > >> > >> __ > >> R-devel@r-project.org mailing list > >> https://stat.ethz.ch/mailman/listinfo/r-devel > > > > __ > > R-devel@r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-devel > > > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Fwd: No source view when using gdb
On Thu, 2014-12-11 at 14:00 +0100, Pierrick Bruneau wrote: > Dear R contributors, > > Say I want to debug some C code invoked through .Call() - say > "varbayes" in the VBmix package. following the instructions in > "Writing R Extensions", I perform the following actions : > > R -d gdb > run > library(VBmix) > CTRL+C > break varbayes > signal 0 > mod <- varbayes(as.matrix(iris)[,1:4], 2) > > The breakpoint is indeed activated, seemingly at the correct position > in the source file, but instead of the actual text at the respective > line, I get the following : > > 69 varbayes.c: No such file or directory. > > Issuing "next" afterwards seems to attain the expected purpose (step > by step progression), but source code lines are replaced by, e.g. : > > 72 in varbayes.c > > There should be some way of "installing" the source code files, but I > did not find R-specific info there. Does someone have a clue for my > problem? This happens when you install a package from the tarball. The source is unpacked into a temporary directory which is then deleted. Try unpacking the source tarball yourself, and then installing from the unpacked directory, e.g. tar xfvz VBmix_0.2.17.tar.gz R CMD INSTALL VBmix Martyn > Thanks by advance, > Pierrick > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Request for help with UBSAN and total absense of CRAN response
On Tue, 2015-01-13 at 10:34 -0600, Dirk Eddelbuettel wrote: > On 13 January 2015 at 08:21, Dan Tenenbaum wrote: > | Where should the package source be downloaded from? I see it in CRAN (but > presumably the latest version that causes the issue is not yet downloadable) > and in github. > > The "presumable" assumption is incorrect AFAIK. > > The error should presumably came up in both versions as annoylib.h did not > change there. Feel free to prove me wrong :) and just use whatever is > easiest. > > Dirk This is a curious case. Here is where the first error occurs: Executing test function test01getNNsByVector ... Breakpoint 1, 0x009c0440 in __ubsan_handle_out_of_bounds () (gdb) frame 1 #1 0x7fffe777935b in AnnoyIndex >::_get_all_nns (this=0x3a7e8f0, v=0x37d95d0, n=3, result=0x7ffee1e8) at ../inst/include/annoylib.h:532 532 nns.insert(nns.end(), nd->children, &(nd->children[nd->n_descendants])); (gdb) p nd->children $48 = {0, 1} (gdb) p nd->n_descendants $49 = 3 (gdb) p nns $50 = std::vector of length 0, capacity 0 So we are trying to insert 3 values from an array of length 2 into an STL vector. Comments in the header file annoylib.h (lines 114-130) show that this is a result of a "memory optimization". Small objects have a completely different format but are allocated in the same memory. When the optimization is used the array is deliberately allowed to overflow: S children[2]; // Will possibly store more than 2 T v[1]; // We let this one overflow intentionally A workaround is to turn off the optimization by reducing the threshold for the alternate data format (_K) to such a low level that it is never used (annoylib.h, line 259): //_K = (sizeof(T) * f + sizeof(S) * 2) / sizeof(S); _K = 2; //Turn off memory optimization I think this is a case of "being too clever by half". Martyn --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R CMD check: Uses the superseded package: ‘doSNOW’
The CRAN package snow is superseded by the parallel package which is distributed with R since version 2.14.0. Here are the release notes \item There is a new package \pkg{parallel}. It incorporates (slightly revised) copies of packages \CRANpkg{multicore} and \CRANpkg{snow} (excluding MPI, PVM and NWS clusters). Code written to use the higher-level API functions in those packages should work unchanged (apart from changing any references to their namespaces to a reference to \pkg{parallel}, and links explicitly to \CRANpkg{multicore} or \CRANpkg{snow} on help pages). So you should replace your dependency on doSNOW with doParallel, which is the equivalent foreach adapter for the parallel package. Martyn On Mon, 2015-02-09 at 23:08 +0100, Xavier Robin wrote: > Dear list, > > When I run an R CMD check --as-cran on my package (pROC) I get the > following note: > > Uses the superseded package: ‘doSNOW’ > The fact that it uses the doSNOW package is correct as I have the > following example in an .Rd file: > > #ifdef windows > > if (require(doSNOW)) { > > registerDoSNOW(cl <- makeCluster(2, type = "SOCK")) > > ci(roc2, method="bootstrap", parallel=TRUE) > > \dontrun{ci(roc2, method="bootstrap", parallel=TRUE)} > > \dontshow{ci(roc2, method="bootstrap", parallel=TRUE, boot.n=20)} > > stopCluster(cl) > > } > > #endif > > #ifdef unix > > if (require(doMC)) { > > registerDoMC(2) > > \dontrun{ci(roc2, method="bootstrap", parallel=TRUE)} > > \dontshow{ci(roc2, method="bootstrap", parallel=TRUE, boot.n=20)} > > } > > #endif > > The "superseded" part is more confusing to me, though. The doSNOW > package seems to be still available on CRAN with no special notice, > listed in the HighPerformanceComputing view likewise, and under active > development (last change a couple of days ago on R-Forge). I could find > no mention of what it has been superseded with. Surprisingly, Google was > no help on this. > > I could see the note is triggered in QC.R file of the tools package. 
> However this finding is not much help and leaves me just as confused as > before. > > I recall spending quite some time to setup this example to run both > under Windows and Unix. doSNOW was the only way I could get it to work > there. doMC is apparently still available for Unix only. I couldn't get > doRNG to work on either platforms. > > So what is R CMD check noticing me about? > Should I ignore the notice, or take an action? If so, which one? > > Best wishes, > Xavier > __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
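[Editor's note: an untested sketch of how the .Rd example might look after the doSNOW-to-doParallel switch Martyn suggests. ci() and roc2 come from the poster's pROC example; a PSOCK cluster works on both Windows and Unix, so the #ifdef windows / #ifdef unix split may no longer be needed.]

```r
## Sketch only: doParallel wraps the 'parallel' package shipped
## with R >= 2.14.0, so one code path can serve both platforms.
if (require(doParallel)) {
  cl <- makeCluster(2)        # PSOCK cluster: portable across platforms
  registerDoParallel(cl)
  ci(roc2, method = "bootstrap", parallel = TRUE)
  stopCluster(cl)
}
```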
Re: [Rd] Using Fortran 90 code in packages for CRAN
Everything you need to know is in the Writing R Extensions manual, and section 1.2.3 in particular. There are restrictions on Fortran 90/95 use due to portability issues. Make sure you are following all of the advice in the manual, e.g.: - Files containing Fortran 90 code should have extension .f90 - Mixed Fortran 9x and C++ code is not supported and there is no guarantee that Fortran 9x can be mixed with other languages. - Free source form Fortran 9x is not portable. - When using modules, you may need to give compile-order hints to parallel make. - Do not include module files in the source - they are compiler-dependent. Martyn On Tue, 2015-03-17 at 11:26 +0100, Lukas Lehnert wrote: > I recently submitted a package to CRAN which encompasses Fortran 90 code. > Neither on my linux system nor on the win-builder system the compilation > reported any error or warning. The function worked fine. However, after > submission of the package to CRAN, I received an email that I should not > include Fortran 90 code. > > The problem is that the part written in Fortran is a large function using > modules. Thus, rewriting the function in FORTRAN 77 is impossible. So, I > searched on CRAN and found some packages which contain Fortran 90 code. Is it > generally possible to submit R packages to CRAN containing Fortran 90 source > code? If so, what specific things should I consider? > > Thank you for your help > > Lukas Lehnert > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
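[Editor's note: for the compile-order point above, a hint in a package's src/Makevars might look like the fragment below (file names hypothetical). It tells a parallel make that the module file must exist before the file that does `use mymod` is compiled.]

```make
# src/Makevars (sketch): mysub.f90 does `use mymod`; the module file is
# produced when mymod.f90 is compiled, so state that order explicitly.
mysub.o: mymod.o
```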
Re: [Rd] [Rcpp-devel] Windows gcc toolchain for R 3.2.0
On Wed, 2015-03-18 at 07:55 -0700, Dan Tenenbaum wrote: > Is it not considered a "known problem" that C++ libraries linked > against by R packages need to be rebuilt with g++ 4.9.2 in order for > the R packages to install/load? This could well be due to incompatible thread models (win32 vs posix). See the thread "V8 crashes..." on the Rcpp-devel mailing list. We have not yet had a chance to test the gcc 4.9.2 toolchain built with win32 threads. Martyn > Thanks again, > Dan __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] vignette checking woes
On Wed, 2015-03-25 at 15:12 -0500, Roger Koenker wrote: > Thierry, > > I have this: > > if (require(MatrixModels) && require(Matrix)) { > X <- model.Matrix(Terms, m, contrasts, sparse = TRUE) You have this in the current release, which does not show this problem in the CRAN tests. This, and the fact that you can build the vignette manually, suggests that there is a problem with your checking environment. Did you set up a special checking environment in ~/.R/check.Renviron ? Does it set R_LIBS? Martyn > in my function rqss() I've tried variants of requireNamespace too without > success. > If I understand properly model.Matrix is from MatrixModels but it calls > sparse.model.matrix which is part of Matrix, and it is the latter function > that I'm > not finding. Maybe I should go back to the requireNamespace strategy again? > > Roger > > url:www.econ.uiuc.edu/~rogerRoger Koenker > emailrkoen...@uiuc.eduDepartment of Economics > vox: 217-333-4558University of Illinois > fax: 217-244-6678Urbana, IL 61801 > > > On Mar 25, 2015, at 2:54 PM, Thierry Onkelinx > > wrote: > > > > Dear Roger, > > > > How is Matrix loaded? > > > > If you use sparse.model.matrix() inside a function from your package you > > need to declare it as Matrix::sparse.model.matrix() > > > > Best regards, > > > > ir. Thierry Onkelinx > > Instituut voor natuur- en bosonderzoek / Research Institute for Nature and > > Forest > > team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance > > Kliniekstraat 25 > > 1070 Anderlecht > > Belgium > > > > To call in the statistician after the experiment is done may be no more > > than asking him to perform a post-mortem examination: he may be able to say > > what the experiment died of. ~ Sir Ronald Aylmer Fisher > > The plural of anecdote is not data. ~ Roger Brinner > > The combination of some data and an aching desire for an answer does not > > ensure that a reasonable answer can be extracted from a given body of data. 
> > ~ John Tukey > > > > 2015-03-25 19:59 GMT+01:00 Roger Koenker : > > I'm having trouble with R CMD check of my quantreg package. All is well > > until I get to: > > > > checking running R code from vignettes ... > > ‘rq.Rnw’ ... failed > > ERROR > > Errors in running code in vignettes: > > when running code in ‘rq.Rnw’ > > > > when I see a snippet from the vignette code and then: > > > > Loading required namespace: MatrixModels > > > > When sourcing ‘rq.R’: > > Error: could not find function "sparse.model.matrix" > > Execution halted > > > > This is baffling to me since sparse.model.matrix is in the > > namespace of Matrix and it should be loaded at this stage > > since it is required by MatrixModels which has just been > > pronounced "loaded". > > > > I've verified that I can Sweave("rq.Rnw") > > and texi2pdf("rq.tex", clean=TRUE) without any problem. > > > > Any hints greatly appreciated, as always. > > > > Roger > > > > > > url:www.econ.uiuc.edu/~rogerRoger Koenker > > emailrkoen...@uiuc.eduDepartment of Economics > > vox: 217-333-4558University of Illinois > > fax: 217-244-6678Urbana, IL 61801 > > > > __ > > R-devel@r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-devel > > > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] alternate licensing for package data?
I think this is covered well by the CRAN repository policy: http://cran.r-project.org/web/packages/policies.html The two key license requirements are that: 1) CRAN must have a perpetual license to distribute the package 2) The package license should be listed here: https://svn.r-project.org/R/trunk/share/licenses/license.db Packages with licenses not included in that list are generally not accepted. However, there are exceptions, and you can find some by searching for "non-commercial site:cran.r-project.org" on Google. See also section 1.1.2 of the Writing R Extensions manual for an explanation. Personally, I would not want to add the extra complexity to a package that is otherwise GPL. Martyn On Tue, 2015-04-21 at 19:23 -0400, Ben Bolker wrote: > Does anyone have speculations about the implications of the GPL for > data included in a package, or more generally for restricting use of data? > > The specific use case is that I have a package which is otherwise > GPL (version unspecified at present). There are various data sets > included, but they are all essentially in the public domain. I'm > thinking about including another data set, but the original author of > that data might like to impose some reasonable restrictions (e.g. > please don't use in an academic publication without explicit > permission ...) Would such rules be expected to be compatible with > CRAN rules? Will having the package be "GPL except for file XXX, see > LICENSE" mess things up horribly? > > I can of course make the data available for download and include a > link, and/or make a special package that contains only these data, but > it would seem to be more convenient for end users, and more > future-proof, to put everything in one place. > > I know I will eventually need to take this up with CRAN, but I'm > looking for reasonably informed opinions/suggestions ... 
> > cheers > Ben Bolker > > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] shlib problems with Intel compiler
On Tue, 2015-04-21 at 11:46 -0600, Andy Jacobson (NOAA Affiliate) wrote: > Hi, > > I'm encountering trouble compiling caTools_1.17.1.tar.gz and > e1071_1.6-4.tar.gz on a Linux system using the Intel compiler suite. > 14 other packages I generally use installed without any trouble. I > notice both of these trouble packages have a C++ component, so I > wonder if that might be the issue. Below, there's information on my > platform, compiler, and some diagnostic output showing the errors. > > Advice appreciated! > > Thanks, > > Andy There are two things missing when R tries to create the shared object file on this line: icpc -L/usr/local/lib64 -o e1071.so Rsvm.o cmeans.o cshell.o floyd.o svm.o Firstly, the compiler flag "-shared" is missing. It tells the compiler to build a shared object instead of an executable. Secondly the linker flag "-lR" is missing, along with the "-L" flag that tells the linker where to find the shared R library. To find out what went wrong, you should share the configuration you used when building R. Martyn > > > Intel compiler suite: icc (ICC) 14.0.2 20140120 > > sessionInfo() reports: > > R version 3.1.3 (2015-03-09) > Platform: x86_64-unknown-linux-gnu (64-bit) > Running under: Red Hat Enterprise Linux Server release 6.5 (Santiago) > > locale: > [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C > [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8 > [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8 > [7] LC_PAPER=en_US.UTF-8 LC_NAME=C > [9] LC_ADDRESS=C LC_TELEPHONE=C > [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C > > attached base packages: > [1] stats graphics grDevices utils datasets methods base > > loaded via a namespace (and not attached): > [1] tools_3.1.3 > > > Abbreviated versions of the output from R CMD INSTALL: > > For caTools: > > * installing to library ‘/scratch3/BMC/co2/lib/R-3.1/x86_64-unknown-linux-gnu’ > * installing *source* package ‘caTools’ ... 
> ** package ‘caTools’ successfully unpacked and MD5 sums checked > ** libs > icpc -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG -I/usr/local/include > -fpic -g -O3 -fp-model precise -c Gif2R.cpp -o Gif2R.o > icpc -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG -I/usr/local/include > -fpic -g -O3 -fp-model precise -c GifTools.cpp -o GifTools.o > icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG > -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model precise -c > runfunc.c -o runfunc.o > icpc -L/usr/local/lib64 -o caTools.so Gif2R.o GifTools.o runfunc.o > /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In function > `_start': > (.text+0x20): undefined reference to `main' > Gif2R.o: In function `imwritegif': > /tmp/RtmpEfnBsm/R.INSTALL17c3a7327d4fb/caTools/src/Gif2R.cpp:19: undefined > reference to `R_chk_calloc' > /tmp/RtmpEfnBsm/R.INSTALL17c3a7327d4fb/caTools/src/Gif2R.cpp:23: undefined > reference to `R_chk_free' > ... (many more R_* undefined references) > > For e1071: > > * installing to library ‘/scratch3/BMC/co2/lib/R-3.1/x86_64-unknown-linux-gnu’ > * installing *source* package ‘e1071’ ... > ** package ‘e1071’ successfully unpacked and MD5 sums checked > checking for C++ compiler default output file name... a.out > checking whether the C++ compiler works... yes > checking whether we are cross compiling... no > checking for suffix of executables... > checking for suffix of object files... o > checking whether we are using the GNU C++ compiler... yes > checking whether icpc accepts -g... 
yes > ** libs > icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG > -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model precise -c Rsvm.c > -o Rsvm.o > icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG > -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model precise -c > cmeans.c -o cmeans.o > icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG > -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model precise -c > cshell.c -o cshell.o > icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG > -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model precise -c > floyd.c -o floyd.o > icpc -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG -I/usr/local/include > -fpic -g -O3 -fp-model precise -c svm.cpp -o svm.o > icpc -L/usr/local/lib64 -o e1071.so Rsvm.o cmeans.o cshell.o floyd.o svm.o > /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/crt1.o: In function > `_start': > (.text+0x20): undefined reference to `main' > Rsvm.o: In function `do_cross_validation': > /tmp/Rtmp9h7iYE/R.INSTALL1d9615a42180e/e1071/src/Rsvm.c:91: undefined > reference to `GetRNGstate' > /tmp/Rtmp9h7iYE/R.INSTALL1d9615a42180e/e1071/src/Rsvm.c:94: undefined > reference to `unif_rand' > /tmp/Rtmp9h7iYE/R.INSTALL1d9615a42180e/e1071/src/Rsvm.c:106: undefined > reference to `PutRNGstate' > ... (man
Re: [Rd] shlib problems with Intel compiler
I was assuming that R was configured with --enable-R-shlib but if that's not the case then you don't need it. Martyn > On 22 Apr 2015, at 18:40, Andy Jacobson (NOAA Affiliate) > wrote: > > Hi Martyn, > > Thanks for your insight, that seems pretty direct. Unfortunately, I did not > compile this version of R (it's on a large supercomputer system and this > version of R was installed by the admins). Using "R CMD config", I see the > following relevant settings: > > DYLIB_LD = icc -std=gnu99 > DYLIB_LDFLAGS = -shared -openmp > LDFLAGS = -L/opt/compilers/intel/cce/9.1.039/lib > -L/opt/compilers/intel/fce/9.1.033/lib -L/usr/local/lib64 > SHLIB_CXXLD = icpc > SHLIB_CXXLDFLAGS = > SHLIB_LD = icc -std=gnu99 > SHLIB_LDFLAGS = -shared > > It looks like the SHLIB_CXXLDFLAGS is missing the "-shared -lR > -L". It's a mystery to me how R was built and configured such > that it has incomplete/incorrect flags. > > By trial and error I figured out how to use a .R/Makevars setting to add the > required flags to SHLIB_CXXLDFLAGS. (It sure would have been useful to have > a reference about the syntax and variable names that the Makevars file can > contain...is that documented somewhere?) > > I wonder if the recommendation for "-lR" is correct. None of the other > packages are compiled with that flag, and everything seems to compile and > load OK in R without using that. > > Best Regards, > > Andy > >> On Apr 22, 2015, at 9:30 AM, Martyn Plummer wrote: >> >> On Tue, 2015-04-21 at 11:46 -0600, Andy Jacobson (NOAA Affiliate) wrote: >>> Hi, >>> >>> I'm encountering trouble compiling caTools_1.17.1.tar.gz and >>> e1071_1.6-4.tar.gz on a Linux system using the Intel compiler suite. >>> 14 other packages I generally use installed without any trouble. I >>> notice both of these trouble packages have a C++ component, so I >>> wonder if that might be the issue. Below, there's information on my >>> platform, compiler, and some diagnostic output showing the errors. >>> >>> Advice appreciated! 
>>> >>> Thanks, >>> >>> Andy >> >> There are two things missing when R tries to create the shared object >> file on this line: >> >> icpc -L/usr/local/lib64 -o e1071.so Rsvm.o cmeans.o cshell.o floyd.o >> svm.o >> >> Firstly, the compiler flag "-shared" is missing. It tells the compiler >> to build a shared object instead of an executable. Secondly the linker >> flag "-lR" is missing, along with the "-L" flag that tells the linker >> where to find the shared R library. >> >> To find out what went wrong, you should share the configuration you used >> when building R. >> >> Martyn >> >>> >>> >>> Intel compiler suite: icc (ICC) 14.0.2 20140120 >>> >>> sessionInfo() reports: >>> >>> R version 3.1.3 (2015-03-09) >>> Platform: x86_64-unknown-linux-gnu (64-bit) >>> Running under: Red Hat Enterprise Linux Server release 6.5 (Santiago) >>> >>> locale: >>> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C >>> [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8 >>> [5] LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8 >>> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C >>> [9] LC_ADDRESS=C LC_TELEPHONE=C >>> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C >>> >>> attached base packages: >>> [1] stats graphics grDevices utils datasets methods base >>> >>> loaded via a namespace (and not attached): >>> [1] tools_3.1.3 >>> >>> >>> Abbreviated versions of the output from R CMD INSTALL: >>> >>> For caTools: >>> >>> * installing to library >>> ‘/scratch3/BMC/co2/lib/R-3.1/x86_64-unknown-linux-gnu’ >>> * installing *source* package ‘caTools’ ... 
>>> ** package ‘caTools’ successfully unpacked and MD5 sums checked >>> ** libs >>> icpc -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG -I/usr/local/include >>> -fpic -g -O3 -fp-model precise -c Gif2R.cpp -o Gif2R.o >>> icpc -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG -I/usr/local/include >>> -fpic -g -O3 -fp-model precise -c GifTools.cpp -o GifTools.o >>> icc -std=gnu99 -I/apps/R/3.1.3-intel/lib64/R/include -DNDEBUG >>> -I/usr/local/include-fpic -g -O3 -wd188 -ip -fp-model
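[Editor's note: the personal Makevars workaround Andy describes might look like the fragment below. The library path and the need for -lR are assumptions: -L/-lR only apply when R itself was configured with --enable-R-shlib, as Martyn notes above.]

```make
# ~/.R/Makevars (sketch): restore the shared-object link flags the
# site build of R left out of SHLIB_CXXLDFLAGS.
SHLIB_CXXLDFLAGS = -shared
# Only if R was built with --enable-R-shlib (path is hypothetical):
# SHLIB_CXXLDFLAGS = -shared -L/apps/R/3.1.3-intel/lib64/R/lib -lR
```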
Re: [Rd] example fails during R CMD CHECK but works interactively?
The error can be reproduced by running the bigmemory-Ex.R script which you can find in the bigmemory.Rcheck directory, either in batch mode or via source() in an interactive session. It seems that you have underlying memory allocation problems. I can get the script to run by adding gc() calls when necessary (i.e. when a failure is reported, add gc() just before this point in the script and rerun). Martyn On Fri, 2015-05-15 at 07:05 -0500, Charles Determan wrote: > Does anyone else have any thoughts about troubleshooting the R CMD check > environment? > > Charles > > On Wed, May 13, 2015 at 1:57 PM, Charles Determan > wrote: > > > Thank you Dan but it isn't my tests that are failing (all of them pass > > without problem) but one of the examples from the inst/examples directory. > > I did try, however, to start R with the environmental variables as you > > suggest but it had no effect on my tests. > > > > Charles > > > > On Wed, May 13, 2015 at 1:51 PM, Dan Tenenbaum > > wrote: > > > >> > >> > >> - Original Message - > >> > From: "Charles Determan" > >> > To: r-devel@r-project.org > >> > Sent: Wednesday, May 13, 2015 11:31:36 AM > >> > Subject: [Rd] example fails during R CMD CHECK but works interactively? > >> > > >> > Greetings, > >> > > >> > I am collaborating with developing the bigmemory package and have run > >> > in to > >> > a strange problem when we run R CMD CHECK. For some reason that > >> > isn't > >> > clear to us one of the examples crashes stating: > >> > > >> > Error: memory could not be allocated for instance of type big.matrix > >> > > >> > You can see the output on the Travis CI page at > >> > https://travis-ci.org/kaneplusplus/bigmemory where the error starts > >> > at line > >> > 1035. This is completely reproducible when running > >> > devtools::check(args='--as-cran') locally. The part that is > >> > confusing is > >> > that the calls work perfectly when called interactively. 
> >> > > >> > Hadley comments on the 'check' page of his R packages website ( > >> > http://r-pkgs.had.co.nz/check.html) regarding test failing following > >> > R CMD > >> > check: > >> > > >> > Occasionally you may have a problem where the tests pass when run > >> > interactively with devtools::test(), but fail when in R CMD check. > >> > This > >> > usually indicates that you’ve made a faulty assumption about the > >> > testing > >> > environment, and it’s often hard to figure it out. > >> > > >> > Any thoughts on how to troubleshoot this problem? I have no idea > >> > what > >> > assumption we could have made. > >> > >> Note that R CMD check runs R with environment variables set as follows > >> (at least on my system; you can check $R_HOME/bin/check to see what it does > >> on yours): > >> > >> R_DEFAULT_PACKAGES= LC_COLLATE=C > >> > >> So try staring R like this: > >> > >> R_DEFAULT_PACKAGES= LC_COLLATE=C R > >> > >> And see if that reproduces the test failure. The locale setting could > >> affect tests of sort order, and the default package setting could > >> potentially affect other things. > >> > >> Dan > >> > >> > >> > >> > > >> > Regards, > >> > Charles > >> > > >> > [[alternative HTML version deleted]] > >> > > >> > __ > >> > R-devel@r-project.org mailing list > >> > https://stat.ethz.ch/mailman/listinfo/r-devel > >> > > >> > > > > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Compiling R on Linux with SunStudio 12.1: "wide-character type" problems
Russ, This is a known issue with Sun Studio on Linux and was fixed by Brian Ripley in January. If you download R-patched.tar.gz from here: ftp://ftp.stat.math.ethz.ch/Software/R/ then it should work for you. Martyn On Mon, 2010-02-22 at 13:11 -0600, rt wrote: > I am trying to compile R on Linux using SunStudio. Configure flags are > mostly as suggested in the R install guide. > > CC=/opt/sun/sunstudio12.1/bin/suncc > CFLAGS="-g -xc99 -xlibmil -xlibmieee" > MAIN_CFLAGS=-g > SHLIB_CFLAGS=-g > CPPFLAGS="-I. -I/opt/sun/sunstudio12.1/prod/include > -I/opt/sun/sunstudio12.1/prod/include/cc" > CPPFLAGS+="-I/opt/sun/sunstudio12.1/prod/include/cc/sys > -I/usr/local/include" > F77=/opt/sun/sunstudio12.1/bin/sunf95 > FFLAGS="-g -O -libmil " > SAFE_FFLAGS="-g -libmil" > CPICFLAGS=-Kpic > FPICFLAGS=-Kpic > SHLIB_LDFLAGS=-shared > LDFLAGS=-L/opt/sun/sunstudio12.1/lib/386 > CXX=/opt/sun/sunstudio12.1/bin/sunCC > CXXFLAGS="-g -xlibmil -xlibmieee" > CXXPICFLAGS=-Kpic > SHLIB_CXXLDFLAGS="-G -lCstd" > FC=/opt/sun/sunstudio12.1/bin/sunf95 > FCFLAGS=$FFLAGS > FCPICFLAGS=-Kpic > MAKE=dmake > > R install guide also indicates that: "The OS needs to have enough support > for wide-character types: this is checked at configuration. Specifically, > the C99 functionality of headers wchar.h and wctype.h, types wctans_t and > mbstate_t and functions mbrtowc, mbstowcs, wcrtomb, wcscoll, wcstombs, > wctrans, wctype, and iswctype." > Configure stops with the following error message: > > checking iconv.h usability... yes > checking iconv.h presence... yes > checking for iconv.h... yes > checking for iconv... in libiconv > checking whether iconv accepts "UTF-8", "latin1" and "UCS-"... yes > checking for iconvlist... yes > checking wchar.h usability... yes > checking wchar.h presence... yes > checking for wchar.h... yes > checking wctype.h usability... yes > checking wctype.h presence... yes > checking for wctype.h... yes > checking whether mbrtowc exists and is declared... 
yes > checking whether wcrtomb exists and is declared... yes > checking whether wcscoll exists and is declared... yes > checking whether wcsftime exists and is declared... yes > checking whether wcstod exists and is declared... yes > checking whether mbstowcs exists and is declared... yes > checking whether wcstombs exists and is declared... yes > **checking whether wctrans exists and is declared... no > checking whether iswblank exists and is declared... no > checking whether wctype exists and is declared... no > checking whether iswctype exists and is declared... no > configure: error: Support for MBCS locales is required.* > > Relevant parts of config.log are as follows: > > configure:39472: checking whether iswctype exists and is declared > configure:39510: /opt/sun/sunstudio12.1/bin/suncc -o conftest -g -xc99 > -xlibmil -xlibmieee -m32 -I. -I/opt/sun/sunstudio12.1/prod/include > -I/opt/sun/sunstudio12.1/prod/include/cc-I/opt/sun/sunstudio12.1/prod/include/cc/sys > -I/usr/local/include -L/opt/sun/sunstudio12.1/lib/386 -L/usr/local/lib > conftest.c -ldl -lm -liconv >&5 > *"/usr/include/wctype.h", line 112: syntax error before or at: __wc > "/usr/include/wctype.h", line 195: syntax error before or at: towlower > "/usr/include/wctype.h", line 302: syntax error before or at: towupper_l > "/usr/include/wctype.h", line 302: syntax error before or at: __wc > "/usr/include/wctype.h", line 310: syntax error before or at: towctrans_l > "/usr/include/wctype.h", line 310: syntax error before or at: __wc > cc: acomp failed for conftest.c > configure:39516: $? = 1 > configure: failed program was: > | /* confdefs.h. 
*/ > *| #define PACKAGE_NAME "R" > > > *| #include > *| > | #ifdef F77_DUMMY_MAIN > | > | # ifdef __cplusplus > | extern "C" > | # endif > |int F77_DUMMY_MAIN() { return 1; } > | > | #endif > *| int > | main () > | { > | #ifndef iswctype > | char *p = (char *) iswctype; > | #endif > | > | ; > | return 0; > | } > configure:39534: result: no > configure:39710: error: Support for MBCS locales is required.* > > I am not sure if this is a Linux issue or if it is a SunStudio issue. Has > anybody tried to compile R on Linux using SunStudio? > > Thanks in advance, > > Russ > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Compiling R on Linux with SunStudio 12.1: "wide-character type" problems (rt)
You can work around this by disabling large file support (configure --disable-largefile). This seems to be another glibc bug. In the header glob.h, there are two lines where the pre-processor fails to check that __GNUC__ is defined, and it isn't defined when using Sun Studio. Evidently, glibc was designed to work with gcc and has not been extensively tested with other compilers, or other vendors have learned to work around the bugs. Martyn On Tue, 2010-02-23 at 16:55 -0600, rt wrote: > Thank you Martyn, > > I am one step closer. Using R-patched, configure was successful. However, > make exited with an error. > > Configure summary: > Installation directory:/usr/local > C compiler:/opt/sun/sunstudio12.1/bin/suncc -g -O -xc99 > -xlibmil -m32 -xlibmieee -nofstore > Fortran 77 compiler: /opt/sun/sunstudio12.1/bin/sunf95 -g -O > -libmil -m32 -nofstore > C++ compiler: /opt/sun/sunstudio12.1/bin/sunCC -g -O > -xlibmil -m32 -xlibmieee -nofstore > Fortran 90/95 compiler:/opt/sun/sunstudio12.1/bin/sunf95 -g -O > -libmil -m32 -nofstore > Obj-C compiler: > Interfaces supported: X11, tcltk > External libraries:readline, ICU, lzma > Additional capabilities: PNG, JPEG, NLS, cairo > Options enabled: shared BLAS, R profiling, Java > Recommended packages: yes > > MAKE error: > make returned an error related to platform.c and glob.h. > It seems that glob.h has a poiter to struct dirent {..}, platorm.c has > struct dirent64 {..}. > Error message: > /opt/sun/sunstudio12.1/bin/suncc > -I../../src/extra -I. -I../../src/include -I../../src/include -I. > -I/opt/sun/sunstudio12.1/prod/include > > -I/opt/sun/sunstudio12.1/prod/include/cc-I/opt/sun/sunstudio12.1/prod/include/cc/sys > > -DHAVE_CONFIG_H -g -g -O -xc99 -xlibmil -m32 -xlibmieee -nofstore > -c platform.c -o platform.o > "/usr/include/glob.h", line 175: identifier redeclared: glob64 > current : function(pointer to const char, int, pointer to function(..) 
> returning int, pointer to struct {unsigned int gl_pathc, pointer to pointer > to char gl_pathv, unsigned int gl_offs, int gl_flags, pointer to > function(..) returning void gl_closedir, pointer to function(..) returning > pointer to struct dirent64 {..} gl_readdir, pointer to function(..) > returning pointer to void gl_opendir, pointer to function(..) returning int > gl_lstat, pointer to function(..) returning int gl_stat}) returning int > previous: function(pointer to const char, int, pointer to function(..) > returning int, pointer to struct {unsigned int gl_pathc, pointer to pointer > to char gl_pathv, unsigned int gl_offs, int gl_flags, pointer to > function(..) returning void gl_closedir, pointer to function(..) returning > pointer to struct dirent {..} gl_readdir, pointer to function(..) returning > pointer to void gl_opendir, pointer to function(..) returning int gl_lstat, > pointer to function(..) returning int gl_stat}) returning int : > "/usr/include/glob.h", line 159 > > My cpu is correctly identified as i386 and I included the flag -m32. Do I > need to specify architecture separately? > > thanks, > > Russ > > > Russ, > > > > This is a known issue with Sun Studio on Linux and was fixed by Brian > > Ripley in January. If you download R-patched.tar.gz from here: > > > > ftp://ftp.stat.math.ethz.ch/Software/R/ > > > > then it should work for you. > > > > Martyn > > > > On Mon, 2010-02-22 at 13:11 -0600, rt wrote: > > > I am trying to compile R on Linux using SunStudio. Configure flags are > > > mostly as suggested in the R install guide. > > >> R install guide also indicates that: "The OS needs to have enough > > support > > > for wide-character types: this is checked at configuration. Specifically, > > > the C99 functionality of headers wchar.h and wctype.h, types wctans_t and > > > mbstate_t and functions mbrtowc, mbstowcs, wcrtomb, wcscoll, wcstombs, > > > wctrans, wctype, and iswctype." 
> > > Configure stops with the following error message: > > > > > > configure:39534: result: no > > > configure:39710: error: Support for MBCS locales is required.* > > > > > > I am not sure if this is a Linux issue or if it is a SunStudio issue. > > Has > > > anybody tried to compile R on Linux using SunStudio? > > > > > > Thanks in advance, > > > > > > Russ > > > > > [[alternative HTML version deleted]] > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R spec file change for building on CentOS 5.2 (PR#12939)
Thanks. We are moving away from the one-size-fits-all spec file for R 2.8.0, but you should still be able to rebuild the RedHat 5 source RPM. On Fri, 2008-09-19 at 20:20 +0200, [EMAIL PROTECTED] wrote: > > I built R 2.7.2 on CentOS 5.2 today. I used the R.spec file posted > on the web site, but I had to change one line to get it to work. > I have attached the diffs from my spec file to the original. > (Otherwise, the build went fine.) > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] handling spaces in R invocation scripts
Hi Nathan, Do you think you could provide a patch without the formatting and style changes? This would be easier to read. Martyn On Mon, 2008-09-22 at 08:41 -0400, Nathan Coulter wrote: > Nathan Coulter wrote: > > > The attached patch, built against the devel snapshot of 2008-09-20, > > attempts to > > Looks like the patch got stripped out. Here's a link to it: > > http://files.pooryorick.com/2006c1c9f854710cc69ceb5477822ff6a884130a > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] error installing gtools (PR#13168)
Ebi, You need to install the R-devel RPM, which has the header files. If this is a bug (which it arguably is) it is a problem with the Fedora distribution, not with R itself. Martyn On Wed, 2008-10-15 at 15:30 +0200, [EMAIL PROTECTED] wrote: > Full_Name: Ebi Hal > Version: R-2.7.2-1 > OS: Fedora core 9 > Submission from: (NULL) (129.215.170.238) > > > I tried to install gplots by running > R> install.packages("gplots") > but the process failed while installing "gtools" with the following error > message: > > * Installing to library '/usr/lib64/R/library' > * Installing *source* package 'gtools' ... > ** libs > gcc -m64 -std=gnu99 -I/usr/include/R -I/usr/local/include-fpic -O2 -g > -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic -c setTCPNoDelay.c -o > setTCPNoDelay.o > setTCPNoDelay.c:1:15: error: R.h: No such file or directory > setTCPNoDelay.c:2:24: error: Rinternals.h: No such file or directory > setTCPNoDelay.c: In function ‘checkStatus’: > setTCPNoDelay.c:66: warning: implicit declaration of function ‘strncpy’ > setTCPNoDelay.c:66: warning: incompatible implicit declaration of built-in > function ‘strncpy’ > setTCPNoDelay.c:72: warning: implicit declaration of function ‘strerror’ > setTCPNoDelay.c:72: warning: passing argument 2 of ‘strncpy’ makes pointer > from > integer without a cast > make: *** [setTCPNoDelay.o] Error 1 > ERROR: compilation failed for package 'gtools' > ** Removing '/usr/lib64/R/library/gtools' > > Indeed there are no such files as R.h and Rinternals.h > > Any idea how can I solve this problem? > > Thanks a lot, > Ebi > > __ > R-devel@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-devel --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] HAVE_BZLIB_H not set
That must be a different problem as this one affects both R 2.7.2 and R 2.8.0 on Fedora 9. When the header is not included, the test program that checks the version of bzlib segfaults. We can fix this by using AC_CHECK_HEADERS instead of AC_CHECK_HEADER when looking for bzlib.h, since the former macro defines the missing variable. Checking this in now. Martyn On Sun, 2008-10-26 at 15:24 -0500, Dirk Eddelbuettel wrote: > On 26 October 2008 at 12:02, "Tom \"spot\" Callaway" wrote: > | When building 2.8.0 this morning for Fedora, I noticed that it was > | building the included bzlib2 source and using it rather than the system > | bzip2 libraries and headers. I tracked down the reason to this section > | of configure: > | > | cat >>conftest.$ac_ext <<_ACEOF > | /* end confdefs.h. */ > | > | #ifdef HAVE_BZLIB_H > | #include > | #endif > | int main() { > | char *ver = BZ2_bzlibVersion(); > | exit(strcmp(ver, "1.0.5") < 0); > | } > | > | _ACEOF > | > | That code wasn't working at all because HAVE_BZLIB_H never gets set > | anywhere, even though the configure script had found the system bzip2 > | bits. This patch adds it to m4/R.m4 and configure, against 2.8.0. With > | the patch, R now properly detects bzip2 1.0.5 in Fedora and uses that > | rather than the local copy. > > We had that problem in Debian with (most of ) the 2.7.* series when R thought > it needed to compile bzip2 support itself -- but it didn't before, and it > does no more since where it works in R 2.8.* and its prereleases as ... > > [EMAIL PROTECTED]:~/src/debian/build-logs$ grep "whether bz" r-base_2.7.* > r-base_2.7.0-1.log:checking whether bzip2 support needs to be compiled... yes > r-base_2.7.0.20080304-1.log:checking whether bzip2 support needs to be > compiled... no > r-base_2.7.0~20080408-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.0~20080415-1.log:checking whether bzip2 support needs to be > compiled... 
yes > r-base_2.7.0~20080416-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.1-1.log:checking whether bzip2 support needs to be compiled... yes > r-base_2.7.1~20080614-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.1~20080621-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.1.20080621-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.1-2.log:checking whether bzip2 support needs to be compiled... yes > r-base_2.7.2-1.log:checking whether bzip2 support needs to be compiled... yes > r-base_2.7.2~20080816-1.log:checking whether bzip2 support needs to be > compiled... yes > r-base_2.7.2-2.log:checking whether bzip2 support needs to be compiled... yes > [EMAIL PROTECTED]:~/src/debian/build-logs$ grep "whether bz" r-base_2.8.* > r-base_2.8.0-1.log:checking whether bzip2 support needs to be compiled... no > r-base_2.8.0~20081005-1.log:checking whether bzip2 support needs to be > compiled... no > r-base_2.8.0~20081006-1.log:checking whether bzip2 support needs to be > compiled... no > r-base_2.8.0~20081013-1.log:checking whether bzip2 support needs to be > compiled... no > r-base_2.8.0.20081013-1.log:checking whether bzip2 support needs to be > compiled... no > [EMAIL PROTECTED]:~/src/debian/build-logs$ > > ... Kurt fixed that in r-devel in mid-July and told me then that the issue > was a > missing link instruction for -lbz2 in the actual test configure runs, rather > than the string comparison as I had conjectured. > > That makes me think that maybe it is not the matter of the #define you > set. But I defer to Kurt on this. > > Cheers, Dirk > --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
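[Editorial note: the difference between the two Autoconf macros Martyn mentions can be sketched as follows. This is based on the Autoconf manual, not R's actual m4 sources.]

```m4
dnl AC_CHECK_HEADER only runs the given actions; it defines no symbol,
dnl so HAVE_BZLIB_H stays unset even when the header is found:
AC_CHECK_HEADER([bzlib.h], [have_bzlib=yes], [have_bzlib=no])

dnl AC_CHECK_HEADERS additionally calls AC_DEFINE for each header found,
dnl so the run test that follows sees HAVE_BZLIB_H and actually includes
dnl <bzlib.h> instead of implicitly declaring BZ2_bzlibVersion():
AC_CHECK_HEADERS([bzlib.h])
```

Without the `#include`, the test program's implicit declaration of `BZ2_bzlibVersion()` returns `int`, which truncates the returned `char *` on 64-bit platforms and explains the segfault.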
Re: [Rd] [R] Building with MKL on Ubuntu
It looks like the em64t version of MKL fails the test for the accuracy of zdotu ("checking whether double complex BLAS can be used") and is therefore dropped in favour of R's built-in BLAS. I have just tested this on Fedora and get the same result. The 32-bit MKL does work for me. Martyn On Wed, 2008-11-05 at 16:29 +, Anand Patil wrote: > On Tue, Nov 4, 2008 at 7:34 PM, <[EMAIL PROTECTED]> wrote: > > > I can see a couple of problems: > > > > 1) No "-lpthread" in --with-blas > > 2) No "-L" prefix in the library path in --with-lapack > > > > In addition, I don't think you need to add -lmkl to --with-lapack, > > although that is probably harmless. > > > > Martyn > > > > Quoting Prof Brian Ripley <[EMAIL PROTECTED]>: > > > > > Look in config.log to see what's wrong. (E.g. is > > > /opt/intel/mkl/10.0.2.018/lib/em64t in the ld.so cache?) > > > > > > And note the warnings in the manual about using --with-lapack: it is most > > > definitely not recommended. > > > > > > R-devel would be a better place to ask questions about this. > > > > > > > Thanks Brian and Martyn, > I've tried it again with two sets of configure options: > > ./configure --with-blas='-L/opt/intel/mkl/10.0.2.018/lib/em64t -lmkl -lguide > -lpthread' --with-lapack='-L/opt/intel/mkl/10.0.2.018/lib/em64t -lmkl_lapack' > --enable-R-shlib > > ./configure --with-blas='-L/opt/intel/mkl/10.0.2.018/lib/em64t -lmkl -lguide > -lpthread' --enable-R-shlib > > In both cases, the result is the same: R builds and uses its own BLAS. I > didn't find anything that looked like evidence that /opt/intel/mkl/ > 10.0.2.018/lib/em64t was getting into the ld.so cache, ie I didn't find > 'ld.so' and 'opt/intel...' close together anywhere. 
Summaries of the BLAS- > or MKL-related bits in config.log follow, with the first set of options: > > configure:36563: checking for dgemm_ in -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -lmkl -lguide -lpthread > configure:36594: gcc -std=gnu99 -o conftest -g -O2 -fpic > -I/usr/local/include -L/usr/local/lib64 conftest.c -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -lmkl -lguide -lpthread -lgfortran -lm -ldl -lm >&5 > conftest.c: In function 'main': > conftest.c:187: warning: implicit declaration of function 'dgemm_' > configure:36600: $? = 0 > configure:36616: result: yes > configure:37408: checking whether double complex BLAS can be used > configure:37481: result: no > ... > BLAS_LIBS0='' > BLAS_LIBS='-L$(R_HOME)/lib$(R_ARCH) -lRblas' > BLAS_SHLIB_FALSE='#' > BLAS_SHLIB_TRUE='' > ... > JAVA_LD_LIBRARY_PATH='$(JAVA_HOME)/lib/amd64/server:$(JAVA_HOME)/lib/amd64:$(JAVA_HOME)/../lib/amd64:/usr/local/lib64::/usr/lib64:/lib64:/usr/local/lib64:/usr/lib:/usr/local/lib:/lib:/opt/intel/fce/10.1.018/lib:/opt/intel/ipp/ > 5.3.4.080/em64t/sharedlib:/opt/intel/cce/10.1.018/lib:/opt/intel/mkl/10.0.2.018/lib/em64t:/usr/java/packages/lib/amd64:/lib:/usr/lib > ' > JAVA_LIBS0='-L$(JAVA_HOME)/lib/amd64/server -L$(JAVA_HOME)/lib/amd64 > -L$(JAVA_HOME)/../lib/amd64 -L/usr/local/lib64 -L -L/usr/lib64 -L/lib64 > -L/usr/local/lib64 -L/usr/lib -L/usr/local/lib -L/lib > -L/opt/intel/fce/10.1.018/lib > -L/opt/intel/ipp/5.3.4.080/em64t/sharedlib-L/opt/intel/cce/10.1.018/lib > -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -L/usr/java/packages/lib/amd64 -L/lib -L/usr/lib -ljvm' > LAPACK_LDFLAGS='' > LAPACK_LIBS='-L$(R_HOME)/lib$(R_ARCH) -lRlapack' > > > and with the second set: > > > configure:36563: checking for dgemm_ in -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -lmkl -lguide -lpthread > configure:36594: gcc -std=gnu99 -o conftest -g -O2 -fpic > -I/usr/local/include -L/usr/local/lib64 conftest.c -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -lmkl -lguide -lpthread -lgfortran -lm -ldl -lm >&5 > conftest.c: In 
function 'main': > conftest.c:187: warning: implicit declaration of function 'dgemm_' > configure:36600: $? = 0 > configure:36616: result: yes > configure:37408: checking whether double complex BLAS can be used > configure:37481: result: no > ... > BLAS_LIBS0='' > BLAS_LIBS='-L$(R_HOME)/lib$(R_ARCH) -lRblas' > BLAS_SHLIB_FALSE='#' > BLAS_SHLIB_TRUE='' > ... > JAVA_LD_LIBRARY_PATH='$(JAVA_HOME)/lib/amd64/server:$(JAVA_HOME)/lib/amd64:$(JAVA_HOME)/../lib/amd64:/usr/local/lib64::/usr/lib64:/lib64:/usr/local/lib64:/usr/lib:/usr/local/lib:/lib:/opt/intel/fce/10.1.018/lib:/opt/intel/ipp/ > 5.3.4.080/em64t/sharedlib:/opt/intel/cce/10.1.018/lib:/opt/intel/mkl/10.0.2.018/lib/em64t:/usr/java/packages/lib/amd64:/lib:/usr/lib > ' > JAVA_LIBS0='-L$(JAVA_HOME)/lib/amd64/server -L$(JAVA_HOME)/lib/amd64 > -L$(JAVA_HOME)/../lib/amd64 -L/usr/local/lib64 -L -L/usr/lib64 -L/lib64 > -L/usr/local/lib64 -L/usr/lib -L/usr/local/lib -L/lib > -L/opt/intel/fce/10.1.018/lib > -L/opt/intel/ipp/5.3.4.080/em64t/sharedlib-L/opt/intel/cce/10.1.018/lib > -L/opt/intel/mkl/ > 10.0.2.018/lib/em64t -L/usr/java/packages/lib/amd64 -L/lib -L/usr/lib -ljvm' > LAPACK_LDFLAGS='' > LAPACK_LIBS='-L$(R_HOME)/lib$(R_ARCH) -lRlapack' > > > Do either of these give us clues as to what's wrong? > > Th
Re: [Rd] [R] "Error: bad value" problem
This has all the hallmarks of a bug I found and fixed in R-devel (r46998). I did not port the patch over to the R release branch because I could not reproduce the bug. In R-devel, I was seeing problems with "make test-Segfault". This would occasionally segfault, but most of the time would create the "bad value" error, and of course would also run perfectly fine a lot of the time. The error came from exactly the same place that Ben found. It was due to an invalid SrcRefs being used because SrcFile is not set to zero when it should be. I'll have a look and see if it is the same problem, or a close cousin. Martyn On Wed, 2008-12-17 at 22:07 -0500, Duncan Murdoch wrote: > On 17/12/2008 9:47 PM, Duncan Murdoch wrote: > > On 17/12/2008 8:56 PM, Peter Dalgaard wrote: > >> Ben Bolker wrote: > >>> I can get the errors to happen on Ubuntu 8.10 with R --vanilla > >>> (*without* > >>> valgrind) -- but > >>> editing momfit.r line 742 so that plot.progress=FALSE seems to make the > >>> problem go away. (This was a lucky guess, it looked like there was > >>> something > >>> odd going on with the plots.) > >>> > >>> Hope that helps someone ... > >> Probably not. The problem is to reproduce the error state in a way so > >> that we can understand what is causing it. > >> > >> I can debug this to > >> (gdb) bt > >> #0 Rf_error (format=0x8220c65 "bad value") at > >> ../../../R/src/main/errors.c:704 > >> #1 0x0805a924 in SETCDR (x=0x8f89348, y=0x9b276e8) > >> at ../../../R/src/main/memory.c:2728 > >> #2 0x0819fa46 in GrowList (l=0x951e8f4, s=) at > >> gram.y:958 > >> #3 0x081a2a7b in xxvalue (v=0x8f89348, k=4, lloc=) > >> at gram.y:440 > >> > >> and the problem in GrowList is that CAR(l) is R_NilValue (==0x8f89348), > >> which supposedly "cannot happen", and the thing that calls GrowList is > >> something with srcrefs (DuncanM?). > >> > >> Digging deeper probably has to wait till the weekend for my part. 
(The > >> natural next step is figuring out how the R_NilValue got into that > >> location, but I should try to sleep off this cold) > >> > >> I'm CCing r-devel on this. Can we move the discussion there? > > > > I can probably take a look tomorrow. I wasn't getting an error, but > > maybe I'll see the same corruption if I watch it run. > > I had time to see if I was getting a NilValue there tonight, and the > answer was no, with the Windows RC. I don't get the error in any > version I've tried on Windows, though I can see it in 2.8.0 on MacOSX. > > Duncan > > > > Duncan Murdoch > > > >> > >> > >>> Ben Bolker > >>> > sessionInfo() > >>> R version 2.8.0 (2008-10-20) > >>> i486-pc-linux-gnu > >>> > >>> locale: > >>> LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C > >>> > >>> attached base packages: > >>> [1] stats graphics grDevices utils datasets methods base > >> > > > > __ > > R-devel@r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-devel > > __ > r-h...@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. --- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] [R] "Error: bad value" problem
On Thu, 2008-12-18 at 10:57 +0100, Peter Dalgaard wrote: > Martyn Plummer wrote: > > This has all the hallmarks of a bug I found and fixed in R-devel > > (r46998). I did not port the patch over to the R release branch because > > I could not reproduce the bug. > > > > In R-devel, I was seeing problems with "make test-Segfault". This would > > occasionally segfault, but most of the time would create the "bad value" > > error, and of course would also run perfectly fine a lot of the time. > > The error came from exactly the same place that Ben found. It was due > > to an invalid SrcRefs being used because SrcFile is not set to zero when > > it should be. > > > > I'll have a look and see if it is the same problem, or a close cousin. > > Yes, this does appear to fix the issue. (Did you forget a NEWS file > entry, though?) You mean this? * Fixed obscure, poorly reproducible bug that nobody else has reported. Anyway, it's the same bug. There is a call to parse() in plot.mmfit() which generates a parse error. This has been wrapped in a silent call to try, so you never see the error message: Error in parse(text = colnames(z)[seq(1, 2 * np, 2)]) : unexpected numeric constant in "Parameter 1" This error leaves the parser in a bad state. > It's a two-line change fixing a live bug, so it could go in 2.8.1 even > in code freeze. The problem is that it is inside gram.y which will > trigger some maintainer-mode activity, and did trip up my Fedora laptop > slightly. H > > -p Fedora 10 certainly won't let me do a maintainer mode build. I leave it in your hands. M. > > Martyn > > > > > > On Wed, 2008-12-17 at 22:07 -0500, Duncan Murdoch wrote: > >> On 17/12/2008 9:47 PM, Duncan Murdoch wrote: > >>> On 17/12/2008 8:56 PM, Peter Dalgaard wrote: > >>>> Ben Bolker wrote: > >>>>> I can get the errors to happen on Ubuntu 8.10 with R --vanilla > >>>>> (*without* > >>>>> valgrind) -- but > >>>>> editing momfit.r line 742 so that plot.progress=FALSE seems to make the > >>>>> problem go away. 
(This was a lucky guess, it looked like there was > >>>>> something > >>>>> odd going on with the plots.) > >>>>> > >>>>> Hope that helps someone ... > >>>> Probably not. The problem is to reproduce the error state in a way so > >>>> that we can understand what is causing it. > >>>> > >>>> I can debug this to > >>>> (gdb) bt > >>>> #0 Rf_error (format=0x8220c65 "bad value") at > >>>> ../../../R/src/main/errors.c:704 > >>>> #1 0x0805a924 in SETCDR (x=0x8f89348, y=0x9b276e8) > >>>> at ../../../R/src/main/memory.c:2728 > >>>> #2 0x0819fa46 in GrowList (l=0x951e8f4, s=) at > >>>> gram.y:958 > >>>> #3 0x081a2a7b in xxvalue (v=0x8f89348, k=4, lloc=) > >>>> at gram.y:440 > >>>> > >>>> and the problem in GrowList is that CAR(l) is R_NilValue (==0x8f89348), > >>>> which supposedly "cannot happen", and the thing that calls GrowList is > >>>> something with srcrefs (DuncanM?). > >>>> > >>>> Digging deeper probably has to wait till the weekend for my part. (The > >>>> natural next step is figuring out how the R_NilValue got into that > >>>> location, but I should try to sleep off this cold) > >>>> > >>>> I'm CCing r-devel on this. Can we move the discussion there? > >>> I can probably take a look tomorrow. I wasn't getting an error, but > >>> maybe I'll see the same corruption if I watch it run. > >> I had time to see if I was getting a NilValue there tonight, and the > >> answer was no, with the Windows RC. I don't get the error in any > >> version I've tried on Windows, though I can see it in 2.8.0 on MacOSX. > >> > >> Duncan > >>> Duncan Murdoch > >>> > >>>> > >>>>> Ben Bolker > >>>>> > >>>>>> sessionInfo() > >>>>> R version 2.8.0 (2008-10-20) > >>>>> i486-pc-linux-gnu > >>>>> > >>>>> locale: > >>>>> LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPH
Re: [Rd] [R] R with MKL
On Tue, 2009-03-17 at 12:12 +0900, Ei-ji Nakama wrote: > Hi > > > I have seen a lot of problems from people trying to compile R with > > MKL. So I am writing my experience in case it helps and to ask one > > question. I installed R-2.8.1.patched in Ubuntu 9.04 (gcc 4.3.3) using > > MKL 10.1.1.019. > > Do you use gcc and gfortran? > > > I configured correctly (following MKL userguide) with : > > > > sudo ./configure --with-blas="-I/opt/intel/mkl/10.1.1.019/include > > -L/opt/intel/mkl/10.1.1.019/lib/em64t -lmkl_intel_lp64 > > -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" > > --with-lapack="-I/opt/intel/mkl/10.1.1.019/include > > -L/opt/intel/mkl/10.1.1.019/lib/em64t -lmkl_intel_lp64 > > -lmkl_intel_thread -lmkl_core -liomp5 -lpthread" > > cited reference https://svn.r-project.org/R/trunk/doc/manual/R-admin.texi > | You are strongly encouraged to read the MKL User's Guide > | > | @example > | MKL=" -...@{mkl_lib_path@} \ > | -Wl,--start-group \ > | $...@{mkl_lib_path@}/libmkl_gf_lp64.a\ > | $...@{mkl_lib_path@}/libmkl_gnu_thread.a \ > | $...@{mkl_lib_path@}/libmkl_core.a \ > | -Wl,--end-group \ > | -liomp5 -lpthread" > | @end example > > However, It is a little different.( -lgomp and configure line) > > MKL=" -...@{mkl_lib_path@} \ > -Wl,--start-group \ > $...@{mkl_lib_path@}/libmkl_gf_lp64.a\ > $...@{mkl_lib_path@}/libmkl_gnu_thread.a \ > $...@{mkl_lib_path@}/libmkl_core.a \ > -Wl,--end-group \ > -lgomp -lpthread" > ./configure --with-blas="$MKL" --with-lapack="$MKL" Yes I see. If you are statically linking to MKL, you want to link to the GNU OMP runtime for portability. Sorry about that. > > But in order to compile had to edit src/modules/lapack/vecLibg95c.c > > and comment out the include. Weird, since I am not building for Mac. > > Please note the thing that ABI of fortran is different with Intel compiler > and GNU compiler. > difficult to detect the mistake. 
--- This message and its attachments are strictly confidenti...{{dropped:8}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] [R] Date Format
I moved this to R-devel because I am wondering why the base package does not allow you to convert from numeric to Date. Could we not have something like this? as.Date.numeric <- function(x, epoch="1970-01-01", ...) { if (!is.character(epoch) || length(epoch) != 1) stop("invalid epoch") as.Date(epoch, ...) + x } Martyn On Tue, 2006-07-11 at 12:58 -0400, Gabor Grothendieck wrote: > Try this: > > library(zoo) > as.Date(11328) > > See the Help Desk article in R News 4/1 for more on dates. > > > On 7/11/06, pierre clauss <[EMAIL PROTECTED]> wrote: > > Hi everybody, > > I need your precious help for, I think, a simple request, but I do not > > manage to solve this. > > > > When I use a "table" function with dates in the rows, the rows are coerced > > to number after the table function. > > > > So I need to transform the row names into date format. But I do not manage. > > > > Therefore, for an example, I manage to write this : > > > > datetest<-"06/01/2001" > > datetest<-as.Date(datetest,"%d/%m/%Y") > > datetest<-as.numeric(datetest) > > > > to get 11328. > > > > But I do not obtain the inverse tranformation : > > > > datetest<-as.Date(datetest,"%d/%m/%Y") > > > > How do we get this please ? > > > > Thanks a lot for your solution. > > Pierre. > > > > --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
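[Editorial note: a usage sketch of the method proposed above, showing that it inverts the conversion in Pierre's example. The definition is repeated from the message so the snippet is self-contained.]

```r
as.Date.numeric <- function(x, epoch = "1970-01-01", ...) {
    if (!is.character(epoch) || length(epoch) != 1)
        stop("invalid epoch")
    as.Date(epoch, ...) + x
}

## S3 dispatch now sends a numeric argument to the new method:
as.Date(11328)                     # "2001-01-06"
as.numeric(as.Date("2001-01-06"))  # 11328, so the round trip closes
```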
Re: [Rd] [R] Date Format
On Tue, 2006-07-11 at 20:05 +0200, Peter Dalgaard wrote: > Martyn Plummer <[EMAIL PROTECTED]> writes: > > > I moved this to R-devel because I am wondering why the base package does > > not allow you to convert from numeric to Date. Could we not have > > something like this? > > > > as.Date.numeric <- function(x, epoch="1970-01-01", ...) { > >if (!is.character(epoch) || length(epoch) != 1) > > stop("invalid epoch") > >as.Date(epoch, ...) + x > > } > > We could, but you might as well do it explicitly. There's something to > be said for not confusing the concept of dates with a particular > implementation, which is effectively what happens if you can convert > them to and from numeric too seamlessly. Currently you can easily convert one way, but not the other. I just find that a bit odd. Pierre's problem was that his Date objects were converted internally by some function. His first instinct, to use as.Date to convert them back, was, I think, correct. But that doesn't work. So now we say "You have to understand how Date objects are implemented to get your dates back"? I don't know about that. > I'm more perplexed by the failure of adding difftimes to dates: > > > as.Date("2006-1-1") + (as.Date("2006-1-1") - as.Date("2006-1-2")) > [1] "2005-12-31" > Warning message: > Incompatible methods ("+.Date", "Ops.difftime") for "+" > > and if you have a difftime in non-days units, you'll actually get a > wrong result: > > > D1 <- as.Date("2006-1-1") > > D2 <- as.Date("2006-1-2") > > difftime(D2,D1,units="hours") > Time difference of 24 hours > > dd <- difftime(D2,D1,units="hours") > > D1+dd > [1] "2006-01-25" > Warning message: > Incompatible methods ("+.Date", "Ops.difftime") for "+" [I raised this problem earlier in private discussions with Peter] It certainly is perplexing. There is code in "+.Date" that correctly handles the case where the second argument is a difftime. But it will never get called! I wonder if it ever worked. 
The warning is coming from DispatchGroup (in eval.c). When it finds different methods for two arguments of a binary group generic, it gives up and the default method is called - in this case R_binary in arithmetic.c - which is why the results depends on the implementation of the difftime object. I guessed that this was a limitation of S3 generics, and I suppose I was right. To allow mixing arguments of two classes, you would need code in Ops.foo to handle objects of class bar *and* vice versa. It's a bad idea to have two separate bits of code to do the same job, so I can't fault the logic of forbidding this, but it does leave us with some usability problems. While we are on the topic, is there no function to convert a difftime object from one time scale to another? I found a couple of private functions, but nothing public. Martyn > > On Tue, 2006-07-11 at 12:58 -0400, Gabor Grothendieck wrote: > > > Try this: > > > > > > library(zoo) > > > as.Date(11328) > > > > > > See the Help Desk article in R News 4/1 for more on dates. > > > > > > > > > On 7/11/06, pierre clauss <[EMAIL PROTECTED]> wrote: > > > > Hi everybody, > > > > I need your precious help for, I think, a simple request, but I do not > > > > manage to solve this. > > > > > > > > When I use a "table" function with dates in the rows, the rows are > > > > coerced to number after the table function. > > > > > > > > So I need to transform the row names into date format. But I do not > > > > manage. > > > > > > > > Therefore, for an example, I manage to write this : > > > > > > > > datetest<-"06/01/2001" > > > > datetest<-as.Date(datetest,"%d/%m/%Y") > > > > datetest<-as.numeric(datetest) > > > > > > > > to get 11328. > > > > > > > > But I do not obtain the inverse tranformation : > > > > > > > > datetest<-as.Date(datetest,"%d/%m/%Y") > > > > > > > > How do we get this please ? > > > > > > > > Thanks a lot for your solution. > > > > Pierre. 
> > > > > > > > > > > > --- > > This message and its attachments are strictly confidential. ...{{dropped}} > > > > __ > > R-devel@r-project.org mailing list > > https://stat.ethz.ch/mailman/listinfo/r-devel > > > --- This message and its attachments are strictly confidential. ...{{dropped}} __ R-devel@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-devel
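[Editorial note on the final question above: base R does expose a public way to change a difftime's time scale, via the units() replacement function documented in ?difftime. A sketch using the thread's own example:]

```r
D1 <- as.Date("2006-1-1")
D2 <- as.Date("2006-1-2")
dd <- difftime(D2, D1, units = "hours")
dd                   # Time difference of 24 hours
units(dd) <- "days"  # convert the time scale in place
dd                   # Time difference of 1 days
```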