[Rd] Cannot make 3 different R installations for 3 different valgrind-instrumentation levels
"R Installation and Administration", section 2.5 "Sub-architectures", describes calling specific builds of R with "R --arch=name". I am trying to build and install three versions of R-2.9.1, each configured with a different valgrind-instrumentation level ("Writing R Extensions", section 4.3.2 "Using valgrind"). My goal is to be able to choose which install I launch, so that I can choose among the valgrind levels with commands such as

  R --arch=valgrind0 -d valgrind --vanilla < myPkg.R
  R --arch=valgrind1 -d valgrind --vanilla < myPkg.R
  R --arch=valgrind2 -d valgrind --vanilla < myPkg.R

R-2.9.1.tar.gz is unpacked to /usr/local/src/R-2.9.1 and I have tried executing (several variations of) the following from that directory:

  sudo ./configure --enable-memory-profiling --with-valgrind-instrumentation=2 r_arch=vg2 --with-recommended-packages=no
  sudo make
  sudo make install rhome=/usr/local/lib/R-2.9.1_vg2
  sudo ./configure --enable-memory-profiling --with-valgrind-instrumentation=1 r_arch=vg1 --with-recommended-packages=no
  sudo make
  sudo make install rhome=/usr/local/lib/R-2.9.1_vg1
  sudo ./configure --enable-memory-profiling --with-valgrind-instrumentation=0 r_arch=vg0 --with-recommended-packages=no
  sudo make
  sudo make install rhome=/usr/local/lib/R-2.9.1_vg0

Using this approach, the second "sudo make" results in an error. I also tried installing from three separate copies of the R-2.9.1 directory, i.e. from

  /usr/local/src/R-2.9.1_valgrind0
  /usr/local/src/R-2.9.1_valgrind1
  /usr/local/src/R-2.9.1_valgrind2

With this setup the above code runs without error, and the shell command "R --arch=valgrind0" works (level zero being the level used in the most recent install), but the shell commands "R --arch=valgrind1" and "R --arch=valgrind2" result in error messages. I can't figure out how to get this working, although it seems perfectly natural to want to do this.
It would be great if someone could point me in the right direction, and perhaps a workable example could be added to the "R Installation and Administration" document too. I am running a recent install of Ubuntu (Jaunty) on a 32-bit laptop with all previous R installations removed/uninstalled.

thanks
David

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] Cannot make 3 different R installations for 3 different valgrind-instrumentation levels
> You're confusing rhome and arch - the above makes no sense. Let rhome
> alone and you should be fine. (And make sure you're not building in the
> source tree - you should be using something like
> mkdir obj_vg0 && cd obj_vg0 && ../R-2.9.1/configure ...)

Thanks Simon. I'd already tried various variations with rhome and arch, but I think my biggest error was that I was always compiling from within the source tree. However, the following script still doesn't quite do the job:

  cd /usr/local/lib
  mkdir R-2.9.2_vg0
  mkdir R-2.9.2_vg1
  mkdir R-2.9.2_vg2
  cd /usr/local/lib/R-2.9.2_vg2
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=2 r_arch=vg2
  sudo make
  sudo make install
  cd /usr/local/lib/R-2.9.2_vg1
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=1 r_arch=vg1
  sudo make
  sudo make install
  cd /usr/local/lib/R-2.9.2_vg0
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=0 r_arch=vg0
  sudo make
  sudo make latex
  sudo make dvi
  sudo make pdf
  sudo make info
  sudo make help
  sudo make html
  sudo make uninstall
  sudo make install

Now the command "R --arch=vg0" executes fine, but...

  da...@david > R --arch=vg1
  /usr/local/bin/R: line 230: /usr/local/lib/R/bin/exec/vg1/R: No such file or directory
  /usr/local/bin/R: line 230: exec: /usr/local/lib/R/bin/exec/vg1/R: cannot execute: No such file or directory
  da...@david > R --arch=vg2
  /usr/local/bin/R: line 230: /usr/local/lib/R/bin/exec/vg2/R: No such file or directory
  /usr/local/bin/R: line 230: exec: /usr/local/lib/R/bin/exec/vg2/R: cannot execute: No such file or directory

I suspect installing to separate _vg0, _vg1 and _vg2 directories could be the problem. I'll retry and post the results.

David
Re: [Rd] Cannot make 3 different R installations for 3 different valgrind-instrumentation levels
> I suspect installing to separate _vg0, _vg1 and _vg2 directories could
> be the problem. I'll retry and post the results.

Well, the following script didn't work:

  cd /usr/local/lib
  sudo mkdir R
  cd /usr/local/lib/R
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=2 r_arch=vg2
  sudo make
  sudo make install
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=1 r_arch=vg1
  sudo make
  sudo make install
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=0 r_arch=vg0
  sudo make
  sudo make latex
  sudo make dvi
  sudo make pdf
  sudo make info
  sudo make help
  sudo make html
  sudo make install

The relevant lines of config.log appear to be:

  configure:5546: gcc -E -I/usr/local/include conftest.c
  conftest.c:16:28: error: ac_nonexistent.h: No such file or directory
  configure:5552: $? = 1
  configure: failed program was:
  | /* confdefs.h. */
  | #define PACKAGE_NAME "R"
  | #define PACKAGE_TARNAME "R"
  | #define PACKAGE_VERSION "2.9.2"
  | #define PACKAGE_STRING "R 2.9.2"
  | #define PACKAGE_BUGREPORT "r-b...@r-project.org"
  | #define PACKAGE "R"
  | #define VERSION "2.9.2"
  | #define R_PLATFORM "i686-pc-linux-gnu"
  | #define R_CPU "i686"
  | #define R_VENDOR "pc"
  | #define R_OS "linux-gnu"
  | #define Unix 1
  | #define R_ARCH "vg2"
  | /* end confdefs.h. */
  | #include
  configure:5585: result: gcc -E
  configure:5614: gcc -E -I/usr/local/include conftest.c
  configure:5620: $? = 0
  configure:5651: gcc -E -I/usr/local/include conftest.c
  conftest.c:16:28: error: ac_nonexistent.h: No such file or directory
  configure:5657: $? = 1
  configure: failed program was:
  | /* confdefs.h. */

Any ideas?

David
Re: [Rd] Cannot make 3 different R installations for 3 different valgrind-instrumentation levels
> ^^-- this is really bad form - you should never build/compile software
> as root. The location of the build directory is irrelevant so use /tmp
> or your home or something like that ... (usually the fastest disk ;))

OK, thanks for the tip, although I don't understand the configure - make - install cycle enough to see how compiling as non-root and installing as root is any safer than just doing both as root.

> Why would you do all that?

Well, when you've been trying for hours to get something to work and it doesn't, and there are no workable examples on the net... is it too much to ask that a workable example be placed in the documentation?

Finally I have a script that has worked for me (written before I saw Simon's comment about root):

  cd /usr/local/lib
  umask 022
  mkdir R-2.9.2
  mkdir build_vg0
  mkdir build_vg1
  mkdir build_vg2
  cd /usr/local/lib/build_vg2
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=2 r_arch=vg2
  sudo make
  sudo make prefix=/usr/local/lib/R-2.9.2 install
  cd /usr/local/lib/build_vg1
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=1 r_arch=vg1
  sudo make
  sudo make prefix=/usr/local/lib/R-2.9.2 install
  cd /usr/local/lib/build_vg0
  sudo /usr/local/src/R-2.9.2/configure --enable-memory-profiling --with-valgrind-instrumentation=0 r_arch=vg0
  sudo make
  sudo make latex
  sudo make dvi
  sudo make pdf
  sudo make info
  sudo make help
  sudo make html
  sudo make prefix=/usr/local/lib/R-2.9.2 install
  sudo cp /usr/local/lib/R-2.9.2/bin/R /usr/local/bin/R

IMHO, if the surplus newbie chaff was whittled off, this example might be useful to others in the official documentation.

David
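Distilled from the working script above, the whole cycle can be sketched as a loop: one shared prefix, one out-of-tree build directory per instrumentation level. This is only a dry run that prints the commands rather than executing them (building R takes a while, and the paths are the ones used above and should be adjusted to taste):

```shell
#!/bin/sh
# Dry-run sketch of the three-architecture build. Nothing is executed;
# the commands are only printed for inspection.
SRC=/usr/local/src/R-2.9.2          # unpacked R source tree (as above)
PREFIX=/usr/local/lib/R-2.9.2       # single shared installation prefix

for LEVEL in 0 1 2; do
  ARCH="vg$LEVEL"
  echo "mkdir -p build_$ARCH && cd build_$ARCH"
  echo "$SRC/configure --enable-memory-profiling \\"
  echo "    --with-valgrind-instrumentation=$LEVEL r_arch=$ARCH"
  echo "make"
  echo "make prefix=$PREFIX install   # this step as root"
  echo "cd .."
done
```

All three installs then share one R_HOME, and a given level is selected at run time with, e.g., R --arch=vg2 -d valgrind.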
[Rd] enabling core dumps
"Writing R Extensions" says:

  If you have a crash which gives a core dump you can use something like
  gdb /path/to/R/bin/exec/R core.12345
  to examine the core dump. If core dumps are disabled...

Sadly it doesn't go on to say how to enable core dumps if they are disabled. I understand that in bash I need to do

  $ ulimit -c unlimited

but this doesn't seem to be enough; I still don't find a core file despite

  *** caught segfault ***
  address 0x2028, cause 'memory not mapped'
  Possible actions:
  1: abort (with core dump)
  2: normal R exit
  3: exit R without saving workspace
  4: exit R saving workspace
  Selection: 1

I am running Ubuntu Jaunty on a laptop. Any ideas as to what I might need to configure next?

thanks
David

--
David Pleydell
UMR BGPI
CIRAD TA A-54/K
Campus International de Baillarguet
34398 MONTPELLIER CEDEX 5
FRANCE
Tel: +33 4 99 62 48 65 - Secrétariat : +33 4 99 62 48 21
Fax : +33 4 99 62 48 22
http://umr-bgpi.cirad.fr/trombinoscope/pleydell_d.htm
https://sites.google.com/site/drjpleydell/
Re: [Rd] enabling core dumps
I forgot to add that I am compiling with

  R CMD SHLIB buggyCode.c --ggdb

thanks
David

Quoting pleyd...@supagro.inra.fr: [...]
Re: [Rd] enabling core dumps
> not really answering your question, but I find it more useful to
>
>   R -d gdb
> or
>   R -d gdb -f test.R
>
> where test.R reproduces the bug in some minimal code. A variant is
>
>   R -d valgrind -f test.R
>
> if the memory problem is not easy to spot.

Thanks for your reply Martin. Yes, I have used that route before; I have also been playing with the emacs "M-x gdb" option as described in the R FAQ. But having no first-hand experience with core dumps, I am curious why you prefer the -d flag route. Perhaps I'm wrong, but I thought examining a core dump enables you to backtrace from the moment things went wrong; this seems to be a useful trick to have...

... if you manage to enable the core dump option, that is.

cheers
David
Re: [Rd] enabling core dumps
To answer my own question: my mistake was that "ulimit -c unlimited" applies to the current bash session only. I had used this call in a bash *shell* buffer in emacs, but this was unable to affect R processes started in emacs with C-u M-x R, hence no core files. Running the buggy code from R started in a bash shell after running ulimit resulted in a core file being generated in the R working directory. It's not the cleanest of emacs solutions, but at least it works.

David
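For the archive, the session-scoped behaviour can be seen directly in the shell. A minimal sketch (the R invocation is commented out, and the script name buggyCode.R is hypothetical):

```shell
# ulimit -c sets the soft core-size limit for this shell only; child
# processes (such as an R started from this same shell) inherit it.
ulimit -c unlimited
ulimit -c                      # reports the new limit: "unlimited"

# R --vanilla < buggyCode.R    # a segfault here may now leave a core
                               # file in the working directory
```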
Re: [Rd] enabling core dumps
> usually what happens is (# meant to be a comment char)
>
>   % R -d gdb -f test.R
>   gdb> run
>   ...segfault happens, breaks into gdb
>   gdb> bt      # print the backtrace
>   gdb> up      # move up the stack, to get to 'your' frame
>   gdb> l       # show source listing (use -O0 compiler flag; see gdb> help dir)
>   gdb> print some_suspect_variable
>   gdb> call Rf_PrintValue(some_suspect_sexp)
>   gdb> break suspect_function
>   gdb> run     # restart script, but break at suspect_function
>
> and so on, i.e., you've got all the info you need. A neat trick is to
> leave gdb running, repair and R CMD SHLIB your C code, return to gdb
> and gdb> run to restart the same script but using the new shared lib
> (possibly preserving breakpoints and other debugging info you'd used in
> previous sessions). I'm a heavy emacs user but find it easier to stick
> with gdb from the shell -- one less layer to get in the way, when I'm
> confused enough as it is.

Wow! Thanks for the detailed reply, your approach makes perfect sense...

... especially given that my core file was for some unknown reason 0 bytes, which gdb didn't find too funny.

cheers
David
[Rd] Unexpected behaviour of "next line" in gdb
I compile some C code using

  R CMD SHLIB buggyCode.c --ggdb --O0

I thought the -O0 flag would avoid gdb ~ optimisation problems, but each time I step through the code in gdb using "n" (next line), the executed line order is not as expected (it "looks" more like a random walk around the expected next line). So to me it looks like there is still a gdb ~ optimisation conflict. Does this mean I have to have a copy of R compiled from source using the -O0 flag too?

thanks
David
Re: [Rd] Unexpected behaviour of "next line" in gdb
Looking through the archives I found this thread

  http://tolstoy.newcastle.edu.au/R/e4/devel/08/02/0347.html

and so I tried

  MAKEFLAGS="CFLAGS=-g -O" R CMD SHLIB ...

and this worked great.
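An alternative to prefixing MAKEFLAGS onto every call is a user-level Makevars file, which R CMD SHLIB and R CMD INSTALL read automatically (this mechanism is described in "Writing R Extensions"). A sketch, using the debugging flags discussed in this thread:

```make
## ~/.R/Makevars -- picked up by R CMD SHLIB / R CMD INSTALL
## (use -O instead of -O0 if your gcc/gdb combination prefers it)
CFLAGS = -g -O0
```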
[Rd] Building R package with .c sub-routine files
Let's say I have two source files, file1.c and file2.c. The latter just contains sub-routines to be used by the first, i.e. in file1.c I have the line

  #include "file2.c"

Let's say "R CMD SHLIB file1.c" runs perfectly and I want to include the code in a package. "R CMD build" also runs fine, but "R CMD check" gives

  * checking whether package 'myPackage' can be installed ... ERROR
  Installation failed.
  See '/pathto/myPackage.Rcheck/00install.out' for details.

Basically the compiler is trying to compile file2.c independently of file1.c, which is not what I want and prevents a proper build. What's the easiest way to enforce the correct file dependencies when building R packages?

cheers
David
Re: [Rd] Building R package with .c sub-routine files
> Create a file named Makevars in the same directory and put the
> following line in it:
>
>   OBJECTS=file1.o
>
> Then R CMD SHLIB will only compile file1.c.
> Kjell

Great, that's done the job nicely. Many thanks

David
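For later readers, the resulting src/Makevars would contain just that one line (file names as in the hypothetical example above):

```make
## src/Makevars: link only file1.o into the package's shared library;
## file2.c is compiled into that object via #include "file2.c" in file1.c
OBJECTS = file1.o
```

The more conventional layout is to give file2.c a header of prototypes (say file2.h), #include that instead of the .c file, and let both source files be compiled and linked as separate objects, in which case no Makevars is needed at all.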
[Rd] Strange behaviour with global variable in C
I understand global variables can be a bad idea, but even so I would like to understand what is going on here...

### DESCRIPTION OF PROGRAM ###

...I have a strange bug on a global variable in some C code which I am compiling using

  $ MAKEFLAGS="CFLAGS=-g -O0" R CMD SHLIB myProgram.c

The global variable in question is the log likelihood. In an old version of the program I initialised this as

  double loglik = -999;

and in current versions I initialise this as

  double loglik = 0.0;

and long sequences of 9s do not appear anywhere in the program now (I confirmed this using the grep command in bash). A function called update_loglik() exists in the file loglik.c, and so myProgram.c includes the line

  #include "loglik.c"

prior to the main() function. The first line in the function update_loglik() is

  loglik = 0.0;

and then later in the function there is a loop containing the line

  loglik += some_value_corresponding_to_one_observation;

### DESCRIPTION OF BUG ###

If I add printf("%f", loglik) at the second line of update_loglik() it prints "0.0", BUT if I set a breakpoint here and execute

  (gdb) p loglik

in gdb it prints -999. Worse, the value being added to in the loop of update_loglik() is neither that being printed by printf nor that being printed by gdb. Moreover, if I put update_loglik() into a loop and printf the values, I get

  itter 0 loglik -1242105.108051
  itter 1 loglik -602880.293985
  itter 2 loglik -590470.733006
  itter 3 loglik -578061.172026
  itter 4 loglik -565651.611046
  itter 5 loglik -553242.050066
  itter 6 loglik -540832.489086
  itter 7 loglik -528422.928106

### A CLUE ###

This is clearly a pointer problem; in fact I believe gdb gives us a good clue:

  (gdb) b loglik.c:100
  Breakpoint 3 at 0xb7a2eba4: file loglik.c, line 100. (2 locations)
  (gdb) i b
  Num  Type        Disp Enb  Address     What
  3    breakpoint  keep y    0xb7a2eba4
  3.1                   y    0xb7a2eba4  in update_loglik at loglik.c:100
  3.2                   y    0xb7a2895a  in update_loglik at loglik.c:100

For some reason gdb associates this breakpoint with two addresses (line 100, by the way, is where I try to set loglik to 0.0, described above). I should perhaps also add that my project is subject to bzr (bazaar) version control, so I wonder if this almost ghost-like resurrection of the -999 is due to either gcc or gdb confusing information from current bzr versions with previous bzr versions. This resurrection occurs even after turning off the computer and rebooting, so it shouldn't be to do with memory leaks.

So why is gdb giving multiple addresses for a single line breakpoint, and why gdb's ghostly resurrection of a long line of 9s even after a reboot?

thanks
David
Re: [Rd] Strange behaviour with global variable in C
To answer my own question... I have two copies of my program:

1) a working copy stored in $PROJECT/analysis/c
2) a packaged copy stored in $PROJECT/analysis/myPackage_1.0.2.tar.gz

I have been running a script which does the following:

  library(myPackage)
  load(myData)
  detach("package:myPackage")
  dyn.load("$PROJECT/analysis/c/myProgram.so")
  .C("main", ...)

The "resurrection" comes from values of loglik set in the packaged version and not used in the working version. I had assumed detach would remove all traces of the package, but this is clearly wrong. I wonder what other developers do when they want to switch from working with a packaged version to a working version without restarting R and without generating conflicts between the two? (Perhaps I can find that in documentation somewhere...)

David
Re: [Rd] enabling core dumps
> usually what happens is (# meant to be a comment char)
>   % R -d gdb -f test.R
>   gdb> run
> [...]

To continue a slightly old thread...

... If I launch gdb this way, I don't have any means to navigate through previously executed gdb lines using M-p and M-n, but following a (gdb) run or (gdb) continue I can use M-p and M-n to recall previous R commands. If I launch gdb in other ways, M-p and M-n function as expected. I suppose there is a confusion between gdb's M-p and ESS's M-p, so that M-p only functions in the ESS session. I quite like running gdb using R's -d flag, but not being able to navigate through the line history is sub-optimal. I'd be interested to hear if other people running gdb this way have encountered this problem and how they resolved it.

thanks
David
Re: [Rd] enabling core dumps
> ... If I launch gdb this way I don't have any means to navigate
> through previously executed gdb lines using M-p and M-n [...] I'd be
> interested to hear if other people running gdb this way encountered
> this problem and how they resolved it.

This had a one-line solution (section 19.3, page 216, gdb manual):

  gdb> set history save on
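Put in a ~/.gdbinit file, that setting survives across sessions. A sketch (the explicit filename line is optional; ~/.gdb_history is an example location, not a requirement):

```make
# ~/.gdbinit -- persist command history between gdb sessions
set history save on
set history filename ~/.gdb_history
```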
[Rd] R CMD check error with the GNU Scientific Library
I have been working on an R package that calls C code using .C(). I recently started including some functions from the GNU Scientific Library in my code. The code runs fine on my machine when not wrapped in the package, but I get the following error from "R CMD check":

  * checking whether the package can be loaded ... ERROR
  Loading required package: splancs
  Loading required package: sp
  Spatial Point Pattern Analysis Code in S-Plus Version 2
  - Spatial and Space-Time analysis
  Loading required package: ellipse
  Loading required package: mvtnorm
  Error in dyn.load(file, DLLpath = DLLpath, ...) :
    unable to load shared library '/home/david/papers/inProgress/sharka_istanbull/package/sharka.Rcheck/sharka/libs/sharka.so':
    /home/david/papers/inProgress/sharka_istanbull/package/sharka.Rcheck/sharka/libs/sharka.so: undefined symbol: gsl_multimin_fminimizer_nmsimplex
  Error in library(sharka) : .First.lib failed for 'sharka'
  Execution halted

Clearly there is some difficulty linking up with gsl_multimin_fminimizer_nmsimplex. I noticed the QRMlib library also includes gsl functions. In that package they include a src/gsl directory with the required .h files, and in Makevars they have "PKG_CFLAGS = -I./gsl". I have copied this approach, but wonder if the standard

  R CMD build myPackage
  R CMD check myPackage

needs modifying in some way? All hints or ideas welcome.

Thanks
David
Re: [Rd] R CMD check error with the GNU Scientific Library
> You'll need something like:
>
>   PKG_LIBS=-lgsl -lgslcblas
>
> in your Makevars. This is from package gsl (on CRAN).

Of course! That makes sense, because I was already compiling using

  MAKEFLAGS="CFLAGS=-g -O0" R CMD SHLIB sharka.c -lgsl -lgslcblas

and including the above line in Makevars has done the job, great!! I was copying QRMlib and not the gsl package since the former includes .h files while the latter uses .c files. I have a binary and not a source installation of gsl (the library, not the R package), and so .h files were more readily accessible to me. I wonder about the pros/cons of using .c vs. .h files in an R package, and how QRMlib was compiled without the above line in Makevars.

Thanks for your help
David
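So the working src/Makevars, combining the include path copied from QRMlib with the link flags suggested above, would look something like this (a sketch; the -I./gsl path assumes the bundled headers live in src/gsl as described earlier):

```make
## src/Makevars for a package linking against the GNU Scientific Library
PKG_CFLAGS = -I./gsl
PKG_LIBS = -lgsl -lgslcblas
```

On systems where GSL is installed in a non-standard location, the gsl-config script (gsl-config --cflags, gsl-config --libs) reports the appropriate flags.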