On Sep 4, 2009, at 12:11 , pleyd...@supagro.inra.fr wrote:
not really answering your question, but I find it more useful to run
R -d gdb
or
R -d gdb -f test.R
where test.R reproduces the bug in some minimal code. A variant is
R -d valgrind -f test.R
if the memory problem is not easy to spot.
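For example, a session with the -d flag looks roughly like this (the exact
gdb banner and frame numbers will of course differ; "frame 2" and
"info locals" are just illustrative follow-up commands):

$ R -d gdb -f test.R
(gdb) run
... R starts and sources test.R until the bad memory access ...
Program received signal SIGSEGV, Segmentation fault.
(gdb) backtrace
(gdb) frame 2
(gdb) info locals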
Thanks for your reply, Martin.
Yes, I have used that route before, and I have also been playing with the
emacs "M-x gdb" option as described in the R FAQ. But having no first-hand
experience with core dumps, I am curious why you prefer the -d flag route.
Perhaps I'm wrong, but I thought examining a core dump lets you backtrace
from the moment things went wrong; that seems like a useful trick to have...
.. this is the same with gdb - the core dump just saves the same state
that you have at the point where gdb comes in. The only practical
difference (that I'm aware of) is that with a core dump you can repeatedly
restart your analysis from the point of the crash without re-running all
the code that led to it, at the cost of storing the entire memory contents
(which can be huge these days, so often using gdb directly is sufficient
and faster...).
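To illustrate the core-dump route: you load the dump into gdb afterwards
and get the same backtrace. The path below is just an example - it has to
be the actual R binary under R_HOME/bin/exec (the "R" on your PATH is a
shell wrapper), and the name of the core file depends on your system:

$ gdb /usr/lib/R/bin/exec/R core
(gdb) bt
(gdb) frame 1
(gdb) info locals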
... if you manage to enable the core dump option that is.
ulimit -c unlimited
is the right way, but you a) have to have the rights to do that and b) the
kernel must have core dumps enabled. On Debian I have no issue:
urba...@corrino:~$ ulimit -c unlimited
urba...@corrino:~$ R --slave < segfault.R
*** caught segfault ***
address 0x3, cause 'memory not mapped'
Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection: 1
aborting ...
Segmentation fault (core dumped)
Note the "core dumped" in the last line. Also some systems put core
dumps in a dedicated directory, not in the current one. If in doubt,
google for core dumps and your distro - this is not really an R
issue ...
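On Linux you can usually see where (and under what name) core files are
written with:

$ cat /proc/sys/kernel/core_pattern

If that starts with a "|", the dump is piped to a handler (e.g. apport on
Ubuntu) rather than written to the current directory, which is one common
reason people don't find a "core" file.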
Cheers,
Simon
______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel