Re: [Rd] problem with \eqn (PR#8322)

2005-11-18 Thread Ross Boylan
I put a response into the bug tracker, but I don't think it took.  So
I'm sending this; apologies if it's a duplicate.

On Fri, Nov 18, 2005 at 04:38:28PM +, Hin-Tak Leung wrote:
> Your own fault. 

Not on the evidence you presented.  line 1.7 below is in error, but
that's not my input.  My input was 
\eqn{{\bf\beta}_j}{b(j)}

The process of reading the .Rd file and generating the file for LaTeX
to process somehow doubles the initial LaTeX part, which is
{{\bf\beta}_j}  (and the ascii part is {b(j)}).

I suspect that the nested {} is causing trouble.

>See below. It is basic LaTeX and any LaTeX person
> can tell you the answer...(most probably haven't bothered...)
> 
> [EMAIL PROTECTED] wrote:
> >Full_Name: Ross Boylan
> >Version: 2.2.0
> >OS: Linux
> >Submission from: (NULL) (65.175.48.58)
> >
> >
> >  \eqn{{\bf\beta}_j}{b(j)} in my .Rd file produces this error
> 
> 
> >
> >! Missing $ inserted.
> > 
> >$
> >l.7 \eqn{{\bf\beta}_j}{\bf\beta}_
> > jnormal-bracket5bracket-normal{b(j)}
> 
> \eqn{{\bf\beta}_j} is already syntactically complete, so latex
> complains the next "_" is not in maths mode, and automatically
> switches into maths mode for you (the "$ inserted" message).  You
> have to match all the braces - you need 3 right-braces after \eqn,
> like this, at least:
> 
> \eqn{  {  {\bf\beta
>   }_j
>}
>{\bf\beta
>}_ 
>{b(j)
>}
> }
> 
The nesting of braces above seems to have only the LaTeX part, i.e., at
top level it is \eqn{} not \eqn{}{}.  My intent was the latter.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] problem with \eqn (PR#8322)

2005-11-23 Thread Ross Boylan
On Mon, 2005-11-21 at 10:27 +, Hin-Tak Leung wrote:
> Kurt Hornik wrote:
> 
> > Definitely a problem in Rdconv.
> > 
> > E.g.,
> > 
> > $ cat foo.Rd 
> > \description{
> >   \eqn{{A}}{B}
> > }
> > [EMAIL PROTECTED]:~/tmp$ R-d CMD Rdconv -t latex foo.Rd | grep eqn
> > \eqn{{A}}{A}{{B}
> > 
> > shows what is going on.
> 
> There is a "work-around" - putting extra spaces between the two braces:
> 
> $ cat foo.Rd
> \description{
>\eqn{ {A} }{B}
> }
> 
> $R CMD Rdconv -t latex foo.Rd
> \HeaderA{}{}{}
> \begin{Description}\relax
> \eqn{ {A} }{B}
> \end{Description}
> 
> 
> HT
Terrific!  I can confirm that works for me and, in a way, a work-around
is better than a fix.  With the work-around, I can distribute the
package without needing to require that people get some not-yet-released
version of R that fixes the problem.  I do hope the problem gets fixed
though :)
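With the extra spaces, the \eqn from my original report becomes:

```
\eqn{ {\bf\beta}_j }{b(j)}
```

(the same trick Hin-Tak showed for \eqn{ {A} }{B}).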

By the way, I couldn't see how the perl code excerpted earlier paid any
attention to {}.  But perl is not my native tongue.

Ross



[Rd] Makefiles and other customization

2005-11-23 Thread Ross Boylan
Writing R Extensions mentions that a package developer can provide a
Makefile, but gives very little information about what should be in it.
It says there must be a clean target, and later on there's mention of 
 $(SHLIB): $(OBJECTS)
 $(SHLIB_LINK) -o $@ $(OBJECTS) $(ALL_LIBS)
(in the F95 discussion).

What should a Makefile provide, and what can it assume?  In other words,
what variables and environment setup should have been done?  My guess is
that all the R boilerplate for Makefiles will have been read before the
Makefile I provide.  It appears from the F95 example that the Makefile
has to get the names of the files it needs itself.
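My best guess at a minimal src/Makefile that just reproduces the default
behavior, assuming R's etc/Makeconf has already been read so that SHLIB,
SHLIB_LINK, ALL_LIBS, and the compilation pattern rules are available
(the object list here is made up):

```make
OBJECTS = foo.o bar.o        # the Makefile apparently must list these itself

all: $(SHLIB)

$(SHLIB): $(OBJECTS)
	$(SHLIB_LINK) -o $@ $(OBJECTS) $(ALL_LIBS)

clean:
	@rm -f $(OBJECTS) $(SHLIB)
```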

I suspect this is not documented more fully because of the extreme
difficulty of writing a portable Makefile.  However, I already have a
"Makefile.full", so called to avoid having R use it.  Makefile.full does
lots of stuff, so portability is already compromised.  I'm thinking it
might be more direct to provide "Makefile," since I'm now trying to
alter what R CMD build does.

I posted a related question on r-help, before I realized this kind of
issue is more appropriate for this list.  The question I asked there was
whether it would be reasonable to do my own tar of the files I wanted to
distribute in place of using R CMD build.  I'm also interested in
knowing about that.
https://stat.ethz.ch/pipermail/r-help/2005-November/081758.html (though
the thread has so far been on a tangential issue).

Here is that first post, if you want more background:
---
I've made a package for which R CMD build isn't producing very
satisfactory results.  I'll get to the details in a moment.

I wonder if it would make sense to have my own makefiles (which already
exist and are doing quite a lot) produce the .tar.gz file ordinarily
produced by R CMD build.  As far as I can tell, R CMD build basically
tars up the project directory after running some checks.  I could run
R CMD check separately.

There are two main problems with the results of R CMD build.  First, it
has lots of files that I don't want included (the input files used to
generate configure, miscellaneous garbage, other stuff not suitable for
distribution).  Second, I have data files as both "data.gz" and "data".
R puts "data" into the .tar.gz file and sensibly ignores the .gz file.
Unfortunately, my makefiles assume the existence of the "data.gz" files,
and so may have trouble after the .tar.gz is unpacked and there are no
"data.gz" files.

My bias would ordinarily be to piggyback on the R build system as much
as possible.  In principle, this could get me extra features (binary
builds, MS Windows builds) and it would track the things R build does
beyond tarring files.  But in this case using the R build system seems
quite ugly.  I could in principle use .Rbuildignore, probably generated
dynamically, to exclude files.  That doesn't solve the 2nd problem
(data.gz becomes data).

So does the alternative of doing the tar'ing myself make sense?

Is there another option that could hook into the R CMD build process
more deeply than the use of .Rbuildignore?

I suppose another option would be to do a clean checkout of the sources
for my package, run a special makefile target that would create the
necessary files and delete all unwanted files, and then do a regular R
CMD build.  This might still have trouble with "data.gz".
--

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] Makefiles and other customization

2005-11-23 Thread Ross Boylan
On Wed, 2005-11-23 at 15:47 -0800, Kasper Daniel Hansen wrote:
> Well you can have a look at etc/Makeconf. I have had some troubles  
> understanding the make process myself (which probably reveals I am  
> not a make guru), but it really depends on what you want to  
> accomplish - and from a certain perspective it is all documented in  
> the sources.

Makeconf sets environment variables and general rules for building
different types of files, but it doesn't have any regular targets.
While the answer is in the source, the full answer doesn't seem to be in
that particular piece.

> 
> I think you need to describe what exactly you want to do, perhaps  
> even post a copy of your Makefile.
My build system is a bit baroque; my first question is what a Makefile
should look like that simply duplicates the existing functionality.  In
other words, without a Makefile R can compile my code, check the
package, and build something for distribution.  If I want to add a
Makefile that preserves all this behavior, what needs to be in it?

I need something extra for two main reasons.  First, I'm using the fweb
literate programming system, so I maintain a .web file which must be
processed to get program sources and documentation.  Second, I have a
lot of unit tests of my code, and I want to exercise them apart from
simply building and testing the package as a whole.  All of this
requires assistance from the GNU autotools.

> 
> In case you include code which needs to be compiled and distributed  
> to various platforms you definitely want R to do the compilation.
> 
Since R does a lot, it would be better to try to build on that.  But if
it's too much of a fight, I might want to roll my own.  Creating a
source distribution, for example, looked like something I might be able
to do in my own makefile.

Ross



[Rd] external pointers

2005-12-09 Thread Ross Boylan
I have some C data I want to pass back to R opaquely, and then back to
C.  I understand external pointers are the way to do so.

I'm trying to find how they interact with garbage collection and object
lifetime, and what I need to do so that the memory lives until the
calling R process ends.

Could anyone give me some pointers?  I haven't found much documentation.
An earlier message suggested looking at simpleref.nw, but I can't find
that file.

So the overall pattern, from R, would look like
opaque <- setup(arg1, arg2, )  # setup calls a C fn
docompute(arg1, argb, opaque)  # many times. docompute also calls C
# and then when I return, opaque and the memory it's wrapping get
# cleaned up.  If necessary I could do
teardown(opaque)  # at the end

"C" is actually C++ via a C interface, if that matters.  In particular,
the memory allocated will likely be from the C++ run-time, and needs C++
destructors.
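A sketch of what I'm aiming for on the R side; "c_setup", "c_compute",
and "c_teardown" are hypothetical names for the package's .Call entry
points, and reg.finalizer (which accepts an external pointer as well as
an environment) ties the cleanup to garbage collection:

```r
# Sketch only: the .Call entry points are made-up names.
setup <- function(arg1, arg2) {
  opaque <- .Call("c_setup", arg1, arg2)   # C code returns an external pointer
  # Run the C-side destructor when the pointer is collected or the
  # session ends, so an explicit teardown() becomes optional
  reg.finalizer(opaque, function(p) .Call("c_teardown", p), onexit = TRUE)
  opaque
}

docompute <- function(arg1, argb, opaque)
  .Call("c_compute", arg1, argb, opaque)
```

Alternatively, the C side could register the finalizer itself with
R_RegisterCFinalizerEx when it creates the pointer via R_MakeExternalPtr.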

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



[Rd] checkpointing

2006-01-02 Thread Ross Boylan
I would like to checkpoint some of my calculations in R, specifically
those using optim.  As far as I can tell, R doesn't have this facility,
and there seems to have been little discussion of it.

Checkpointing is saving enough of the current state so that work can
resume where things were left off if, to take my own example, the system
crashes after 8 days of calculation.

My thought is that this could be added as an option to optim as one of
the control parameters.

I thought I'd check here to see if anyone is aware of any work in this
area or has any thoughts about how to proceed.  In particular, is save a
reasonable way to save a few variables to disk?  I could also make the
code available when/if I get it working.
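In case it matters, by save I mean something as simple as this (the
variable and file names are made up):

```r
# Save just the checkpoint state, not the whole workspace
params <- c(0.1, 2.3)   # current parameter values (made-up example)
iter <- 57              # iteration count
save(params, iter, file = "checkpoint.RData")

# After a crash, restore them into the current workspace
load("checkpoint.RData")
```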
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] checkpointing

2006-01-03 Thread Ross Boylan
On Tue, Jan 03, 2006 at 01:26:39PM +, Prof Brian Ripley wrote:
> On Tue, 3 Jan 2006, Kasper Daniel Hansen wrote:
> 
> >On Jan 3, 2006, at 9:36 AM, Brian D Ripley wrote:
> >
> >>I use save.image() or save(), which seem exactly what you are asking for.
> >
> >I have the (perhaps unsupported) impression that Ross wanted to save the 
> >progress during the optim run. Since it spends most of its time in the 
> >.Internal(optim(***)) call, save/save.image would not work.
> 
> It certainly does not!  
I'm having trouble following; does that sentence mean the preceding
one is wrong, or that save won't work?

> It is most likely spending time in the callbacks 
> to evaluate the function/gradient.  
Yes.
> We have used save() to save the 
> current information (e.g. current parameter values) from inside optim so a 
> restart could be done, 
Did you do this by
* using an existing feature of optim I don't know about;
* modifying the code for optim;
* writing an objective function that saved the parameters with which
  it was called (which, now that I think of it, might be the simplest
  approach)?

My guess was that optim keeps its state in local variables that would
not be captured by a save.image.  Are you saying the relevant
variables are saved and can be fished out if needed?

It would also probably save some time if the estimated matrix of 2nd
derivatives were saved too (I supply only the objective function, not
derivatives), but that's minor compared to having the parameter
values.

> but then I have only once encountered someone 
> running a single optimization for over a week: there normally are ways to 
> speed things up.

I certainly hope so.  However, the problem size is likely to remain
large.

In answer to the other question about using OS checkpointing
facilities, I haven't tried them since the application will be running
on a cluster.  More precisely, the optimization will be driven from a
single machine, but the calculation of the objective function will be
distributed.  So checkpointing at the level of the optimization
function is a good fit to my needs.  There are some cluster OS's that
provide a kind of unified process space across the processors (scyld,
mosix), but we're not using them and checkpointing them is an unsolved
problem.  At least, it was unsolved a couple of years ago when I
looked into it.

Ross



Re: [Rd] checkpointing

2006-01-06 Thread Ross Boylan
Here's some code I put together for checkpointing a function being
optimized. Hooking directly into optim would require modifying its C
code, so this seemed the easiest route.  I've wanted more information on
the iterations than is currently provided, so this stuffs some info back
in the calling environment (by default).

# wrapper to do checkpointing

# Ross Boylan [EMAIL PROTECTED]
# 06-Jan-2006
# (C) 2006 Regents of University of California
# Distributed under the GNU General Public License v2 or later at your option

# If you want to checkpoint the optimization of a function f
# Use checkpoint(f) instead.  See below for other possible arguments.

# default operation for checkpoint(fnfoo) is to record the iterations
# in fnfoo.trace in the calling environment

# WARNING: Any existing variable with name in argument name
# will be deleted from the indicated frame
checkpoint <- function(f,
                       name = paste(substitute(f), ".trace", sep=""),
                       fileName = substitute(f),
                       nCalls = 1,
                       nTime = 60*15,
                       frame = parent.frame()) {
  # f is the objective function
  # frame is where to put the variable name
  # name will be a data.frame with rows containing
  #   time, value, parameters (the iteration is the row number)
  # fileName is the stem of the file name to save for checkpointing;
  #   saving will alternate between files with 0 and 1 appended
  # Saving to disk will happen every nCalls calls or nTime seconds,
  # whichever comes first
  if (exists(name, where=frame))
    rm(list=name, pos=frame)
  ckpt.lastSave <- 0           # alternates 0/1 for file to write to
  ckpt.lastTime <- Sys.time()  # last time saved
  function(params, ...) {
    p <- as.list(params)
    names(p) <- seq(length(params))
    if (exists(name, where=frame, inherits=FALSE)) {
      progress <- get(name, pos=frame)
      progress <- rbind(progress,
                        data.frame(row.names=dim(progress)[1]+1,
                                   time=Sys.time(),
                                   val=NA, p),
                        deparse.level=0)
    } else
      progress <- data.frame(row.names=1, time=Sys.time(), val=NA, p)
    n <- dim(progress)[1]
    # write to disk
    if (n %% nCalls == 0 || progress[n, 1] - ckpt.lastTime > nTime) {
      ckpt.lastSave <<- (ckpt.lastSave+1) %% 2
      save(progress, file=paste(fileName, ckpt.lastSave, sep=""))
      ckpt.lastTime <<- progress[n, 1]
    }
    v <- f(params, ...)
    progress[n, 2] <- v
    assign(name, progress, pos=frame)
    v
  }
}
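A usage sketch (fnfoo and the starting values are made up):

```r
fnfoo <- function(p) sum(p^2)   # hypothetical objective, minimum at c(0, 0)

# Wrapping fnfoo records iterations in fnfoo.trace in this environment
# and saves progress alternately to the files "fnfoo0" and "fnfoo1"
wrapped <- checkpoint(fnfoo, nCalls = 10)
fit <- optim(c(1, 1), wrapped)

# To resume after a crash, load the newer of the two checkpoint files;
# load("fnfoo0") restores the 'progress' data frame, whose last row
# holds the most recent parameter values.
```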



[Rd] an unpleasant interaction of environments and generic functions

2006-01-31 Thread Ross Boylan
I've run into an unpleasant oddity involving the interaction of
environments and generic functions.  I want to check my diagnosis, and
see if there is a good way to avoid the problem.

Problem:
A library defines
"foo" <- function(object) 1
setMethod("foo", c("matrix"), function(object) 30)

After loading the library
foo(0) is 1
foo(matrix()) is 30
foo is a generic

Then source the file with the code given above.
Now 
foo(matrix()) is 1
foo is a function
(Also, there is no "creating generic function" message).

Diagnosis:
The library creates foo and related generics in package:libname.
The source for the initial definition puts the name and function
definition in .GlobalEnv.
The sourced setMethod finds the existing generic in package:libname and
updates it (I'm not sure about this last step).
A call to foo then finds the foo in .GlobalEnv, not the generic, so the
generic and the alternate methods are hidden.

In case you're wondering, I found this out because I was experimenting
with a library, changing the R and not the C code.  I get sick of doing
R CMD INSTALL with each iteration, but needed to load the library to get
the .so file.

So, is my diagnosis correct?

Any suggestions about how to avoid this problem?
Maybe sys.source("file", 2)... Seems to work!
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] an unpleasant interaction of environments and generic functions

2006-01-31 Thread Ross Boylan
On Tue, 2006-01-31 at 15:21 -0800, Ross Boylan wrote:

> Any suggestions about how to avoid this problem?
> Maybe sys.source("file", 2)... Seems to work!
Not quite.  The original versions of some stuff from the library hung
around, and my efforts to delete them led to more difficulties.

Specifically, after loading the library I deleted the names of the
generics and classes from the library's frame.  Then I read the source
files into the frame.  This time it complained there was no function by
the appropriate name when I tried to do setMethod, even though the
previous file should have created it.



[Rd] S4 documentation

2006-02-07 Thread Ross Boylan
1. promptClass generated a file that included
\section{Methods}{
No methods defined with class "mspathDistributedCalculator" in the
signature.
}
Yet there are such methods.  Is this a not-working yet feature, or is
something funny going on (maybe I have definitions in the library and in
the global workspace...)?

2. Is the \code{\link{myS4class-class}} the proper way to
cross-reference a class?  \code{\link{myS4method-method}} the right way
to refer to a method?  I looked for something like \class or \method to
correspond to \code, but didn't see anything.

3.  This question goes beyond documentation.  I have been approaching
things like this:
setClass("A", )
foo <- function(a) 
setClass("B", ...)
setMethod("foo", "B", )
so the first foo turns into the default function for the generic.

This was primarily motivated by discovering that setMethod("foo", "A"),
used where I have the first plain function definition, produced an
error.

Is this a reasonable way to proceed?
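For comparison, the explicit alternative I'm aware of is to declare the
generic first (the class contents here are made up):

```r
setClass("A", representation(x = "numeric"))
setClass("B", representation(y = "numeric"))

# Declare the generic explicitly instead of letting setMethod()
# promote an ordinary function into one
setGeneric("foo", function(a) standardGeneric("foo"))
setMethod("foo", "A", function(a) "A method")
setMethod("foo", "B", function(a) "B method")
```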

Then do I document the generic with standard function documentation for
foo?  Are there some examples I could look at?  When I want to refer to
the function generically, how do I do that?

I'm using R 2.2.1, and I've found the documentation on documenting S4 a
bit too brief (even after looking at the recommended links, which in
some cases don't have much on documentation).  Since the docs say it's a
work in progress, I'm hoping to get the latest word here.

Thanks.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



[Rd] Makevars and Makeconf sequencing

2013-08-23 Thread Ross Boylan
http://cran.r-project.org/doc/manuals/R-exts.html#Configure-and-cleanup
near the start of 1.2.1 Using Makevars says

> There are some macros which are set whilst configuring the building of
> R itself and are stored in R_HOME/etcR_ARCH/Makeconf. That makefile is
> included as a Makefile after Makevars[.win], and the macros it defines
> can be used in macro assignments and make command lines in the latter.
> 
I'm confused.  If Makeconf is included after Makevars, then how can
Makevars use macros defined in Makeconf?

Or is the meaning only that a variable definition in Makeconf can be
overridden in Makevars?
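If I understand make's deferred expansion correctly, the point may just
be that `=` assignments are expanded lazily, only when the macro is
used, which is after Makeconf has been read.  Then something like this
in Makevars would work even though the macros are defined later, in
Makeconf:

```make
# BLAS_LIBS and FLIBS come from Makeconf, which is included after
# Makevars; make defers expansion until the variables are actually used
PKG_LIBS = $(BLAS_LIBS) $(FLIBS)
```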

Ross Boylan



Re: [Rd] C++ debugging help needed

2013-10-02 Thread Ross Boylan
On Wed, Oct 02, 2013 at 11:05:19AM -0400, Duncan Murdoch wrote:

>> Up to entry #4 this all looks normal.  If I go into that stack frame, I
>> see this:
>>
>>
>> (gdb) up
>> #4  Shape::~Shape (this=0x15f8760, __in_chrg=) at
>> Shape.cpp:13
>> warning: Source file is more recent than executable.

That warning looks suspicious.  Are you sure gdb is finding the right
source files, and that the object code has been built from them?

>> 13        blended(in_material.isTransparent())
>> (gdb) p this
>> $9 = (Shape * const) 0x15f8760
>> (gdb) p *this
>> $10 = {_vptr.Shape = 0x72d8e290, mName = 6, mType = {
>>   static npos = ,
>>   _M_dataplus = {> =
>> {<__gnu_cxx::new_allocator> =
>> {}, },
>> _M_p = 0x7f7f7f7f > bounds>}},
>> mShapeColor = {mRed = -1.4044474254567505e+306,
>>   mGreen = -1.4044477603031902e+306, mBlue = 4.24399170841135e-314,
>>   mTransparent = 0}, mSpecularReflectivity = 0.0078125,
>> mSpecularSize = 1065353216, mDiffuseReflectivity = 0.007812501848093234,
>> mAmbientReflectivity = 0}
>>
>> The things displayed in *this are all wrong.  Those field names come
>> from the Shape object in the igraph package, not the Shape object in the
>> rgl package.   The mixOmics package uses both.
>>
>> My questions:
>>
>> - Has my code somehow got mixed up with the igraph code, so I really do
>> have a call out to igraph's Shape::~Shape instead of rgl's
>> Shape::~Shape, or is this just bad info being given to me by gdb?
>>

I don't know, but I think it's possible to give fully qualified type
names to gdb to force it to use the right definition.  That's assuming
that both Shapes are in different namespaces.  If they aren't, that's
likely the problem.

>> - If I really do have calls to the wrong destructor in there, how do I
>> avoid this?

Are you invoking the destructor explicitly?  An object should know
it's type, which should result in the right call without much effort.

>>
>> Duncan Murdoch
>



Re: [Rd] C++ debugging help needed

2013-10-02 Thread Ross Boylan
On Wed, 2013-10-02 at 16:15 -0400, Duncan Murdoch wrote:
> On 02/10/2013 4:01 PM, Ross Boylan wrote:
> > On Wed, Oct 02, 2013 at 11:05:19AM -0400, Duncan Murdoch wrote:
> > 
> > >> Up to entry #4 this all looks normal.  If I go into that stack frame, I
> > >> see this:
> > >>
> > >>
> > >> (gdb) up
> > >> #4  Shape::~Shape (this=0x15f8760, __in_chrg=) at
> > >> Shape.cpp:13
> > >> warning: Source file is more recent than executable.
> >
> > That warning looks suspicious.  Are you sure gdb is finding the right
> > source files, and that the object code has been built from them?
> 
> I'm pretty sure that's a warning about the fact that igraph also has a 
> file called Shape.cpp, and the Shape::~Shape destructor was in that 
> file, not in my Shape.cpp file.

I guess the notion of the "right" source file is ambiguous in this
context.  Suppose you have projects A and B each defining a function f
in f.cpp.  Use A/f() to mean the binary function defined in project A,
found in source A/f.cpp.

Then you have some code that means to invoke A/f() but gets B/f()
instead.  Probably gdb should associate this with B/f.cpp, but your
intent was A/f() and A/f.cpp.  If gdb happens to find A/f.cpp, and A was
built after B, that could provoke the warning shown.

> >
> > >> 13        blended(in_material.isTransparent())
> > >> (gdb) p this
> > >> $9 = (Shape * const) 0x15f8760
> > >> (gdb) p *this
> > >> $10 = {_vptr.Shape = 0x72d8e290, mName = 6, mType = {
> > >>   static npos = ,
> > >>   _M_dataplus = {> =
> > >> {<__gnu_cxx::new_allocator> =
> > >> {}, },
> > >> _M_p = 0x7f7f7f7f  > >> bounds>}},
> > >> mShapeColor = {mRed = -1.4044474254567505e+306,
> > >>   mGreen = -1.4044477603031902e+306, mBlue = 4.24399170841135e-314,
> > >>   mTransparent = 0}, mSpecularReflectivity = 0.0078125,
> > >> mSpecularSize = 1065353216, mDiffuseReflectivity = 
> > >> 0.007812501848093234,
> > >> mAmbientReflectivity = 0}
> > >>
> > >> The things displayed in *this are all wrong.  Those field names come
> > >> from the Shape object in the igraph package, not the Shape object in the
> > >> rgl package.   The mixOmics package uses both.
> > >>
> > >> My questions:
> > >>
> > >> - Has my code somehow got mixed up with the igraph code, so I really do
> > >> have a call out to igraph's Shape::~Shape instead of rgl's
> > >> Shape::~Shape, or is this just bad info being given to me by gdb?
> > >>
> >
> > I don't know, but I think it's possible to give fully qualified type
> > names to gdb to force it to use the right definition.  That's assuming
> > that both Shape's are in different namespaces.  If they aren't, that's
> > likely the problem.
> 
> Apparently they aren't, even though they are in separately compiled and 
> linked packages.  I had been assuming that the fact that rgl knows 
> nothing about igraph meant I didn't need to worry about it. (igraph does 
> list rgl in its "Suggests" list.)  On platforms other than Linux, I 
> don't appear to need to worry about it, but Linux happily loads one, 
> then loads the other and links the call to the wrong .so rather than the 
> local one, without a peep of warning, just an eventual crash.

While various OS's and tricks could provide work-arounds for clashing
function definitions (I actually had the impression the R dynamic
loading machinery might), those wouldn't necessarily be right.  In
principle package A might use some functions defined in package B.  In
that case the need for namespaces would have become obvious.

> 
> Supposing I finish my editing of the 100 or so source files and put all 
> of the rgl stuff in an "rgl" namespace, that still doesn't protect me 
> from what some other developer might do next week, creating their own 
> "rgl" namespace with a clashing name.   Why doesn't the linking step 
> resolve the calls, why does it leave it until load time?

I think there is a using namespace directive that might save typing,
putting everything into that namespace by default.  Maybe just the
headers need it.

With dynamic loading you don't know till load time if you've got a
problem.  As I said, the system can't simply wall off different
libraries, since they may want to call each other.

The usual solution for two developers picking the same 

[Rd] 2 versions of same library loaded

2014-03-12 Thread Ross Boylan
Can anyone help me understand how I got 2 versions of the same library
loaded, how to prevent it, and what the consequences are?  Running under
Debian GNU/Linux squeeze.

lsof and /proc/xxx/map both show 2 copies of several libraries loaded:
/home/ross/install/lib/libmpi.so.1.3.0
/home/ross/install/lib/libopen-pal.so.6.1.0
/home/ross/install/lib/libopen-rte.so.7.0.0
/home/ross/Rlib-3.0.1/Rmpi/libs/Rmpi.so
/usr/lib/openmpi/lib/libmpi.so.0.0.2
/usr/lib/openmpi/lib/libopen-pal.so.0.0.0
/usr/lib/openmpi/lib/libopen-rte.so.0.0.0
/usr/lib/R/lib/libR.so


The system has the old version of MPI installed under /usr/lib.  I built
a personal, newer copy in my directory, and then rebuilt Rmpi (an R
package) against it.  ldd on the personal Rmpi.so and libmpi.so shows
all references to mpi libraries on personal paths.

R was installed from a debian package, and presumably compiled without
having MPI around.  Before running I set LD_LIBRARY_PATH to
~/install/lib, and then stuffed the same path at the start of
LD_LIBRARY_PATH using Sys.setenv in my profile because R seems to
prepend some libraries to that path when it starts (I'm curious about
that too).  I also prepended ~/install/bin to my path, though I'm not
sure that's relevant.

Does R use ld.so or some other mechanism for loading libraries?

Can I assume the highest version number of a library will be preferred?
http://cran.r-project.org/doc/manuals/r-devel/R-exts.html#index-Dynamic-loading 
says "If a shared object/DLL is loaded more than once the most recent version 
is used."  I'm not sure if "most recent" means the one loaded most recently by 
the program (I don't know which that is) or "highest version number."

Why is /usr/lib/openmpi being looked at in the first place?

How can I stop the madness?  Some folks on the openmpi list have
indicated I need to rebuild R, telling it where my MPI is, but that
seems an awfully big hammer for the problem.

Thanks.
Ross Boylan



Re: [Rd] 2 versions of same library loaded

2014-03-12 Thread Ross Boylan
Comments/questions interspersed below.
On Wed, 2014-03-12 at 22:50 -0400, Simon Urbanek wrote:
> Ross,
> 
> On Mar 12, 2014, at 5:34 PM, Ross Boylan  wrote:
> 
> > Can anyone help me understand how I got 2 versions of the same library
> > loaded, how to prevent it, and what the consequences are?  Running under
> > Debian GNU/Linux squeeze.
> > 
> > lsof and /proc/xxx/map both show 2 copies of several libraries loaded:
> > /home/ross/install/lib/libmpi.so.1.3.0
> > /home/ross/install/lib/libopen-pal.so.6.1.0
> > /home/ross/install/lib/libopen-rte.so.7.0.0
> > /home/ross/Rlib-3.0.1/Rmpi/libs/Rmpi.so
> > /usr/lib/openmpi/lib/libmpi.so.0.0.2
> > /usr/lib/openmpi/lib/libopen-pal.so.0.0.0
> > /usr/lib/openmpi/lib/libopen-rte.so.0.0.0
> > /usr/lib/R/lib/libR.so
> > 
> > 
> > The system has the old version of MPI installed under /usr/lib.  I built
> > a personal, newer copy in my directory, and then rebuilt Rmpi (an R
> > package) against it.  ldd on the personal Rmpi.so and libmpi.so shows
> > all references to mpi libraries on personal paths.
> > 
> > R was installed from a debian package, and presumably compiled without
> > having MPI around.  Before running I set LD_LIBRARY_PATH to
> > ~/install/lib, and then stuffed the same path at the start of
> > LD_LIBRARY_PATH using Sys.setenv in my profile because R seems to
> > prepend some libraries to that path when it starts (I'm curious about
> > that too).  I also prepended ~/install/bin to my path, though I'm not
> > sure that's relevant.
> > 
> > Does R use ld.so or some other mechanism for loading libraries?
> > 
> 
> R uses dlopen to load package libraries - it is essentially identical to 
> using ld.so for dependencies.
> 
> 
> > Can I assume the highest version number of a library will be preferred?
> 
> No.
> 
Bummer.  The fact that Rmpi is not crashing suggests to me it's using
the right version of the mpi libraries (it does produce lots of errors
if I run it without setting LD_LIBRARY_PATH so only the system mpi libs
are in play), but it would be nice to be certain.  Or the 2 versions
could be combined in a big mess.
> 
> > http://cran.r-project.org/doc/manuals/r-devel/R-exts.html#index-Dynamic-loading
> >  says "If a shared object/DLL is loaded more than once the most recent 
> > version is used."  I'm not sure if "most recent" means the one loaded most 
> > recently by the program (I don't know which that is) or "highest version 
> > number."
> > 
> 
> The former - whichever you load last wins. Note, however, that this refers to 
> explicitly loaded objects since they are loaded into a flat namespace so a 
> load will overwrite all symbols that get loaded.
It might be good to clarify that in the manual.

If I understand the term, the mpi libraries are loaded implicitly; that
is, Rmpi.so is loaded explicitly, and then it pulls in dependencies.
What are the rules in that case? 

> 
> 
> > Why is /usr/lib/openmpi being looked at in the first place?
> > 
> 
> You'll have to consult your system. The search path (assuming rpath is not 
> involved) is governed by LD_LIBRARY_PATH and /etc/ld.so.conf*. Note that 
> LD_LIBRARY_PATH is consulted at the time of the resolution (when the library 
> is looked up), so you may be changing it too late. Also note that you have to 
> expand ~ in the path (it's not a valid path, it's a shell expansion feature).
> 
I just used the ~ as a shortcut; the shell expanded it and the full path
ended up in the variable.

I assume the loader checks LD_LIBRARY_PATH first; once it finds the mpi
libraries there I don't know why it keeps looking.

I'm not sure I follow the part about it being too late, but is this the
idea: all the R processes launched under MPI have the MPI library loaded
automatically, and if that happens before my profile is read, resetting
LD_LIBRARY_PATH will be irrelevant?  I don't know whether the profile or
the Rmpi load happens first.

The resetting is just a reordering, and since the other elements of
LD_LIBRARY_PATH don't contain any mpi libraries, I don't think the order
matters.
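For anyone wanting to see which copy the loader actually picks, glibc's dynamic loader can trace its own search (an assumption that this is a glibc system, which Debian is; the numbered traces quoted in the follow-up messages look like output from this facility). A sketch with example paths:

```shell
# glibc's loader can log its search: "libs" traces each directory tried
# and the file finally opened.  LD_DEBUG_OUTPUT redirects the per-pid
# log to files instead of stderr.
LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/ldtrace /bin/ls >/dev/null

# Every candidate path appears as a "trying file=..." line, in order:
grep "trying file" /tmp/ldtrace.* | head

# The same works for an R session under a custom path (hypothetical):
#   LD_DEBUG=libs LD_LIBRARY_PATH="$HOME/install/lib" R -e 'library(Rmpi)'
```

Because the trace shows every directory in the order it is consulted, it answers directly whether /usr/lib/openmpi is reached before or after the personal copy.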


> R's massaging of the LD_LIBRARY_PATH is typically done in $R_HOME/etc/ldpaths 
> so you may want to check it and/or adjust it as needed. Normally (in stock 
> R), it only prepends its own libraries and Java so it should not be causing 
> any issues, but you may want to check in case Debian scripts add anything 
> else.
> 
The extra paths are limited as you describe, and so are probably no
threat for loading the wrong MPI library
(/usr/lib64/R/lib:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server).
> 
> > How can I stop th

Re: [Rd] 2 versions of same library loaded

2014-03-13 Thread Ross Boylan
On Thu, 2014-03-13 at 10:46 -0700, Ross Boylan wrote:
> 1. My premise that R had no references to mpi was incorrect.  The logs
> show  
> 24312: file=libmpi.so.1 [0];  needed
> by /home/ross/Rlib-3.0.1/Rmpi/libs/Rmpi.so [0]
> 24312: find library=libmpi.so.1 [0]; searching
> 24312:  search path=/usr/lib64/R/lib:/home/ross/install/lib
> (LD_LIBRARY_PATH)
> 24312:   trying file=/usr/lib64/R/lib/libmpi.so.1
> 24312:   trying file=/home/ross/install/lib/libmpi.so.1

Except there is no file /usr/lib64/R/lib/libmpi.so.1

>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] 2 versions of same library loaded

2014-03-13 Thread Ross Boylan
On Thu, 2014-03-13 at 10:46 -0700, Ross Boylan wrote:
> 
> It seems very odd that the same Rmpi.so is requiring both the old and
> new libmpi.so (compare to the first 
> trace in point 1).  There is this code in Rmpi.c:
> if (!dlopen("libmpi.so.0", RTLD_GLOBAL | RTLD_LAZY)
> && !dlopen("libmpi.so", RTLD_GLOBAL | RTLD_LAZY)){
> 
> 
> So I'm still not sure what it's using, or if there is some mishmash of
> the 2. 

There is an explanation for the explicit load in the Changelog:
2007-10-24, version 0.5-5:

dlopen has been used to load libmpi.so explicitly. This is mainly useful
for Rmpi under OpenMPI where one might see many error messages:
mca: base: component_find: unable to open osc pt2pt: file not found
(ignored) if libmpi.so is not loaded with RTLD_GLOBAL flag.
http://www.stats.uwo.ca/faculty/yu/Rmpi/changelogs.htm

There is another interesting note about openmpi:
It looks like the option --disable-dlopen is not necessary to
install Open MPI 1.6, at least on Debian. This might be because R's
.onLoad correctly loads dynamic libraries, so Open MPI is not required
to be compiled with static libraries enabled.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Does R ever move objects in memory?

2014-03-16 Thread Ross Boylan
R objects can disappear if they are garbage collected; can they move,
i.e., change their location in memory?

I don't see any indication this might happen in "Writing R Extensions"
or "R Internals".  But I'd like to be sure.

Context: Rmpi serializes objects in raw vectors for transmission by
mpi.  Some send operations (isend) return before transmission is
complete and so need the bits to remain untouched until transmission
completes.  If I preserve a reference to the raw vector in R code, that
will prevent it from being garbage collected; but if the vector were
moved, that would invalidate the transfer.

I was just using the blocking sends to avoid this problem, but the
result is significant delays.
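Assuming, as I believe (though the manuals do not say so explicitly), that R's collector is non-moving, the reference-holding scheme described above is enough: a live object keeps both its life and its address. A minimal sketch with hypothetical names:

```r
## Sketch of the reference-holding workaround (names are made up).
## Rooting the raw vector in a live list keeps the collector away from
## it; on a non-moving collector a live object also keeps its address.
pending <- list()

remember <- function(tag, buf) {   # call when the isend is posted
  pending[[tag]] <<- buf
  invisible(buf)
}
forget <- function(tag) {          # call once the isend has completed
  pending[[tag]] <<- NULL         # assigning NULL drops the element
}

buf <- serialize(list(x = 1:3), connection = NULL)   # a raw vector
remember("req1", buf)
stopifnot(is.raw(pending[["req1"]]))
forget("req1")
stopifnot(is.null(pending[["req1"]]))
```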

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] modifying data in a package

2014-03-19 Thread Ross Boylan
I've tweaked Rmpi and want to have some variables that hold data in the
package.  One of the R files starts
mpi.isend.obj <- vector("list", 500)   # mpi.request.maxsize()
mpi.isend.inuse <- rep(FALSE, 500)     # mpi.request.maxsize()

and then functions update those variables with <<-.  When run:
  Error in mpi.isend.obj[[i]] <<- .force.type(x, type) :
  cannot change value of locked binding for 'mpi.isend.obj'

I'm writing to ask the proper way to accomplish this objective (getting
a variable I can update in package namespace--or at least somewhere
useful and hidden from the outside).

I think the problem is that the package namespace is locked.  So how do
I achieve the same effect?
http://www.r-bloggers.com/package-wide-variablescache-in-r-packages/
recommends creating an environment and then updating it.  Is that the
preferred route?  (It seems odd that the list should be locked but the
environment would be manipulable.  I know environments are special.)
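The environment route the linked post recommends can be sketched in a few lines. The reason it works is exactly the oddity noted above: when the namespace is sealed, the *binding* to the environment is locked, but the environment's *contents* remain writable. Names here are hypothetical:

```r
## Package-local mutable state: the binding `.state` is locked when the
## namespace is sealed, but the environment it points to stays writable.
.state <- new.env(parent = emptyenv())

set_isend_obj <- function(i, value) {
  if (is.null(.state$mpi.isend.obj))           # lazy init, so a C-based
    .state$mpi.isend.obj <- vector("list", 500) # size could go here too
  .state$mpi.isend.obj[[i]] <- value
  invisible(value)
}
get_isend_obj <- function(i) .state$mpi.isend.obj[[i]]

set_isend_obj(3, "payload")
stopifnot(identical(get_isend_obj(3), "payload"))
```

Initializing inside a function (or in .onLoad) also sidesteps the load-order problem: by then the C routines are available, so the real mpi.request.maxsize() could replace the 500.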

The comments indicate that 500 "should" be mpi.request.maxsize().  That
doesn't work because mpi.request.maxsize calls a C function, and there
is an error saying the function isn't loaded.  I guess the R code is
evaluated before the C libraries are loaded.  The package's zzz.R starts
.onLoad <- function (lib, pkg) {
library.dynam("Rmpi", pkg, lib)

So would moving the code into .onLoad after that work?  In that case,
how do I get the environment into the  proper scope?  Would
 .onLoad <- function (lib, pkg) {
library.dynam("Rmpi", pkg, lib)
assign("mpi.globals", new.env(), environment(mpi.isend))
assign("mpi.isend.obj", vector("list", mpi.request.maxsize()),
mpi.globals)
work?

mpi.isend is a function in Rmpi.  But I'd guess the first assign will
fail because the environment is locked.

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] modifying data in a package [a solution]

2014-03-19 Thread Ross Boylan
On Wed, 2014-03-19 at 19:22 -0700, Ross Boylan wrote:
> I've tweaked Rmpi and want to have some variables that hold data in the
> package.  One of the R files starts
> mpi.isend.obj <- vector("list", 500) #mpi.request.maxsize())  
> 
> mpi.isend.inuse <- rep(FALSE, 500) #mpi.request.maxsize())
> 
> and then functions update those variables with <<-.  When run:
>   Error in mpi.isend.obj[[i]] <<- .force.type(x, type) :  
>   
>   cannot change value of locked binding for 'mpi.isend.obj'
> 
> I'm writing to ask the proper way to accomplish this objective (getting
> a variable I can update in package namespace--or at least somewhere
> useful and hidden from the outside).
> 
I've discovered one way to do it:
In one of the regular R files
mpi.global <- new.env()

Then at the end of .onLoad in zzz.R:
assign("mpi.isend.obj", vector("list", mpi.request.maxsize()),
mpi.global)
and similarly for the logical vector mpi.isend.inuse

Access with functions like this:
## The next 2 functions have 3 modes:
##   foo()               returns foo from mpi.global
##   foo(request)        returns foo[request] from mpi.global
##   foo(request, value) sets foo[request] to value
mpi.isend.inuse <- function(request, value) {
if (missing(request))
return(get("mpi.isend.inuse", mpi.global))
i <- request+1L
parent.env(mpi.global) <- environment()
if (missing(value))
return(evalq(mpi.isend.inuse[i], mpi.global))
return(evalq(mpi.isend.inuse[i] <- value, mpi.global))
}

## request, if present, must be a single value
mpi.isend.obj <- function(request, value){
if (missing(request))
return(get("mpi.isend.obj", mpi.global))
i <- request+1L
parent.env(mpi.global) <- environment()
if (missing(value))
return(evalq(mpi.isend.obj[[i]], mpi.global))
    return(evalq(mpi.isend.obj[[i]] <- value, mpi.global))
}

This is pretty awkward; I'd love to know a better way.  Some of the
names probably should change too: mpi.isend.obj() sounds too much as if
it actually sends something, like mpi.isend.Robj().
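One less awkward variant, offered as a sketch with the same hypothetical names: `$` on an environment reads and writes it directly, so the parent.env() rewiring and evalq() (which were only needed to make `i` and `value` visible inside mpi.global) can be dropped entirely:

```r
mpi.global <- new.env(parent = emptyenv())
mpi.global$mpi.isend.inuse <- rep(FALSE, 500)

## Same three modes as the accessor in the post, without evalq():
mpi.isend.inuse <- function(request, value) {
  if (missing(request))
    return(mpi.global$mpi.isend.inuse)       # whole vector
  i <- request + 1L
  if (missing(value))
    return(mpi.global$mpi.isend.inuse[i])    # read one slot
  mpi.global$mpi.isend.inuse[i] <- value     # write one slot
  invisible(value)
}

mpi.isend.inuse(0, TRUE)
stopifnot(isTRUE(mpi.isend.inuse(0)))
```

Leaving the environment's parent fixed at emptyenv() also avoids the surprise of mutating parent.env() on every call.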

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] modifying a persistent (reference) class

2014-08-01 Thread Ross Boylan
I saved objects that were defined using several reference classes.
Later I modified the definition of reference classes a bit, creating new
functions and deleting old ones.  The total number of functions did not
change.  When I read them back I could only access some of the original
data.

I asked on the user list and someone suggested sticking with the old
class definitions, creating new classes, reading in the old data, and
converting it to the new classes.  This would be awkward (I want the
"new" classes to have the same name as the "old" ones), and I can
probably just leave the old definitions and define the new functions I
need outside of the reference classes.

Are there any better alternatives?

On reflection, it's a little surprising that changing the code for a
reference class makes any difference to an existing instance, since all
the function definitions seem to be attached to the instance.  One
problem I've had in the past was precisely that redefining a method in a
reference class did not change the behavior of existing instances.  So
I've tried to follow the advice to keep the methods light-weight.

In this case I was trying to move from a show method (that just printed)
to a summary method that returned a summary object.  So I wanted to add
a summary method and redefine the show to call summary in the base
class, removing all the subclass definitions of show.

Regular S4 classes are obviously not as sensitive since they usually
don't include the functions that operate on them, but I suppose if you
changed the slots you'd be in similar trouble.

Some systems keep track of versions of class definitions and allow one
to write code to migrate old to new forms automatically when the data
are read in.  Does R have anything like that?

The system on which I encountered the problems was running R 2.15.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] modifying a persistent (reference) class

2014-08-01 Thread Ross Boylan
On Fri, 2014-08-01 at 14:42 -0400, Brian Lee Yung Rowe wrote:
> Ross,
> 
> 
> This is generally a hard problem in software systems. The only
> language I know that explicitly addresses it is Erlang. Ultimately you
> need a system upgrade process, which defines how to update the data in
> your system to match a new version of the system. You could do this by
> writing a script that 
> 1) loads the old version of your library
> 2) loads your data/serialized reference classes
> 3) exports data to some intermediate format (eg a list)
> 4) loads new version of library
> 5) imports data from intermediate format
My recollection is that in Gemstone's smalltalk database you can define
methods associated with a class that describe how to change an instance
from one version to another.  You also have the choice of upgrading all
persistent objects at once or doing so lazily, i.e., as they are
retrieved.

The brittleness of the representation depends partly on the details.  If
a class has 2 slots, a and b, and the only thing on disk is the contents
of a and the contents of b, almost any change will screw things up.
However, if the slot name is persisted with the instance it's much
easier to reconstruct the instance if the class changes (if slot c is
added and not on disk, set it to nil; if b is removed, throw it out when
reading from disk).  One could also persist the class definition, or
key elements of it, with individual instances referring to the
definition.

I don't know which, if any of these strategies, R uses for reference or
other classes.
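For ordinary S4 objects, at least, a quick experiment suggests the middle strategy: slots are stored as named attributes, so the slot names travel with each serialized instance. A sketch (the class name is made up; assumes the methods package):

```r
library(methods)

setClass("demoA", representation(a = "numeric", b = "character"))
obj <- new("demoA", a = c(1, 2, 3), b = "x")

## Slots are named attributes, so the names are part of what is saved:
names(attributes(obj))   # includes "a", "b", and "class"
stopifnot(all(c("a", "b") %in% names(attributes(obj))))

## And they survive a round trip through R's native serialization:
obj2 <- unserialize(serialize(obj, NULL))
stopifnot(identical(obj@a, obj2@a), identical(obj@b, obj2@b))
```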
> 
> 
> Once you've gone through the upgrade process, arguably it's better to
> persist the data in a format that is decoupled from your objects since
> then future upgrades would simply be
> 1) load new library
> 2) import data from intermediate format
Arguably :)  As I said, some representations could do this
automatically.  And there are still issues such as a change in the type
of a slot, or rules for filling new slots, that would require
intervention.

In my experience with other object systems, usually methods are
attributes of the class.  For R reference classes they appear to be
attributes of the instance, potentially modifiable on a per-instance
basis.

Ross
> 
> 
> which is no different from day-to-day operation of your app/system (ie
> you're always writing to and reading from the intermediate format). 
> 
> 
> Warm regards,
> Brian
> 
> •
> Brian Lee Yung Rowe
> Founder, Zato Novo
> Professor, M.S. Data Analytics, CUNY
> 
> On Aug 1, 2014, at 1:54 PM, Ross Boylan  wrote:
> 
> 
> > I saved objects that were defined using several reference classes.
> > Later I modified the definition of reference classes a bit, creating
> > new
> > functions and deleting old ones.  The total number of functions did
> > not
> > change.  When I read them back I could only access some of the
> > original
> > data.
> > 
> > I asked on the user list and someone suggested sticking with the old
> > class definitions, creating new classes, reading in the old data,
> > and
> > converting it to the new classes.  This would be awkward (I want the
> > "new" classes to have the same name as the "old" ones), and I can
> > probably just leave the old definitions and define the new functions
> > I
> > need outside of the reference classes.
> > 
> > Are there any better alternatives?
> > 
> > On reflection, it's a little surprising that changing the code for a
> > reference class makes any difference to an existing instance, since
> > all
> > the function definitions seem to be attached to the instance.  One
> > problem I've had in the past was precisely that redefining a method
> > in a
> > reference class did not change the behavior of existing instances.
> >  So
> > I've tried to follow the advice to keep the methods light-weight.
> > 
> > In this case I was trying to move from a show method (that just
> > printed)
> > to a summary method that returned a summary object.  So I wanted to
> > add
> > a summary method and redefine the show to call summary in the base
> > class, removing all the subclass definitions of show.
> > 
> > Regular S4 classes are obviously not as sensitive since they usually
> > don't include the functions that operate on them, but I suppose if
> > you
> > changed the slots you'd be in similar trouble.
> > 
> > Some systems keep track of versions of class definitions and allow
> > one
> > to write code to migrate old to new forms automatically when the
> > data
> > are read in.  Does R have anything like that?
> > 
> > The system on which I encountered the problems was running R 2.15.
> > 
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> > 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] modifying a persistent (reference) class

2014-08-01 Thread Ross Boylan
On Fri, 2014-08-01 at 16:06 -0400, Brian Lee Yung Rowe wrote:
> Ross,
> 
> 
> Ah I didn't think about Smalltalk. Doesn't surprise me that they
> supported upgrades of this sort. That aside I think the question is
> whether it's realistic for a language like R to support such a
> mechanism automatically. Smalltalk and Erlang both have tight
> semantics that would be hard to establish in R (given the multiple
> object systems and dispatching systems). 
> 
> 
> I'm a functional guy so to me it's natural to separate the data from
> the functions/methods. Having spent years writing OOP code I walked
> away concluding that OOP makes things more complicated for the sake of
> being OOP (eg no first class functions). 
In smalltalk everything is an object, and that includes functions,
including class methods.
> Obviously that's changing, and in a language like R it's less of an
> issue. However, something like object serialization smells
> suspiciously similar. If you know that serializing objects is brittle,
> why not look for an alternative approach as opposed to chasing that
> rainbow?
My immediate problem is/was that I have serialized objects representing
weeks of CPU time.  I have to work with them, not some other
representation they might have.  And it's much more natural to work with
R's native persistence than some other scheme I cook up.

I think persistence requires serialization.  The serialization can be
more or less brittle, but I don't think there is an alternative to
serialization.

Since I just worked around my immediate problem a few minutes ago (by
retaining the original class definitions and using setMethod to create
summary methods), my interests are a bit more theoretical.

First, I'd like to understand more about exactly what is saved to disk
for reference and other classes, in particular how much meta-information
they contain.  And my mental model for reference class persistence is
clearly wrong, because in that model instances based on old definitions
come back intact (albeit not with the new method definitions or other
new slots), whereas mine seemed to come back damaged.

Second, I'm still hoping for some elegant way around this problem (how
to redefine classes and still use saved versions from older definitions)
for the future, both with reference and regular classes.  Or at least
some rules about what changes, if any, are safe to make in class
definitions after an instance has been persisted.
> 
Third, if changes to R could make things better, I'm hoping some
developers might take them up.  I realize that is unlikely to happen,
for many good reasons, but I can still hope :)

Ross
> 
> Warm regards,
> Brian
> 
> •
> Brian Lee Yung Rowe
> Founder, Zato Novo
> Professor, M.S. Data Analytics, CUNY
> 
> On Aug 1, 2014, at 3:33 PM, Ross Boylan  wrote:
> 
> 
> > On Fri, 2014-08-01 at 14:42 -0400, Brian Lee Yung Rowe wrote:
> > > Ross,
> > > 
> > > 
> > > This is generally a hard problem in software systems. The only
> > > language I know that explicitly addresses it is Erlang. Ultimately
> > > you
> > > need a system upgrade process, which defines how to update the
> > > data in
> > > your system to match a new version of the system. You could do
> > > this by
> > > writing a script that 
> > > 1) loads the old version of your library
> > > 2) loads your data/serialized reference classes
> > > 3) exports data to some intermediate format (eg a list)
> > > 4) loads new version of library
> > > 5) imports data from intermediate format
> > My recollection is that in Gemstone's smalltalk database you can
> > define
> > methods associated with a class that describe how to change an
> > instance
> > from one version to another.  You also have the choice of upgrading
> > all
> > persistent objects at once or doing so lazily, i.e., as they are
> > retrieved.
> > 
> > The brittleness of the representation depends partly on the
> > details.  If
> > a class has 2 slots, a and b, and the only thing on disk is the
> > contents
> > of a and the contents of b, almost any change will screw things up.
> > However, if the slot name is persisted with the instance it's much
> > easier to reconstruct the instance if the class changes (if slot c
> > is
> > added and not on disk, set it to nil; if b is removed, throw it out
> > when
> > reading from disk).  One could also persist the class definition,
> > or
> > key elements of it, with individual instances referring to the
> > definition.
> > 
> > I don't know which, if any of these strateg

[Rd] S4 class redefinition

2009-06-30 Thread Ross Boylan
I haven't found much on S4 class redefinition; the little I've seen
indicates the following is to be expected:
1. setClass("foo", )
2. create objects of class foo.
3. execute the same setClass("foo", ...) again (source the same file).
4. objects from step 2 are now NULL.

Is that the expected behavior (I ran under R 2.7.1)?

Assuming it is, it's kind of unfortunate.  I can wrap my setClass code
like this
if (! isClass("foo")) setClass("foo", )
to avoid this problem.  I've seen this in other code; is that the
standard solution? 

I thought that loading a library was about the same as executing the
code in its source files (assuming R only code).  But if this were true
my saved objects would be nulled out each time I loaded the
corresponding library.  That does not seem to be the case.  Can anyone
explain that?

Do I need to put any protections around setMethod so that it doesn't run
if the method is already defined?

At the moment I'm not changing the class definition but am changing the
methods, so I can simply avoid running setClass.

But if I want to change the class, most likely by adding a slot, what do
I do?  At the moment it looks as if I'd need to make a new class name,
define some coerce methods, and then locate and change the relevant
instances.  Is there a better way?
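The migration route described above (new class name plus coerce methods) can at least be made mechanical. A minimal sketch, where the class names are hypothetical and the default fill for the new slot is an assumption:

```r
library(methods)

## Old definition, standing in for the one the saved objects used:
setClass("foo", representation(a = "numeric"))
old <- new("foo", a = 1)            # stand-in for a restored object

## New definition under a scratch name, with the extra slot:
setClass("fooV2", representation(a = "numeric", b = "numeric"))

## Coercion that fills the new slot with a default (assumed to be 0):
setAs("foo", "fooV2", function(from) new("fooV2", a = from@a, b = 0))

migrated <- as(old, "fooV2")
stopifnot(migrated@a == 1, migrated@b == 0)
```

Once everything is migrated and re-saved, a later session (or package version) can register the new representation under the original name.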

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] S4 class redefinition

2009-06-30 Thread Ross Boylan
On Tue, 2009-06-30 at 12:58 -0700, Ross Boylan wrote:
> I haven't found much on S4 class redefinition; the little I've seen
> indicates the following is to be expected:
> 1. setClass("foo", )
> 2. create objects of class foo.
> 3. execute the same setClass("foo", ...) again (source the same file).
> 4. objects from step 2 are now NULL.
I'm sorry; step 4 is completely wrong.  The objects seem to be
preserved.  Some slightly modified questions remain.

Is it safe to reexecute identical code for setClass or setMethod when
you have existing objects of the class around?

Is there any protection, such as checking for existing definitions, that
is recommended before executing setClass or setMethod?

If you want to change a class or method, and have existing objects, how
do you do that?

Can scoping rules lead to situations in which some functions or methods
end up with references to the older version of the methods?  One example
is relevant to class constructors, and shows they can:

Here's a little test
> trivial <- function() 3  # stand in for a class constructor
> maker <- function(c=trivial)
+   function(x) x+c()
> oldf <- maker()
> oldf(4)
[1] 7
> trivial <- function() 20
> oldf(4)
[1] 7
> newf <- maker()
> newf(8)
[1] 28

So the old definition is frozen in the inner function, for which it was
captured by lexical scope.  Although the definition of maker is not
redone after trivial is redefined, maker's default argument does get the
new value of trivial (default arguments are evaluated lazily, when first
forced, and look up the then-current binding).

Methods add another layer.  I'm hoping those with a deeper understanding
than mine can clarify where the danger spots are, and how to deal with
them.

Thanks.
Ross
> 
> Is that the expected behavior (I ran under R 2.7.1)?
> 
> Assuming it is, it's kind of unfortunate.  I can wrap my setClass code
> like this
> if (! isClass("foo")) setClass("foo", )
> to avoid this problem.  I've seen this in other code; is that the
> standard solution? 
> 
> I thought that loading a library was about the same as executing the
> code in its source files (assuming R only code).  But if this were true
> my saved objects would be nulled out each time I loaded the
> corresponding library.  That does not seem to be the case.  Can anyone
> explain that?
> 
> Do I need to put any protections around setMethod so that it doesn't run
> if the method is already defined?
> 
> At the moment I'm not changing the class definition but am changing the
> methods, so I can simply avoid running setClass.
> 
> But if I want to change the class, most likely by adding a slot, what do
> I do?  At the moment it looks as if I'd need to make a new class name,
> define some coerce methods, and then locate and change the relevant
> instances.  Is there a better way?
> 
> Thanks.
> Ross Boylan
> 
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] beginner's guide to C++ programming with R packages?

2009-07-03 Thread Ross Boylan
On Fri, 2009-06-26 at 16:17 -0400, Whit Armstrong wrote:
> > But this draws me back to the basic question.  I don't want to run R
> > CMD INSTALL 20 times per hour.  How do developers "actually" test
> > their code?
> 
> check out RUnit for tests.
> http://cran.r-project.org/web/packages/RUnit/index.html
> 
> as for testing c++ code.  I have taken an approach which is probably
> different than most.  I try to build my package as a c++ library that
> can be used independent of R.  Then you can test your library with
> whatever c++ test suite that you prefer.  Once you are happy, then
I also have C++ tests that operate separately from R, though I have a
very small library of stub R functions to get the thing to build.  There
have been some tricky issues with R (if one links to the regular R
library) and the test framework fighting over who was "main."  I think
that's why I switched to the stub.

Working only with R level tests alone does not permit the kind of lower
level testing that you can get by running your own unit tests.  I use
the boost unit test framework.

Of course, you want R level tests too.  Some of my upper level C++ tests
are mirror images of R tests; this can help identify if a problem lies
at the interface.

For C++ tests I build my code in conjunction with a main program.

I think I also have or had a test building it as a library, but I don't
use that much.

For R, my modules get built into a library.

It's usually cleaner to build the R library from a fresh version of the
sources; otherwise scraps of my other builds tend to end up in the R
package.

Thanks, Whit, for the  pointers to Rcpp and RAbstraction.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] user supplied random number generators

2009-07-29 Thread Ross Boylan
?Random.user says (in svn trunk)
  Optionally,
  functions \code{user_unif_nseed} and \code{user_unif_seedloc} can be
  supplied which are called with no arguments and should return pointers
  to the number of seeds and to an integer array of seeds.  Calls to
  \code{GetRNGstate} and \code{PutRNGstate} will then copy this array to
  and from \code{.Random.seed}.
And it offers as an example
  void  user_unif_init(Int32 seed_in) { seed = seed_in; }
  int * user_unif_nseed() { return &nseed; }
  int * user_unif_seedloc() { return (int *) &seed; }

First question: what is the lifetime of the buffers pointed to by the
user_unif-* functions, and who is responsible for cleaning them up?  In
the help file they are static variables, but in general they might be
allocated on the heap or might be in structures that only persist as
long as the generator does.

Since the example uses static variables, it seems reasonable to conclude
the core R code is not going to try to free them.

Second, are the types really correct?  The documentation seems quite
explicit, all the more so because it uses Int32 in places.  However, the
code in RNG.c (RNG_Init) says

ns = *((int *) User_unif_nseed());
if (ns < 0 || ns > 625) {
warning(_("seed length must be in 0...625; ignored"));
break;
}
RNG_Table[kind].n_seed = ns;
RNG_Table[kind].i_seed = (Int32 *) User_unif_seedloc();
consistent with the earlier definition of RNG_Table entries as
typedef struct {
RNGtype kind;
N01type Nkind;
char *name; /* print name */
int n_seed; /* length of seed vector */
Int32 *i_seed;
} RNGTAB;

This suggests that the type of user_unif_seedloc is Int32*, not int *.
It also suggests that user_unif_nseed should return the number of 32 bit
integers.  The code for PutRNGstate(), for example, uses them in just
that way.

While the dominant model, even on 64 bit hardware, is probably to leave
int as 32 bit, it doesn't seem wise to assume that is always the case.

I got into this because I'm trying to extend the rsprng code; sprng
returns its state as a vector of bytes.  Converting these to a vector of
integers depends on the integer length, hence my interest in the exact
definition of integer.  I'm interested in lifetime because I believe
those bytes are associated with the stream and become invalid when the
stream is freed; furthermore, I probably need to copy them into a buffer
that is padded to full wordlength.  This means I allocate the buffer
whose address is returned to the core R RNG machinery.  Eventually
somebody needs to free the memory.

Far more of my rsprng adventures are on
http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:rsprng.  Feel
free to read, correct, or extend it.

Thanks.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] user supplied random number generators

2009-07-30 Thread Ross Boylan
On Thu, 2009-07-30 at 12:32 +0200, Christophe Dutang wrote:
> > This suggests that the type of user_unif_seedloc is Int32*, not int
> *.
> > It also suggests that user_unif_nseed should return the number of
> 32  
> > bit
> > integers.  The code for PutRNGstate(), for example, uses them in
> just
> > that way.
> >
> > While the dominant model, even on 64 bit hardware, is probably to  
> > leave
> > int as 32 bit, it doesn't seem wise to assume that is always the
> case.
> 
> You can test the size of an int with a configure script. see for  
> example the package foreign, the package randtoolbox (can be found
> in  
> Rmetrics R forge project) I maintain with Petr Savicky.
http://cran.r-project.org/doc/manuals/R-admin.html#Choosing-between-32_002d-and-64_002dbit-builds
 says "All current versions of R use 32-bit integers".

Also, sizeof(int) works at runtime.  But my question was really about
whether code for user defined RNGs should be written using Int32 or int
as the target type for the state vector.  The R core code suggests to me
one should use Int32, but the documentation says int.

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] user supplied random number generators

2009-08-18 Thread Ross Boylan
On Sun, 2009-08-16 at 21:24 +0200, Petr Savicky wrote:
> Dear Ross Boylan:
> 
> Some time ago, you sent an email to R-devel with the following.
> > I got into this because I'm trying to extend the rsprng code; sprng
> > returns its state as a vector of bytes.  Converting these to a vector of
> > integers depends on the integer length, hence my interest in the exact
> > definition of integer.  I'm interested in lifetime because I believe
> > those bytes are associated with the stream and become invalid when the
> > stream is freed; furthermore, I probably need to copy them into a buffer
> > that is padded to full wordlength.  This means I allocate the buffer
> > whose address is returned to the core R RNG machinery.  Eventually
> > somebody needs to free the memory.
> > 
> > Far more of my rsprng adventures are on
> > http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:rsprng.  Feel
> > free to read, correct, or extend it.
> 
> I am interested to know, what is the current state of your project.
I did figure out some of the lifetime issues; SPRNG does allocate memory
when you ask it for its state.  I also realized that for several reasons
it would not be appropriate to hand that buffer to R.

I've reworked the page extensively since it had the section you quote.
See particularly the "Getting and Setting Stream State" section near the
bottom.

I submitted patches to hook rsprng into R's standard machinery for
stream state (the user visible part of which is .Random.seed).  The
package developer has reservations about applying them.

As a practical matter, I shifted my package's C code to call back to R
to get random numbers.  If rsprng is loaded and activated, my code will
use it.  I also eliminated all attempts to set the seed in my code.  For
rsprng, in its current form, the R set.seed() function is a no-op and
you have to use an rsprng function to set the seed (generally when
activating the library).
> 
> There is a package rngwell19937 with a random number generator, which
> I develop and use for several parallel processes. Setting a seed may
> be done by a vector, one of whose components is the process number.
> The initialization then provides unrelated sequences for different
> processes.
That sounds interesting; thanks for pointing it out.
> 
> Seeding by a vector is also available in the initialization of
> Mersenne Twister from 2002. See mt19937ar.c (ar for array) at
>   http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html
> Unfortunately, seeding by a vector is not available in R base. R uses
> Mersenne Twister, but with an initialization by a single number.
I think that one could write to .Random.seed directly to set a vector
for many of the generators.  ?.Random.seed does not recommend this and
notes various limits and hazards of this strategy.
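For the default generator the state really is exposed as an integer vector; a sketch of direct inspection (standard base R — the layout is documented in ?Random; nothing here involves rsprng):

```r
# Select the default generator and seed it.
RNGkind("Mersenne-Twister")
set.seed(1)
# .Random.seed[1] encodes the RNG kind; the remainder is the
# generator's state (624 words plus a position counter for MT).
head(.Random.seed, 3)
length(.Random.seed)
```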

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] mysteriously persistent generic definition

2009-10-22 Thread Ross Boylan
Originally I made a function yearStop that took an argument "object".  I
made a generic, but later changed the argument to "x".  R keeps
resurrecting the old definition.  Could anyone explain what is going on,
or how to fix it?  Note particularly the end of the transcript below: I
remove the generic, verify that the symbol is undefined, make a new
function, and then make a generic.  But the generic does not use the
argument of the new function definition.


> args(yearStop)
function (obj) 
NULL
> yearStop <- function(x) x@yearStop
> args(yearStop)
function (x) 
NULL
> setGeneric("yearStop")
[1] "yearStop"
> args(yearStop)
function (obj) 
NULL
> removeGeneric("yearStop")
[1] TRUE
> args(yearStop)
Error in args(yearStop) : object "yearStop" not found
> yearStop <- function(x) x@yearStop
> setGeneric("yearStop")
[1] "yearStop"
> args(yearStop)
function (obj) 
NULL


R 2.7.1.  I originally read the definitions in from a file with ^c^l in
ESS; however, I typed the commands above by hand.

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] mysteriously persistent generic definition

2009-10-28 Thread Ross Boylan
Here's a self-contained example of the problem:

> foo <- function(obj) {return(3);}
> setGeneric("foo")
[1] "foo"
> removeGeneric("foo")
[1] TRUE
> foo <- function(x) {return(4);}
> args(foo)
function (x) 
NULL
> setGeneric("foo")
[1] "foo"
> args(foo)
function (obj) 
NULL

R 2.7.1.  I get the same behavior whether or not I use ESS.

The reason this is more than a theoretical problem:

> setMethod("foo", signature(x="numeric"), function(x) {return(x+4);})
Error in match.call(fun, fcall) : unused argument(s) (x = "numeric")

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] mysteriously persistent generic definition

2009-10-28 Thread Ross Boylan
R 2.8.1 on Windows behaves as I expected, i.e., the final args(foo)
returns a function of x.  The previous example (below) was on Debian
GNU/Linux.


On Wed, 2009-10-28 at 12:14 -0700, Ross Boylan wrote:
> Here's a self-contained example of the problem:
> 
> > foo <- function(obj) {return(3);}
> > setGeneric("foo")
> [1] "foo"
> > removeGeneric("foo")
> [1] TRUE
> > foo <- function(x) {return(4);}
> > args(foo)
> function (x) 
> NULL
> > setGeneric("foo")
> [1] "foo"
> > args(foo)
> function (obj) 
> NULL
> 
> R 2.7.1.  I get the same behavior whether or not I use ESS.
> 
> The reason this is more than a theoretical problem:
> 
> > setMethod("foo", signature(x="numeric"), function(x) {return(x+4);})
> Error in match.call(fun, fcall) : unused argument(s) (x = "numeric")
> 
> Ross 
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] bug in heatmap?

2009-11-05 Thread Ross Boylan
Using R 2.10 on WinXP
heatmap(mymap, Rowv=NA, Colv=NA)
with mymap values of
       0   1           2          3     4
0    NaN   0.0         0.00621118 0.000 NaN
1      0   0.0         0.01041667 0.125 NaN
2      0   0.004705882 0.02105263 0.333 NaN
3      0   0.004081633 0.0222     0.500   0
4      0   0.0         0.01923077 0.167 NaN
6      0   0.0         0.         0.000 NaN
10     0   0.002840909 0.         0.000 NaN
20     0   0.002159827 0.         NaN   NaN
40   NaN   0.009433962 0.         NaN   NaN
(the first row and column are labels, not data)
produces a plot in which all of the row labelled 6 (all 0's and NaN) is
white.  This is the same color showing for the NaN values.

In contrast, all other 0 values appear as dark red.

Have I missed some subtlety, or is this a bug?
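One possible explanation (my assumption — the thread does not confirm it): heatmap() scales rows by default (scale = "row"), and scaling a constant row divides by a standard deviation of zero, turning every value in that row into NaN, which then renders like the genuine NaN cells:

```r
# heatmap() defaults to scale = "row"; scaling a constant row
# divides by sd = 0, producing NaN for every entry in that row.
m <- rbind(constant = c(0, 0, 0, 0),
           varying  = c(0, 1, 2, 3))
scaled <- t(scale(t(m)))   # per-row scaling, as heatmap does internally
scaled["constant", ]       # all NaN
# Passing scale = "none" to heatmap() keeps raw zeros distinct
# from genuine NaN cells.
```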

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] group generics

2009-11-24 Thread Ross Boylan
I have classes A and B, where B contains A.  In the implementation of
the group generic for B I would like to use the corresponding group
generic for A.  Is there a way to do that?

I would also appreciate any comments if what I'm trying to do seems like
the wrong approach.

Here's a stripped down example:
setClass("A",
 representation=representation(xa="numeric")
 )

setMethod("Arith", signature(e1="numeric", e2="A"), function(e1, e2) {
  new("A", xa = e1 * e2@xa)
}
  )

setClass("B",
 representation=representation(xb="numeric"),
 contains=c("A")
 )

setMethod("Arith", signature(e1="numeric", e2="B"), function(e1, e2) {
# the next line does not work right
  v <- selectMethod("callGeneric", signature=c("numeric", "A"))(e1, e2)
  print(v)
  new("B", v, xb = e1 * e2@xb)
}
)


Results:
> t1 <- new("B", new("A", xa=4), xb=2)
> t1
An object of class “B”
Slot "xb":
[1] 2

Slot "xa":
[1] 4

> 3*t1
Error in getGeneric(f, !optional) : 
  no generic function found for "callGeneric"

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] group generics

2009-11-30 Thread Ross Boylan
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1



Martin Morgan wrote:
> Hi Ross -- 
> 
> Ross Boylan  writes:
> 
>> I have classes A and B, where B contains A.  In the implementation of
>> the group generic for B I would like to use the corresponding group
>> generic for A.  Is there a way to do that?

>> setMethod("Arith", signature(e1="numeric", e2="B"), function(e1, e2) {
>>  # the next line does not work right
>>   v <- selectMethod("callGeneric", signature=c("numeric", "A"))(e1, e2)
> 
> v <- callGeneric(e1, as(e2, "A"))
> 
> or probably
> 
>v <- callNextMethod(e1, e2)
> 
> Martin

A different error this time, one that looks a lot like the report from
stephen.p...@ubs.com on 2007-12-24 concerning callNextMethod, except
this is with callGeneric.

HOWEVER, the problem is erratic; when I started from scratch and took
this code into a workspace and executed the commands, they worked as
expected.  I had various false starts and revisions, as well as the real
code on which the example is based, when the error occurred.  I tried
taking in the real code (which defines generics with Arith from my
actual classes, and which also fails as below), and the example still
worked.


My revised code:

setClass("A",
 representation=representation(xa="numeric")
 )

setMethod("Arith", signature(e1="numeric", e2="A"), function(e1, e2) {
  new("A", xa=callGeneric(e1, e2@xa))
}
 )

setClass("B",
 representation=representation(xb="numeric"),
 contains=c("A")
 )

setMethod("Arith", signature(e1="numeric", e2="B"), function(e1, e2) {
  new("B", xb=e1 * e2@xb, callNextMethod())
}
)

Results:
> options(error=recover)
> tb <- new("B", xb=1:3, new("A", xa=10))
> 3*tb
Error in get(fname, envir = envir) : object '.nextMethod' not found

Enter a frame number, or 0 to exit

 1: 3 * tb
 2: 3 * tb
 3: test.R#16: new("B", xb = e1 * e2@xb, callNextMethod())
 4: initialize(value, ...)
 5: initialize(value, ...)
 6: callNextMethod()
 7: .nextMethod(e1 = e1, e2 = e2)
 8: test.R#6: new("A", xa = callGeneric(e1, e2@xa))
 9: initialize(value, ...)
10: initialize(value, ...)
11: callGeneric(e1, e2@xa)
12: get(fname, envir = envir)

Selection: 0

The callGeneric in frame 11 is trying to get the primitive for
multiplying numeric times numeric.  Quoting from Pope's analysis:
[The primitive...]
> does not get the various "magic" variables such as .Generic, .Method,
> etc. defined in its frame. Thus, callGeneric() fails when, failing to
> find ".Generic" then takes the function symbol for the call (which
> callNextMethod() has constructed to be ".nextMethod") and attempts to
> look it up, which of course also fails, leading to the resulting error
> seen above.

I'm baffled, and hoping someone on the list has an idea.
I'm running R 2.10 under ESS (in particular, I use c-c c-l in the code
file to read in the code) on XP.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAksUTcQACgkQTEwcvZWfjMgEdwCfYt/bmsXG76rq3BpbByBYNjLY
ubsAoKnBnBMbd+OlBL2YOg3vWslL35Zg
=D58x
-END PGP SIGNATURE-

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] group generics

2009-12-03 Thread Ross Boylan
Thanks for your help.  I had two concerns about using as: that it would
impose some overhead, and that it would require me to code an explicit
conversion function.  I see now that the latter is not true; I don't
know if the overhead makes much difference.

On Thu, 2009-12-03 at 13:00 -0800, Martin Morgan wrote:
> setMethod("Arith", signature(e1="numeric", e2="B"), function(e1, e2) {
> new("B", xb=e1 * e2@xb, callGeneric(e1, as(e2, "A")))
> })

Things were getting too weird, so I punted and used explicitly named
function calls for the multiplication operation that was causing
trouble.
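A self-contained version of the pattern that resolved the thread (a sketch based on Martin's as() suggestion; class and slot names follow the earlier example, and the expected values are mine):

```r
library(methods)

setClass("A", representation(xa = "numeric"))
setClass("B", representation(xb = "numeric"), contains = "A")

setMethod("Arith", signature(e1 = "numeric", e2 = "A"), function(e1, e2) {
  # Inside a group-generic method .Generic is set, so callGeneric()
  # re-dispatches the same operator on the slot values.
  new("A", xa = callGeneric(e1, e2@xa))
})

setMethod("Arith", signature(e1 = "numeric", e2 = "B"), function(e1, e2) {
  # Recurse into the A method via an explicit coercion, avoiding
  # the .nextMethod trouble callNextMethod() ran into above.
  new("B", xb = callGeneric(e1, e2@xb), callGeneric(e1, as(e2, "A")))
})

tb <- new("B", xb = 1:3, new("A", xa = 10))
(3 * tb)@xa  # 30
(3 * tb)@xb  # 3 6 9
```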

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] group generics

2009-12-03 Thread Ross Boylan
On Thu, 2009-12-03 at 14:25 -0800, John Chambers wrote:
> I missed the earlier round of this discussion and only am commenting
> now to say that this doesn't seem weird at all, if I understand what
> you're trying to do.
> 
> Martin's basic suggestion,
> v <- callGeneric(e1, as(e2, "A"))
> seems the simplest solution.
> 
> You just want to make another call to the actual generic function,
> with new arguments, and let method selection take place.  In fact,
> it's pretty much the standard way to use group generics.  
> 
> John
There were 2 weird parts.  Mainly I was referring to the fact that
identical code (posted earlier) worked sometimes and not others.  I
could not figure out what the differences were between the 2 scenarios,
nor could I create a non-working scenario reliably.

The second part that seemed weird was that the code looked as if it
should work all the time (the last full version I posted, which used
callNextMethod() rather than callGeneric()).

Finally, I felt somewhat at sea with the group generics, since I wasn't
sure exactly how they worked, how they interacted with primitives, or
how they interacted with callNextMethod, selectMethod, etc.  I did study
what I thought were the relevant help entries.

Ross
> 
> 
> Ross Boylan wrote: 
> > Thanks for your help.  I had two concerns about using as: that it would
> > impose some overhead, and that it would require me to code an explicit
> > conversion function.  I see now that the latter is not true; I don't
> > know if the overhead makes much difference.
> > 
> > On Thu, 2009-12-03 at 13:00 -0800, Martin Morgan wrote:
> >   
> > > setMethod("Arith", signature(e1="numeric", e2="B"), function(e1, e2) {
> > > new("B", xb=e1 * e2@xb, callGeneric(e1, as(e2, "A")))
> > > })
> > > 
> > 
> > Things were getting too weird, so I punted and used explicitly named
> > function calls for the multiplication operation that was causing
> > trouble.
> > 
> > Ross
> > 
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> > 
> >

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] ?setGeneric garbled (PR#14153)

2009-12-17 Thread Ross Boylan
On Thu, 2009-12-17 at 15:24 +0100, Martin Maechler wrote:
> >>>>> Ross Boylan 
> >>>>> on Thu, 17 Dec 2009 02:15:12 +0100 (CET) writes:
> 
> > Full_Name: Ross Boylan
> > Version: 2.10.0
> > OS: Windows XP
> > Submission from: (NULL) (198.144.201.14)
> 
> 
> > Some of the help for setGeneric seems to have been garbled.  In the
> > section "Basic Use", 5th paragraph (where the example counts as a
> > single line 3rd paragraph) it says
> > 
> > Note that calling 'setGeneric()' in this form is not strictly
> > necessary before calling 'setMethod()' for the same function.  If
> > the function specified in the call to 'setMethod' is not generic,
> > 'setMethod' will execute the call to 'setGeneric' itself.
> > Declaring explicitly that you want the function to be generic can
> > be considered better programming style; the only difference in the
> > result, however, is that not doing so produces a You cannot (and
> > never need to) create an explicit generic version of the primitive
> > functions in the base package.
> > 
> 
> > The stuff after the semi-colon of the final sentence is garbled, or
> > at least unparseable by me.  Probably something got deleted by mistake.
> 
> That's very peculiar.
> 
> The corresponding  methods/man/setGeneric.Rd file has not been
> changed in a while,
> but I don't see your problem.

The help from R launched directly from the R shortcut on my desktop
looks fine, in both 2.10 and 2.8.

I closed all my emacs sessions and restarted, but ?setGeneric produces
the same garbled text.  I also tried telling ESS to use a different
working directory when launching R; it didn't help.


The last sentence of this paragraph is also garbled:

 The description above is the effect when the package that owns the
 non-generic function has not created an implicit generic version.
 Otherwise, it is this implicit generic function that is us_same_
 version of the generic function will be created each time.


Weird.

P.S. http://bugs.r-project.org was extremely sluggish, even timing out,
both yesterday and today for me.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Rd output garbled in some circumstances

2010-01-13 Thread Ross Boylan
I'm having trouble getting correct help output in some circumstances for
a package I've created. Though this is not an issue with the current R,
I would like my package to work with previous ones as well.

I'm looking for suggestions about how I could rework my .Rd file so that
it will work with prior R's.  In particular, R 2.7 is in the latest
stable release of Debian, so I'd like to solve the problem for 2.7.

The .Rd file is for a function and has an arguments section like this
\arguments{
  \item{formula}{ A formula giving the vectors containing
## skipped
covariates.  }
## skipped
  \item{stepdenominator}{See \code{stepnumerator} just above.}
  
  \item{do.what}{\describe{
\item{1}{By default, calculates a maximum likelihood.  To evaluate
  a single likelihood, set all parameters to fixed. }
\item{0}{Count number of paths and related statistics without
  evaluating the likelihood.}
\item{-1}{Get detailed counts (but no likelihoods) associated with
  each case.  The return value is a matrix.}
\item{10}{Use the model to generate a random path for each
  case. returning a \code{data.frame} with simulated observed states
  and times and all other data as observed.}
}}

  \item{testing}{This argument is  only for use by developers.  Set it
## etc

This comes out fine in a pdf, but ?mspath (the function) produces, in
part,

stepdenominator: See 'stepnumerator' just above.

 1 By default, calculates a maximum likelihood.  To evaluate a
  single likelihood, set all parameters to fixed. 

 0 Count number of paths and related statistics without evaluating
  the likelihood.

in R 2.7.  The "do.what" header has vanished.  In R 2.10 it's fine.

Is there an error in my documentation format?
Even if not, is there some change I could make that would get R 2.7 to
work better?

The R change log doesn't show anything obviously related to this, though
it has several references to unspecified fixes to the documentation
system.  I also tried looking at the bug tracker, but couldn't find
anything--in fact I had trouble identifying bugs in the documentation
system as opposed to bugs in the documentation.
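One way to rework the entry so an older converter has no nested item lists to mishandle is to flatten the value descriptions into the argument's own text. This is a hypothetical rewrite, not the fix eventually used (the author later reported simply removing the nested markup):

```
\item{do.what}{Controls what is computed.  1 (the default):
  maximize the likelihood; to evaluate a single likelihood, set
  all parameters to fixed.  0: count paths and related statistics
  without evaluating the likelihood.  -1: return detailed counts
  (but no likelihoods) for each case, as a matrix.  10: use the
  model to generate a random path for each case, returning a
  \code{data.frame} with simulated observed states and times.}
```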

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] optional package dependency

2010-01-14 Thread Ross Boylan
I have a package that can use rmpi, but works fine without it.  None of
the automatic test code invokes rmpi functionality.  (One test file
illustrates how to use it, but has quit() as its first command.)

What's the best way to handle this?  In particular, what is the
appropriate form for upload to CRAN?

When I omitted rmpi from the DESCRIPTION file R CMD check gave 

* checking R code for possible problems ... NOTE
alldone: no visible global function definition for ‘mpi.bcast.Robj’
alldone: no visible global function definition for ‘mpi.exit’

followed by many more warnings.

When I add
Suggests: Rmpi
in DESCRIPTION the check stops if the package is not installed:

* checking package dependencies ... ERROR
Packages required but not available:
  Rmpi

Rmpi is not required, but I gather from previous discussion on this list
that suggests basically means required for R CMD check.

NAMESPACE seems to raise similar issues; I don't see any mechanism for
optional imports.  Also, I have not used namespaces, and am not eager to
destabilize things so close to release.  At least, I hope I'm close to
release :)
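The run-time guard described above can be sketched like this (the Rmpi function name comes from the check output earlier in this message; `run_fit` and its logic are hypothetical):

```r
# Hypothetical entry point that only needs Rmpi when distributed
# execution is requested; plain serial R otherwise.
run_fit <- function(data, distributed = FALSE) {
  if (distributed) {
    # require() returns FALSE rather than erroring when the
    # package is absent (modern code would use requireNamespace()).
    if (!require("Rmpi", quietly = TRUE))
      stop("distributed mode needs the Rmpi package; ",
           "run with distributed = FALSE instead")
    # ... mpi.bcast.Robj() etc. would go here ...
  }
  # serial code path
  sum(data)
}

run_fit(1:3)  # 6
```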

Thanks for any advice.

Ross Boylan

P.S. Thanks, Duncan, for your recent advice on my help format problem
with R 2.7.  I removed the nested \description, and now things look OK.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] optional package dependency

2010-01-15 Thread Ross Boylan
On Fri, 2010-01-15 at 09:19 +0100, Kurt Hornik wrote:
> The idea is that maintainers typically want to
> fully check their functionality, suggesting to force suggests by
> default.
This might be the nub of the problem.  There are different audiences,
even for R CMD check.

The maintainer probably wants to check all functionality.  Even then,
there is an issue if functionality differs by platform.

CRAN probably wants to check all functionality.

An individual user just wants to check the functionality they use.

For example, if someone doesn't want to run my package in distributed
mode, but wants to see if it works (R CMD check), they need to be able
to avoid the potentially onerous requirement to install MPI.

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] optional package dependency (enhances)

2010-01-15 Thread Ross Boylan
On Fri, 2010-01-15 at 10:48 +, Benilton Carvalho wrote:
> How about using:
> 
> Enhances: Rmpi
> 
> ?
> 
> b
The main reason is that "enhances" seems a peculiar way to describe the
relation between a package that (optionally) uses a piece of
infrastructure and the infrastructure.  Similarly, I would not say that
a car enhances metal.  The example given in the R extension
documentation ("e.g., by providing methods for classes from these
packages") seems more in line with the usual meaning of enhance.

A secondary reason is that I can not tell from the documentation exactly
what putting a package in enhances does.  The example of adding
functionality to a class suggests that packages that are enhanced are
required.  However, clearly one could surround code that enhanced a
class from another package with a conditional, so that the code was
skipped if the enhanced package was absent.  Even that logic isn't quite
right if the enhanced package is added later.

My package only loads/verifies the presence of rmpi if one attempts to
use the distributed features, so the relation is at run time, not load
time.

Ross
> 
> On Fri, Jan 15, 2010 at 6:00 AM, Ross Boylan  wrote:
> > I have a package that can use rmpi, but works fine without it.  None of
> > the automatic test code invokes rmpi functionality.  (One test file
> > illustrates how to use it, but has quit() as its first command.)
> >
> > What's the best way to handle this?  In particular, what is the
> > appropriate form for upload to CRAN?
> >
> > When I omitted rmpi from the DESCRIPTION file R CMD check gave
> > 
> > * checking R code for possible problems ... NOTE
> > alldone: no visible global function definition for ‘mpi.bcast.Robj’
> > alldone: no visible global function definition for ‘mpi.exit’
> > 
> > followed by many more warnings.
> >
> > When I add
> > Suggests: Rmpi
> > in DESCRIPTION the check stops if the package is not installed:
> > 
> > * checking package dependencies ... ERROR
> > Packages required but not available:
> >  Rmpi
> > 
> > Rmpi is not required, but I gather from previous discussion on this list
> > that suggests basically means required for R CMD check.
> >
> > NAMESPACE seems to raise similar issues; I don't see any mechanism for
> > optional imports.  Also, I have not used namespaces, and am not eager to
> > destabilize things so close to release.  At least, I hope I'm close to
> > release :)
> >
> > Thanks for any advice.
> >
> > Ross Boylan
> >
> > P.S. Thanks, Duncan, for your recent advice on my help format problem
> > with R 2.7.  I removed the nested \description, and now things look OK.
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] optional package dependency (suggestions/wishes)

2010-01-15 Thread Ross Boylan
On Fri, 2010-01-15 at 12:34 -0500, Simon Urbanek wrote:
> 
> On Jan 15, 2010, at 12:18 , Ross Boylan wrote:
> 
> > On Fri, 2010-01-15 at 09:19 +0100, Kurt Hornik wrote:
> >> The idea is that maintainers typically want to
> >> fully check their functionality, suggesting to force suggests by
> >> default.
> > This might be the nub of the problem.  There are different
> > audiences, even for R CMD check.
> >
> > The maintainer probably wants to check all functionality.  Even
> > then, there is an issue if functionality differs by platform.
> >
> > CRAN probably wants to check all functionality.
> >
> > An individual user just wants to check the functionality they use.
> >
> > For example, if someone doesn't want to run my package in
> > distributed mode, but wants to see if it works (R CMD check), they
> > need to be able to avoid the potentially onerous requirement to
> > install MPI.
> >
> 
> ... that's why you can decide to run check without forcing
> suggests - it's entirely up to you / the user, as Kurt pointed out ...
> 
> Cheers,
> Simon
This prompts a series of increasing ambitious suggestions:

1. DOCUMENTATION CHANGE
I suggest this info about _R_CHECK_FORCE_SUGGESTS_=false be added to R
CMD check --help.

Until Kurt's email I was unaware of the facility, and it seems to me the
average package user will be even less likely to know.  My concern is
that they would run R CMD check; it would fail because a package such as
rmpi is absent; and they would throw up their hands and give up.

I did find a Perl variable with similar name in section 1.3.3 of
"Writing R Extensions", but that section does not mention environment
variables.  It would also be unnatural for a package user to refer to
it.
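For reference, the invocation being discussed looks like this (the environment variable is the real check-customization variable; the package filename is a placeholder):

```
# Let R CMD check proceed even when some Suggests: packages
# (here, Rmpi) are not installed.
_R_CHECK_FORCE_SUGGESTS_=false R CMD check mypkg_1.0.tar.gz
```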

Considering there are many variables, maybe the interactive help should
just note that customization variables are available (without naming
particular ones), and point to the appropriate documentation.

2. NEW BEHAVIOR/OPTIONS
On even more exotic wish would be to allow a list of suggested packages
to check.  That way, someone use some, but not all, optional facilities
could check the ones of interest.  Again, even with better documentation
it seems likely most people would be unaware of the feature.

3. SIGNIFICANTLY CHANGED BEHAVIOR
I think the optimal behavior would be for the check environment to
attempt to load all suggested packages, but continue even if some are
missing.  It would then be up to package authors to code appropriate
conditional tests for the presence or absence of suggested packages;
actually, that's probably true even now.

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] optional package dependency

2010-01-16 Thread Ross Boylan
On Sat, 2010-01-16 at 07:49 -0800, Seth Falcon wrote:
> Package authors
> should be responsible enough to test their codes with and without
> optional features.
It seems unlikely most package authors will have access to a full range
of platform types.
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] calling setGeneric() twice

2010-01-19 Thread Ross Boylan
Is it safe to call setGeneric twice, assuming some setMethod's for the
target function occur in between?  By "safe" I mean that all the
setMethod's remain in effect, and the 2nd call is, effectively, a no-op.

?setGeneric says nothing explicit about this behavior that I can see.
It does say that if there is an existing implicity generic function it
will be (re?)used. I also tried ?Methods, google and the mailing list
archives.

I looked at the code for setGeneric, but I'm not confident how it
behaves.  It doesn't seem to do a simple return of the existing value if
a generic already exists, although it does have special handling for
that case.  The other problem with looking at the code--or running
tests--is that they only show the current behavior, which might change
later.

This came up because of some issues with the sequencing of code in my
package.  Adding duplicate setGeneric's seems like the smallest, and
therefore safest, change if the duplication is not a problem.

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] calling setGeneric() twice

2010-01-19 Thread Ross Boylan
On Tue, 2010-01-19 at 10:05 -0800, Seth Falcon wrote:
> > This came up because of some issues with the sequencing of code in
> my
> > package.  Adding duplicate setGeneric's seems like the smallest, and
> > therefore safest, change if the duplication is not a problem.
> 
> I'm not sure of the answer to your question, but I think it is the
> wrong 
> question :-)
> 
> Perhaps you can provide more detail on why you are using multiple
> calls 
> to setGeneric.  That seems like a very odd thing to do.
My system is defined in a collection of .R files, most of which are
organized around classes.  So the typical file has a setClass(),
setGeneric()'s, and setMethod()'s.

If files that were read in later in the sequence extended an existing
generic, I omitted the setGeneric().

I had to resequence the order in which the files were read to avoid some
"undefined slot classes" warnings.  The resequencing created other
problems, including some cases in which I had a setMethod without a
previous setGeneric.

I have seen the advice to sequence the files so that class definitions,
then generic definitions, and finally function and method definitions
occur.  I am trying not to do that for two reasons.  First, I'm trying
to keep the changes I make small to avoid introducing errors.  Second, I
prefer to keep all the code related to a single class in a single file.

Some of the files were intended for free-standing use, and so it would
be useful if they could retain setGeneric()'s even if I also need an
earlier setGeneric to make the whole package work.

I am also working on a python script to extract all the generic function
definitions (that is, setGeneric()), just in case.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] calling setGeneric() twice (don't; documentation comments)

2010-01-19 Thread Ross Boylan
On Tue, 2010-01-19 at 12:55 -0800, Seth Falcon wrote:
> I would expect setGeneric to create a new generic function and
> nuke/mask 
> methods associated with the generic that it replaces.
I tried a test in R 2.7.1, and that is the behavior.  I think it would
be worthwhile to document it in ?setGeneric.

Also, ?setGeneric advocates first defining a regular function (e.g.,
bar) and then doing a simple setGeneric("bar").  I think the advice for
package developers is different, so perhaps some changes there would be
a good idea too.

I thought I was defining setGeneric twice for a few functions, and thus
that it did work OK.  It turns out I have no duplicate definitions.

Here's the test:
> setClass("A", representation(z="ANY"))
[1] "A"
> setClass("B", representation(y="ANY"))
[1] "B"
> setGeneric("foo", function(x) standardGeneric("foo"))
[1] "foo"
> setMethod("foo", signature(x="A"), function(x) return("foo for A"))
[1] "foo"
> 
> a=new("A")
> b=new("B")
> foo(a)
[1] "foo for A"
> foo(b)
Error in function (classes, fdef, mtable)  : 
  unable to find an inherited method for function "foo", for signature "B"
> setGeneric("foo", function(x) standardGeneric("foo"))
[1] "foo"
> setMethod("foo", signature(x="B"), function(x) return("foo for B"))
[1] "foo"
> 
> setGeneric("foo", function(x) standardGeneric("foo"))
[1] "foo"
> setMethod("foo", signature(x="B"), function(x) return("foo for B"))
[1] "foo"
> foo(a)
# here's where the disappearance of the prior setMethod shows
Error in function (classes, fdef, mtable)  : 
  unable to find an inherited method for function "foo", for signature "A"
> foo(b)
[1] "foo for B"

So I guess I am going to pull the setGeneric's out.
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Rgeneric.py assists in rearranging generic function definitions

2010-01-21 Thread Ross Boylan
I've attached a script I wrote that pulls all the setGeneric definitions
out of a set of R files and puts them in a separate file, default
allGenerics.R.  I thought it might help others who find themselves in a
similar situation.

The "situation" was that I had to change the order in which files in my
package were parsed; the scheme in which the generic definition is in
the "first" file that has the corresponding setMethod breaks under
re-ordering.  So I pulled out all the definitions and put them first.

In retrospect, it is clearly preferable to create allGenerics.R from
the start.  If you didn't, and discover you should have, the script
automates the conversion.
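In the DESCRIPTION file this ordering is expressed with the Collate: field, roughly like the following (file names hypothetical):

```
Collate: allGenerics.R classA.R classB.R methods.R
```

Files are then parsed in exactly this order, so the generics file is guaranteed to be read before any setMethod() calls.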

Thanks to everyone who helped me with my packaging problems.  The
package finally made it to CRAN as
http://cran.r-project.org/web/packages/mspath/index.html.  I'll send a
public notice of that to the general R list.

Ross Boylan
__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Rgeneric.py assists in rearranging generic function definitions [inline]

2010-01-25 Thread Ross Boylan
On Thu, 2010-01-21 at 11:38 -0800, Ross Boylan wrote:
> I've attached a script I wrote that pulls all the setGeneric definitions
> out of a set of R files and puts them in a separate file, default
> allGenerics.R.  I thought it might help others who find themselves in a
> similar situation.
> 
> The "situation" was that I had to change the order in which files in my
> package were parsed; the scheme in which the generic definition is in
> the "first" file that has the corresponding setMethod breaks under
> re-ordering.  So I pulled out all the definitions and put them first.
> 
> In retrospect, it is clearly preferable to create allGenerics.R from
> the start.  If you didn't, and discover you should have, the script
> automates the conversion.
> 
> Thanks to everyone who helped me with my packaging problems.  The
> package finally made it to CRAN as
> http://cran.r-project.org/web/packages/mspath/index.html.  I'll send a
> public notice of that to the general R list.
> 
> Ross Boylan

Apparently the attachment didn't make it through. I've pasted
Rgeneric.py below.
#! /usr/bin/python
# python 2.5 required for with statement
from __future__ import with_statement

# Rgeneric.py extracts setGeneric definitions from R sources and 
# writes them to a special file, while removing them from the
# original.
#
# Context: In a system with several R files, having generic
# definitions sprinkled throughout, there are errors arising from the
# sequencing of files, or of definitions within files.  In general,
# changing the order in which files are parsed (e.g., by the Collate:
# field in DESCRIPTION) will break things even when they were
# working.  For example, a setMethod may occur before the
# corresponding setGeneric, and then fail.  Given that it is not safe
# to call setGeneric twice for the same function, the cleanest
# solution may be to move all the generic definitions to a separate
# file that will be read before any of the setMethod's.  Rgeneric.py
# helps automate that process.
#
# It is, of course, preferable not to get into this situation in the
# first place, for example by creating an allGenerics.R file as you
# go.

# Typical usage: ./Rgeneric.py *.R
# Will create allGenerics.R with all the extracted generic
# definitions, including any preceding comments.
# Rewrites the *.R files, replacing the setGeneric's with comments
# indicating the generic has moved to allGenerics.R.
# *.R.old has the original .R files.
#
# The program does not work for all conceivable styles.  In
# particular, it assumes that
#1. setGeneric is immediately followed by an open parenthesis and
#   a quoted name of the function.  Subsequent parts of the
#   definition may be split across lines and have interspersed
#   comments.
#
#2. Comments precede the definition.  They are optional, and will
#   be left in place in the .R file and copied to allGenerics.R.
#
#3. If you first define an ordinary function foo, and then do
#   setGeneric("foo") the setGeneric will be moved to
#   allGenerics.R.  It will not work properly there; you should
#   make manual adjustments such as moving it back to the
#   original.  The code at the bottom reports on all such
#   definitions, and then lists all the generic functions processed.
#
#4. allGenerics.R will contain generic definitions in the order of
#   files examined, and in the order they are defined within the
#   file.  This is to preserve context for the comments, in
#   particular for comments which apply to a block of
#   definitions.  If you would like something else, e.g.,
#   alphabetical ordering, you should post-process the AllForKey
#   object created at the bottom of this file.


#
# There are program (not command line) options to do a read-only scan,
# and a class to hold the results, which can be inspected in various
# ways.

# Copyright 2010 Regents of University of California
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# See <http://www.gnu.org/licenses/> for the full license.

# Author: Ross Boylan 
#
# Revision History:
#
# 1.0 2010-01-21 Initial release.

import os, os.path, re, sys

class ParseGeneric:
"""Extract setGeneric functions and preceding comments in one file.
states of the parser:
findComment -- look for start of comment
inComment -- found c

[Rd] y ~ X -1 , X a matrix

2010-03-17 Thread Ross Boylan
While browsing some code I discovered a call to lm that used a formula y
~ X - 1, where X was a matrix.

Looking through the documentation of formula, lm, model.matrix and maybe
some others I couldn't find this usage (R 2.10.1).  Is it anything I
can count on in future versions?  Is there documentation I've
overlooked?

For the curious: model.frame on the above equation returns a data.frame
with 2 columns.  The second "column" is the whole X matrix.
model.matrix on that object returns the expected matrix, with the
transition from the odd model.frame to the regular matrix happening in
an .Internal call.
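
For reference, a minimal self-contained sketch of the behavior described (synthetic data, illustrative only):

```r
## A matrix on the right-hand side of a formula: each column becomes
## one coefficient; "- 1" drops the intercept.
set.seed(1)
X <- matrix(rnorm(40), ncol = 2, dimnames = list(NULL, c("a", "b")))
y <- drop(X %*% c(1, -1)) + rnorm(20, sd = 0.1)

fit <- lm(y ~ X - 1)
coef(fit)                 # one coefficient per column of X: Xa, Xb

mf <- model.frame(y ~ X - 1)
ncol(mf)                  # 2: y, plus the whole matrix as one "column"
dim(mf[[2]])              # the X block is kept intact as a 20 x 2 matrix
dim(model.matrix(fit))    # 20 x 2: expanded back into ordinary columns
```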

Thanks.
Ross

P.S. I would appreciate cc's, since mail problems are preventing me from
seeing list mail.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] y ~ X -1 , X a matrix

2010-03-17 Thread Ross Boylan
On Thu, 2010-03-18 at 00:57 +, ted.hard...@manchester.ac.uk wrote:
> On 17-Mar-10 23:32:41, Ross Boylan wrote:
> > While browsing some code I discovered a call to lm that used
> > a formula y ~ X - 1, where X was a matrix.
> > 
> > Looking through the documentation of formula, lm, model.matrix
> > and maybe some others I couldn't find this usage (R 2.10.1).
> > Is it anything I can count on in future versions?  Is there
> > documentation I've overlooked?
> > 
> > For the curious: model.frame on the above equation returns a
> > data.frame with 2 columns.  The second "column" is the whole X
> > matrix. model.matrix on that object returns the expected matrix,
> > with the transition from the odd model.frame to the regular
> > matrix happening in an .Internal call.
> > 
> > Thanks.
> > Ross
> > 
> > P.S. I would appreciate cc's, since mail problems are preventing
> > me from seeing list mail.
> 
> Hmmm ... I'm not sure what is the problem with what you describe.
There is no problem in the "it doesn't work" sense.
There is a problem that it seems undocumented--though the help you quote
could rather indirectly be taken as a clue--and thus, possibly, subject
to change in later releases.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] problems extracting parts of a summary object

2010-03-22 Thread Ross Boylan
summary(x), where x is the output of lm, produces the expected display,
including standard errors of the coefficients.

summary(x)$coefficients produces a vector (x is r$individual[[2]]):
> r$individual[[2]]$coefficients
 tX(Intercept)    tXcigspmkr        tXpeld    tXsmkpreve            mn
 -2.449188e+04 -4.143249e+00  4.707007e+04 -3.112334e+01  1.671106e-01
    mncigspmkr        mnpeld    mnsmkpreve
  3.580065e+00  2.029721e+05  4.404915e+01
> class(r$individual[[2]]$coefficients)
[1] "numeric"

rather than the expected matrix like object with a column for the se's.

When I trace through the summary method, the coefficients value is a
matrix.

I'm trying to pull out the standard errors for some rearranged output.

How can I do that?

And what's going on?  I suspect this may be a namespace issue.

Thanks.
Ross Boylan

P.S. I would appreciate a cc because of some mail problems I'm having.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] problems extracting parts of a summary object

2010-03-22 Thread Ross Boylan
On Mon, 2010-03-22 at 16:30 -0600, tpl...@cybermesa.com wrote:
> Are you sure you're extracting the coefficients component of the summary
> object and not the lm object?
> 
> Seems to work ok for me:
> > xm <- lm(y ~ x, data=data.frame(y=rnorm(20), x=rnorm(2)))
> > summary(xm)$coefficients
> Estimate Std. Error  t value  Pr(>|t|)
> (Intercept) 1.908948   1.707145 1.118210 0.2781794
> x   1.292263   1.174608 1.100165 0.2857565
> > xm$coefficients
> (Intercept)   x
>    1.908948    1.292263
> 
> -- Tony Plate
> class(summary(r$individual[[2]]))
[1] "summary.lm"
But maybe I'm not following the question.
Ross

> 
> On Mon, March 22, 2010 4:03 pm, Ross Boylan wrote:
> > summary(x), where x is the output of lm, produces the expected display,
> > including standard errors of the coefficients.
> >
> > summary(x)$coefficients produces a vector (x is r$individual[[2]]):
> >> r$individual[[2]]$coefficients
> >  tX(Intercept)    tXcigspmkr        tXpeld    tXsmkpreve            mn
> >  -2.449188e+04 -4.143249e+00  4.707007e+04 -3.112334e+01  1.671106e-01
> >     mncigspmkr        mnpeld    mnsmkpreve
> >   3.580065e+00  2.029721e+05  4.404915e+01
> >> class(r$individual[[2]]$coefficients)
> > [1] "numeric"
> >
> > rather than the expected matrix like object with a column for the se's.
> >
> > When I trace through the summary method, the coefficients value is a
> > matrix.
> >
> > I'm trying to pull out the standard errors for some rearranged output.
> >
> > How can I do that?
> >
> > And what's going on?  I suspect this may be a namespace issue.
> >
> > Thanks.
> > Ross Boylan
> >
> > P.S. I would appreciate a cc because of some mail problems I'm having.
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >
> >
> 
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] problems extracting parts of a summary object [solved]

2010-03-22 Thread Ross Boylan
On Mon, 2010-03-22 at 16:52 -0600, tpl...@cybermesa.com wrote:
> what's the output of:
> > summary(r$individual[[2]])$coef
> 
> my question was basically whether you were doing
> summary(r$individual[[2]])$coef
> or
> r$individual[[2]]$coef
> 
> (the second was what you appeared to be doing from your initial email)
> 
Doh!  Thank you; that was it.
This was interacting with another error, which is perhaps how I managed
to miss it.

Ross
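
For anyone hitting the same confusion, the distinction in a self-contained form (using the built-in cars data rather than the original objects):

```r
fit <- lm(dist ~ speed, data = cars)

fit$coefficients            # plain named numeric vector: estimates only
summary(fit)$coefficients   # matrix: Estimate, Std. Error, t value, Pr(>|t|)

## The standard errors the original post was after:
summary(fit)$coefficients[, "Std. Error"]
```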
> -- Tony Plate
> 
> On Mon, March 22, 2010 4:43 pm, Ross Boylan wrote:
> > On Mon, 2010-03-22 at 16:30 -0600, tpl...@cybermesa.com wrote:
> >> Are you sure you're extracting the coefficients component of the summary
> >> object and not the lm object?
> >>
> >> Seems to work ok for me:
> >> > xm <- lm(y ~ x, data=data.frame(y=rnorm(20), x=rnorm(2)))
> >> > summary(xm)$coefficients
> >> Estimate Std. Error  t value  Pr(>|t|)
> >> (Intercept) 1.908948   1.707145 1.118210 0.2781794
> >> x   1.292263   1.174608 1.100165 0.2857565
> >> > xm$coefficients
> >> (Intercept)   x
> >>    1.908948    1.292263
> >>
> >> -- Tony Plate
> >> class(summary(r$individual[[2]]))
> > [1] "summary.lm"
> > But maybe I'm not following the question.
> > Ross
> >
> >>
> >> On Mon, March 22, 2010 4:03 pm, Ross Boylan wrote:
> >> > summary(x), where x is the output of lm, produces the expected
> >> display,
> >> > including standard errors of the coefficients.
> >> >
> >> > summary(x)$coefficients produces a vector (x is r$individual[[2]]):
> >> >> r$individual[[2]]$coefficients
> >> >  tX(Intercept)    tXcigspmkr        tXpeld    tXsmkpreve            mn
> >> >  -2.449188e+04 -4.143249e+00  4.707007e+04 -3.112334e+01  1.671106e-01
> >> >     mncigspmkr        mnpeld    mnsmkpreve
> >> >   3.580065e+00  2.029721e+05  4.404915e+01
> >> >> class(r$individual[[2]]$coefficients)
> >> > [1] "numeric"
> >> >
> >> > rather than the expected matrix like object with a column for the
> >> se's.
> >> >
> >> > When I trace through the summary method, the coefficients value is a
> >> > matrix.
> >> >
> >> > I'm trying to pull out the standard errors for some rearranged output.
> >> >
> >> > How can I do that?
> >> >
> >> > And what's going on?  I suspect this may be a namespace issue.
> >> >
> >> > Thanks.
> >> > Ross Boylan
> >> >
> >> > P.S. I would appreciate a cc because of some mail problems I'm having.
> >> >
> >> > __
> >> > R-devel@r-project.org mailing list
> >> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >> >
> >> >
> >>
> >>
> >
> >
> >
> 
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Problem using ofstream in C++ class in package for MacOS X

2007-02-08 Thread Ross Boylan
On Sun, Feb 04, 2007 at 10:47:37PM +0100, cstrato wrote:
> Seth Falcon wrote:
> > cstrato <[EMAIL PROTECTED]> writes:
> >   
> >> Thank you for your fast answer.
> >> Sorrowly, I don´t know how to use a debugger on MacOS X, I am using 
> >> old-style print commands.
> >> 
> >
> > You should be able to use gdb on OS X (works for me, YMMV).  So you
> > could try:
> >
> >   R -d gdb
> >   run
> >   # source a script that causes crash
> >   # back in gdb, use backtrace, etc.
> >
> > + seth
> >
> >
> >   
> Dear Seth
> 
> Thank you for this tip, I just tried it and here is the result:
> 
> Welcome to MyClass
>  > writeFileCpp("myout_fileCpp.txt")
> [1] "outfile =  myout_fileCpp.txt"
> Writing file myout_fileCpp.txt using C++ style.
> ---MyClassA::MyClassA()-
> ---MyClassA::WriteFileCpp-
> 
> Program received signal EXC_BAD_ACCESS, Could not access memory.
> Reason: KERN_PROTECTION_FAILURE at address: 0x0006
> 0x020fe231 in std::ostream::flush (this=0x214f178) at 
> /Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc:395
> 395 
> /Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc: 
> No such file or directory.
> in 
> /Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc
> (gdb)
> 
> It seems that it cannot find ostream.tcc, whatever this extension means.
> 
> Best regards
> Christian

I also don't see what the problem is, but have a couple of thoughts.
Under OS-X there is an environment variable you can define to get the
dynamic linker to load debug versions of libraries.  I can't remember
what it is, but maybe something like DYLD_DEBUG (but probably DEBUG is
part of the value of the variable).

For that, or the tracing above, to be fully informative you need to
have installed the appropriate debugging libraries and sources.

You may need to set an explicit source search path in gdb to pick up
the source files.

Try stepping through the code from write before the crash to determine
exactly where it runs into trouble.

Does the output file you are trying to create exist?

Unfortunately, none of this really gets at your core bug, but it might
help track it down.

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] One possible use for threads in R

2007-02-08 Thread Ross Boylan
I have been using R on a cluster with some work that does not
parallelize neatly because the time individual computations take
varies widely and unpredictably.

So I've considered implementing a work-stealing arrangement, in which
idle nodes grab tasks from busy nodes.  It might also be useful for
nodes to communicate results with each other.

My first thought on handling this was to have one R thread that
managed the communication, and 2 that managed computation (each node
is dual-processor).

Previous discussion has noted that R is not multi-threaded, and also
asked what use cases multi-threading might address.  So here's a use
case.

The advantage of having R doing the communication is that it's easy to
pass R-level objects around using, e.g., Rmpi.  The advantage of
having the communicator and the calculators share the same thread is
that work and information the communicator got would be immediately
available to the calculators.

Other comments suggested IPC is fast (though one comment referred
specifically to Linux, and the cluster is OS-X), so it may be quite
workable to have each thread in a separate process.

I'm not at all sure the implementation I sketched above is the best
approach to this problem (or even that it would be if R were
multi-threaded), but it does seem to me this might be one area where
threads would be handy in R.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Problem using ofstream in C++ class in package for MacOS X

2007-02-08 Thread Ross Boylan
On Thu, Feb 08, 2007 at 10:04:14PM +0100, cstrato wrote:
> Ross Boylan wrote:
> >On Sun, Feb 04, 2007 at 10:47:37PM +0100, cstrato wrote:
> >  
> >>Seth Falcon wrote:
> >>
> >>>cstrato <[EMAIL PROTECTED]> writes:
> >>>  
> >>>  
> >>>>Thank you for your fast answer.
> >>>>Sorrowly, I don´t know how to use a debugger on MacOS X, I am using 
> >>>>old-style print commands.
> >>>>
> >>>>
> >>>You should be able to use gdb on OS X (works for me, YMMV).  So you
> >>>could try:
> >>>
> >>>  R -d gdb
> >>>  run
> >>>  # source a script that causes crash
> >>>  # back in gdb, use backtrace, etc.
> >>>
> >>>+ seth
> >>>
> >>>
> >>>  
> >>>  
> >>Dear Seth
> >>
> >>Thank you for this tip, I just tried it and here is the result:
> >>
> >>Welcome to MyClass
> >> > writeFileCpp("myout_fileCpp.txt")
> >>[1] "outfile =  myout_fileCpp.txt"
> >>Writing file myout_fileCpp.txt using C++ style.
> >>---MyClassA::MyClassA()-
> >>---MyClassA::WriteFileCpp-
> >>
> >>Program received signal EXC_BAD_ACCESS, Could not access memory.
> >>Reason: KERN_PROTECTION_FAILURE at address: 0x0006
> >>0x020fe231 in std::ostream::flush (this=0x214f178) at 
> >>/Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc:395
> >>395 
> >>/Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc: 
> >>No such file or directory.
> >>in 
> >>/Builds/unix/o403/i686-apple-darwin8/libstdc++-v3/include/bits/ostream.tcc
> >>(gdb)
> >>
> >>It seems that it cannot find ostream.tcc, whatever this extension means.
> >>
> >>Best regards
> >>Christian
> >>
> >
> >I also don't see what the problem is, but have a couple of thoughts.
> >Under OS-X there is an environment variable you can define to get the
> >dynamic linker to load debug versions of libraries.  I can't remember
> >what it is, but maybe something like DYLD_DEBUG (but probably DEBUG is
> >part of the value of the variable).
> >
> >For that, or the tracing above, to be fully informative you need to
> >have installed the appropriate debugging libraries and sources.
> >
> >You may need to set an explicit source search path in gdb to pick up
> >the source files.
> >
> >Try stepping through the code from write before the crash to determine
> >exactly where it runs into trouble.
> >
> >Does the output file you are trying to create exist?
> >
> >Unfortunately, none of this really gets at your core bug, but it might
> >help track it down.
> >
> >Ross
> >
> >
> >  
> Dear Ross
> 
> Thank you, my problem is that I know exactly where the problem is but 
> not how to solve it.
> 
> I have installed R-2.4.1 on three different machines to test the package:
> - Intel-Laptop running Fedora Core 4: package is OK
> - PPC-PowerBook Titanium OS X 10.4.4: package is OK
> - Intel-MacBook Pro Core 2 Duo OS X 10.4.8:   C output OK, C++ output 
> crashes R
> 
> The following code in method WriteFileCpp() works, but gives no result:
> {
>  std::ofstream output(outfile);
>  output.close();
> }
> 
> The following code in method WriteFileCpp() crashes R:
> {
>  std::ofstream output(outfile);
>  output << "21" << endl;
>  output.close();
> }
> 
> It seems that on the Intel-MacBook Pro the operator "<<" is not 
> recognized, when called from within my package in R.
> In contrast, when compiled as a C++ library, the same code does work on 
> my Intel-Mac!
> 
> Best regards
> Christian
> 
> 
Knowing the line isn't as specific as knowing exactly where it was
when it crashed.

Your stack trace was
Thread 0 Crashed:
0   libstdc++.6.dylib 0x020fe231 std::basic_ostream<char, std::char_traits<char> >::flush() + 17 (ostream.tcc:395)
1   libstdc++.6.dylib 0x020fe358 std::basic_ostream<char, std::char_traits<char> >::sentry::sentry[in-charge](std::basic_ostream<char, std::char_traits<char> >&) + 120 (ostream.tcc:56)
2   libstdc++.6.dylib 0x02100b5d std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*) + 29 (ostream.tcc:620)
3   MyClass.so        0x0004a30f MyClassA::WriteFileCpp(char const*)
which doesn't look as if the problem is that << is unrecognized.

Re: [Rd] Problem using ofstream in C++ class in package for MacOS X

2007-02-08 Thread Ross Boylan
On Thu, Feb 08, 2007 at 11:53:21PM +0100, cstrato wrote:
...
> >Maybe there's some subtle linker problem, or a problem with the
> >representation of strings
> >
> >
> >  
> What do you mean with linker problem?
> 
Nothing very specific, but generically wrong options, wrong
objects/libraries, or wrong order of the first 2.  "Wrong" includes
omitting something that should be there or including something that
shouldn't.

Linking on OS-X is unconventional relative to other systems I have
used.  In particular, one usually gets lots of errors about duplicate
symbols (which can be turned off, at some risk) and needs to specify
flat rather than 2-level namespace.  There's lots more if you look at
the linker page (man ld).

Similar issues can arise at the compiler phase too.

Another fun thing on OS-X is that they have a libtool that is
different from the GNU libtool, and your project might use both.  So
you need to be sure to get the right one.  But it's unlikely you could
even build if that were an issue.

If different parts (e.g., R vs your code) are built with different
options, that can cause trouble.

For example, my Makefile has
MAINCXXFLAGS := $(shell R CMD config --cppflags) -std=c++98 -Wall \
	-I$(TRUESRCDIR)
This relies on GNU make features.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] pinning down symbol values (Scoping/Promises) question

2007-02-16 Thread Ross Boylan
I would like to define a function using symbols, but freeze the symbols
at their current values at the time of definition.  Both symbols
referring to the global scope and symbols referring to arguments are at
issue. Consider this (R 2.4.0):
> k1 <- 5
> k
[1] 100
> a <- function(z) function() z+k
> a1 <- a(k1)
> k1 <- 2
> k <- 3
> a1()
[1] 5
> k <- 10
> k1 <- 100
> a1()
[1] 12

First, I'm a little surprised that that the value for k1 seems to get
pinned by the initial evaluation of a1.  I expected the final value to
be 110 because the z in z+k is a promise.

Second, how do I pin the values to the ones that obtain when the
different functions are invoked?  In other words, how should a be
defined so that a1() gets me 5+100 in the previous example?

I have a partial solution (for k), but it's ugly.  With k = 1 and k1 =
100,
> a <- eval(substitute(function(z) function() z+x, list(x=k)))
> k <- 20
> a1 <- a(k1)
> a1()
[1] 101
(by the way, I thought a <- eval(substitute(function(z) function() z+k))
would work, but it didn't).

This seems to pin the passed in argument as well, though it's even
uglier:
> a <- eval(substitute(function(z) { z; function() z+x}, list(x=k)))
> a1 <- a(k1)
> k1 <- 5
> a1()
[1] 120
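
A somewhat cleaner sketch of the same pinning, using force() for the argument and a snapshot variable for the global (the usual idiom for defeating lazy evaluation; variable names are illustrative):

```r
k <- 100
a <- local({
  k.fixed <- k               # snapshot the global at definition time
  function(z) {
    force(z)                 # evaluate the promise now, not at first use
    function() z + k.fixed
  }
})

k1 <- 5
a1 <- a(k1)
k <- 10; k1 <- 999           # later changes no longer matter
a1()                         # 105, i.e. the original 5 + 100
```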

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R Language Manual: possible error

2007-02-16 Thread Ross Boylan
The R Language manual, section 4.3.4 ("Scope"), has
 f <- function(x) {
 y <- 10
 g <- function(x) x + y
 return(g)
 }
 h <- f()
 h(3)
  
... When `h(3)' is evaluated we see that its body is that of `g'.
Within that body `x' and `y' are unbound.

Is that last sentence right?  It looks to me as if x is a bound
variable, and the definitions given in the elided material seem to say
so too.  I guess there is a hidden, outer x that is unbound.  Maybe the
example was meant to be 
g <- function(a) a + y?
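
Running the manual's own example suggests x is indeed bound (to h's argument), and only y is resolved in the enclosing frame:

```r
f <- function(x) {
  y <- 10
  g <- function(x) x + y
  return(g)
}
h <- f()
h(3)    # 13: x is bound to the argument 3; y is found in f's environment
```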

The front page of the manual says
 The current version of this document is 2.4.0 (2006-11-25) DRAFT.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] "try"ing to understand condition handling

2007-02-19 Thread Ross Boylan
I'm confused by the page documenting tryCatch and friends.

I think it describes 3 separate mechanisms: tryCatch (in which control
returns to the invoking tryCatch), withCallHandlers (in which control
goes up to the calling handler/s but then continues from the point at
which signalCondition() was invoked), and withRestarts (I can't tell
where control ends up).

For tryCatch the docs say the arguments ... provide handlers, and that
these are matched to the condition.  It appears that matching works by
providing entries in ... as named arguments, and the handler matches
if the name is one of the classes of the condition.  Is that right?  I
don't see the matching rule explicitly stated.  And then  the handler
itself is a single argument function, where the argument is the
condition?

My reading is that if some code executes signalCondition and it is
running inside a tryCatch, control will not return to the line after
the signalCondition.  Whereas, if the context is withCallHandlers,
the call to signalCondition does return (with a NULL) and execution
continues.  That seems odd; do I have it right?

Also, the documents don't explicitly say that the abstract subclasses
of 'error' and 'warning' are subclasses of 'condition', though that
seems to be implied and true.

It appears that for tryCatch only the first matching handler is
executed, while for withCallHandlers all matching handlers are
executed.
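
A small sketch consistent with that reading (handler names are matched against the condition's class vector, in the order the handlers are supplied):

```r
r <- tryCatch(
  stop("boom"),                       # class: simpleError, error, condition
  error     = function(e) "first match wins",
  condition = function(c) "not reached by tryCatch"
)
r                                     # "first match wins"

## withCallingHandlers: the handler runs, then execution resumes in
## the body (here after muffling the warning via a restart).
withCallingHandlers(
  { warning("w"); "execution continued" },
  warning = function(w) {
    message("handler saw: ", conditionMessage(w))
    invokeRestart("muffleWarning")    # control returns; body resumes
  }
)
```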

And, finally, with restarts there is again the issue of how the name
in the name=function form gets matched to the condition, and the more
basic question of what happens.  My guess is that control stays with
the handler, but then this mechanism seems very similar to tryCatch
(with the addition of being able to pass extra arguments to the
handler and  maybe a more flexible handler specification).

Can anyone clarify any of this?

P.S. Is there any mechanism that would allow one to trap an interrupt,
like a ctl-C,  so that if the user hit ctl-C some state would be
changed but execution would then continue where it was?  I have in
mind the ctl-C handler setting a "time to finish up" flag which the
main code checks from time to time.

Thanks.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] "try"ing to understand condition handling

2007-02-20 Thread Ross Boylan
On Tue, Feb 20, 2007 at 07:35:51AM +, Prof Brian Ripley wrote:
> Since you have not told us what 'the documents' are (and only vaguely 
> named one), do you not think your own documentation is inadequate?
I mean the command description produced by ?tryCatch.
> 
> There are documents about the condition system on developer.r-project.org: 
> please consult them.
> 
OK, though I would hope the user level documentation would suffice.
> I quess 'ctl-C' is your private abbreviation for 'control C' (and not a 
> type of cancer): that generates an interrrupt in most (but not all) R 
> ports.  Where it does, you can set up interrupt handlers (as the help page 
> said)
> 
My P.S. concerned whether the code that was interrupted could continue
from the point of interruption.  As far as I can tell from ?tryCatch
there is not.

> 
> On Mon, 19 Feb 2007, Ross Boylan wrote:
> 
> >I'm confused by the page documenting tryCatch and friends.
> >
> >I think it describes 3 separate mechanisms: tryCatch (in which control
> >returns to the invoking tryCatch), withCallHandlers (in which control
> >goes up to the calling handler/s but then continues from the point at
> >which signalCondition() was invoked), and withRestarts (I can't tell
> >where control ends up).
> >
> >For tryCatch the docs say the arguments ... provide handlers, and that
> >these are matched to the condition.  It appears that matching works by
> >providing entries in ... as named arguments, and the handler matches
> >if the name is one of the classes of the condition.  Is that right?  I
> >don't see the matching rule explicitly stated.  And then  the handler
> >itself is a single argument function, where the argument is the
> >condition?
> >
> >My reading is that if some code executes signalCondition and it is
> >running inside a tryCatch, control will not return to the line after
> >the signalCondition.  Whereas, if the context is withCallHandlers,
> >the call to signalCondition does return (with a NULL) and execution
> >continues.  That seems odd; do I have it right?
> >
> >Also, the documents don't explicitly say that the abstract subclasses
> >of 'error' and 'warning' are subclasses of 'condition', though that
> >seems to be implied and true.
> >
> >It appears that for tryCatch only the first matching handler is
> >executed, while for withCallHandlers all matching handlers are
> >executed.
> >
> >And, finally, with restarts there is again the issue of how the name
> >in the name=function form gets matched to the condition, and the more
> >basic question of what happens.  My guess is that control stays with
> >the handler, but then this mechanism seems very similar to tryCatch
> >(with the addition of being able to pass extra arguments to the
> >handler and  maybe a more flexible handler specification).
> >
> >Can anyone clarify any of this?
> >
> >P.S. Is there any mechanism that would allow one to trap an interrupt,
> >like a ctl-C,  so that if the user hit ctl-C some state would be
> >changed but execution would then continue where it was?  I have in
> >mind the ctl-C handler setting a "time to finish up" flag which the
> >main code checks from time to time.
> >
> >Thanks.
> >
> >Ross Boylan
> >
> >__
> >R-devel@r-project.org mailing list
> >https://stat.ethz.ch/mailman/listinfo/r-devel
> >
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] "try"ing to understand condition handling (interrupts)

2007-02-20 Thread Ross Boylan
[resequencing and deleting for clarity]
On Tue, Feb 20, 2007 at 01:15:25PM -0600, Luke Tierney wrote:
> On Tue, 20 Feb 2007, Ross Boylan wrote:
> >>>P.S. Is there any mechanism that would allow one to trap an interrupt,
> >>>like a ctl-C,  so that if the user hit ctl-C some state would be
> >>>changed but execution would then continue where it was?  I have in
> >>>mind the ctl-C handler setting a "time to finish up" flag which the
> >>>maini code checks from time to time.
> 
> >On Tue, Feb 20, 2007 at 07:35:51AM +, Prof Brian Ripley wrote:
...

> >>I quess 'ctl-C' is your private abbreviation for 'control C' (and
> >>not a
[yes, ctl-C = control C, RB]
> >>type of cancer): that generates an interrrupt in most (but not all) R
> >>ports.  Where it does, you can set up interrupt handlers (as the help page
> >>said)
> >>
> >My P.S. concerned whether the code that was interrupted could continue
> >from the point of interruption.  As far as I can tell from ?tryCatch
> >there is not,
> 
> Currently interrupts cannot be handled in a way that allows them to
> continue at the point of interruption.  On some platforms that is not
> possible in all cases, and coming close to it is very difficult.  So
> for all practical purposes only tryCatch is currently useful for
> interrupt handling.  At some point disabling interrupts will be
> possible from the R level but currently I believe it is not.
> 
> Best,
> 
> luke
I had suspected that, since R is not thread-safe, handling
asynchronous events might be challenging.

I tried the following experiment on Linux:

> h<-function(e) print("Got You!")
> f<-function(n, delay) for (i in seq(n)) {Sys.sleep(delay); print(i)}
> withCallingHandlers(f(7,1), interrupt=h)
[1] 1
[1] "Got You!"

So in this case the withCallingHandlers acts like a tryCatch, in that
control does not return to the point of interruption.  However,
sys.calls within h does show where things were just before the
interrupt:
> h<-function(e) {print("Got You!"); print(sys.calls());}
> withCallingHandlers(f(7,1), interrupt=h)
[1] 1
[1] 2
[1] 3
[1] "Got You!"
[[1]]
withCallingHandlers(f(7, 1), interrupt = h)

[[2]]
f(7, 1)

[[3]]
Sys.sleep(delay)

[[4]]
function (e)
{
print("Got You!")
print(sys.calls())
}(list())


Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] "try"ing to understand condition handling

2007-02-20 Thread Ross Boylan
Thanks; your response is very helpful.  This message has some remarks
on my questions relative to the developer docs, one additional
question, and some documentation comments.

I'm really glad to hear you plan to revise the exception/condition
docs, since I found the existing ones a bit murky.

Below, [1] means
http://www.stat.uiowa.edu/~luke/R/exceptions/simpcond.html,
one of the documents Prof Ripley referred to.

That page also has a nice illustration of using the restart facility.

On Tue, Feb 20, 2007 at 01:40:11PM -0600, Luke Tierney wrote:
> On Mon, 19 Feb 2007, Ross Boylan wrote:
> 
> >I'm confused by the page documenting tryCatch and friends.
> >
> >I think it describes 3 separate mechanisms: tryCatch (in which control
> >returns to the invoking tryCatch), withCallHandlers (in which
  should have been "withCallingHandlers"
> >control
> >goes up to the calling handler/s but then continues from the point at
> >which signalCondition() was invoked),
> 
> unless a handler does a non-local exit, typically by invoking a restart
> 
> >and withRestarts (I can't tell
> >where control ends up).
> 
> at the withRestarts call
> 
> >For tryCatch the docs say the arguments ... provide handlers, and that
> >these are matched to the condition.  It appears that matching works by
> >providing entries in ... as named arguments, and the handler matches
> >if the name is one of the classes of the condition.  Is that right?  I
> >don't see the matching rule explicitly stated.  And then  the handler
> >itself is a single argument function, where the argument is the
> >condition?
From [1], while discussing tryCatch,

Handlers are specified as

name = fun

where name specifies an exception class and fun is a function of one
argument, the condition that is to be handled.  


...
> >
> >Also, the documents don't explicitly say that the abstract subclasses
> >of 'error' and 'warning' are subclasses of 'condition', though that
> >seems to be implied and true.
The class relations are explicit in [1].

> >
> >It appears that for tryCatch only the first matching handler is
> >executed, while for withCallHandlers all matching handlers are
> >executed.
> 
> All handlers are executed, most recently established first, until
> there are none left or there is a transfer of control.  Conceptually,
> exiting handlers established with tryCatch execute a transfer of
> control and then run their code.

Here's the one point of clarification: does the preceding paragraph
about "all handlers are executed" apply only to withCallingHandlers,
or does it include tryCatch as well?  Rereading ?tryCatch, it still
looks as if the first match only will fire.

> 
> Hopefully a more extensive document on this will get written in the
> next few months; for now the notes available off the developer page
> may be useful.
> 
> best,
> 
> luke
Great.  FWIW, here are some suggestions about the documentation:

I would find it easiest to follow a presentation that provides an
overall orientation and then works down to details.  So, going from
the top down:
1. there are 3 forms of exception handling: try/catch, calling
handlers and restarts.
2. the characteristic behavior of each is ... (i.e., what's the flow
of control).  Maybe give a snippet of typical uses of each.
3. the details (exact calling environment of the handler(s), matching
rules, syntax...)
4. try() is basically a convenient form of tryCatch.
5. Other relations between these 3 forms: what happens if they are
nested; how restarts alter the "standard" control flow of the other
forms.  I also found it useful, for orientation, to learn that the
restart mechanism is the most general and complicated of the three
(that might go under point 1).

It might be appropriate to document each form on a separate manual
page; I'm not sure if they are too linked (particularly by the use of
conditions and the control flow of restart) to make that a good idea.

I notice that some of the outline above is not the standard R manual
format; maybe the big picture should go in the language manual or on a
concept page (?Exceptions maybe).

Be explicit about the relations between conditions (class inheritance
relations).

Be explicit about how handlers are chosen and which forms they take.

It might be worth mentioning stuff that is a little surprising.  The
fact that the value of the finally clause is not the value of the tryCatch
was a little surprising, since usually the value of a series of
statements or expression is that of the last one.  The fact that
signalCondition can participate in two different flows of control
(discussion snipped above) was also surprising to me.  In both cases
the current ?tryCatch is pretty explicit already, so that

Re: [Rd] R/C++/memory leaks

2007-02-25 Thread Ross Boylan
On Sun, Feb 25, 2007 at 05:37:24PM +, Ernest Turro wrote:
> Dear all,
> 
> I have wrapped a C++ function in an R package. I allocate/deallocate  
> memory using C++ 'new' and 'delete'. In order to allow user  
> interrupts without memory leaks I've moved all the delete statements  
> required after an interrupt to a separate C++ function freeMemory(),  
> which is called using on.exit() just before the .C() call.

Do you mean that you call on.exit() before the .C, and the call to
on.exit() sets up the handler?  Your last sentence sounds as if you
invoke freeMemory() before the .C call.

Another approach is to associate your C objects with an R object, and
have them cleaned up when the R object gets garbage collected.
However, this requires switching to a .Call interface from the more
straightforward .C interface.

The finalizer call I used doesn't assure cleanup on exit. The optional
argument to R_RegisterCFinalizerEx might provide such assurance, but I
couldn't tell what it really does.  Since all memory should
be released by the OS, when the process ends, I wasn't so worried
about that.


Here's the pattern:
// I needed R_NO_REMAP to avoid name collisions.  You may not.
#define R_NO_REMAP 1
#include <R.h>
#include <Rinternals.h>

extern "C" {
// returns an |ExternalPtr|
SEXP makeManager(
@);


// user should not need to call
// cleanup
void finalizeManager(SEXP ptr);

}

SEXP makeManager(
@){
//  stuff

Manager* pmanager = new Manager(pd, pm.release(), 
*INTEGER(stepNumerator), *INTEGER(stepDenominator),
(*INTEGER(isexact)) != 0);

// one example didn't use |PROTECT()|
SEXP ptr;
Rf_protect(ptr = R_MakeExternalPtr(pmanager, R_NilValue, R_NilValue));
R_RegisterCFinalizer(ptr, (R_CFinalizer_t) finalizeManager);
Rf_unprotect(1);
return ptr;

}

void finalizeManager(SEXP ptr){
  Manager *pmanager = static_cast<Manager*>(R_ExternalPtrAddr(ptr));
  delete pmanager;
  R_ClearExternalPtr(ptr);
}

I'd love to hear from those more knowledgeable about whether I did
that right, and whether the FinalizerEx call can assure cleanup on
exit.

makeManager needs to be called from R like this:
  mgr <- .Call("makeManager", args)

> 
> I am concerned about the following. In square brackets you see R's  
> total virtual memory use (VIRT in `top`):
> 
> 1) Load library and data [178MB] (if I run gc(), then [122MB])
> 2) Just before .C [223MB]
> 3) Just before freeing memory [325MB]
So you explicitly call your freeMemory() function?
> 4) Just after freeing memory [288MB]
There are at least 3 possibilities:
  * your C++ code is leaking.
  * C++ memory is never really returned.  (Commonly, at least in C, the
  amount of memory allocated to the process never goes down, even if
  you do a free.  This may depend on the OS and the specific calls the
  program makes.)
  * You did other stuff in R that's still around.  After all, you went
  up +45MB between 1 and 2; maybe it's not so odd you went up +65MB
  between 2 and 4.
> 5) After running gc() [230MB]
> 
> So although the freeMemory function works (frees 37MB), R ends up  
> using 100MB more after the function call than before it. ls() only  
> returns the data object so no new objects have been added to the  
> workspace.
> 
> Do any of you have any idea what could be eating this memory?
> 
> Many thanks,
> 
> Ernest
> 
> PS: it is not practical to use R_alloc et al because C++ allocation/ 
> deallocation involves constructors/destructors and because the C++  
> code is also compiled into a standalone binary (I would rather avoid  
> maintaining two separate versions).

I use regular C++ new's too (except for the external pointer that's
returned).  However, you can override the operator new in C++ so that
it uses your own allocator, e.g., R_alloc.  I'm not sure about all the
implications that might make that dangerous (e.g., can the memory be
garbage collected?  can it be moved?).  Overriding new is a bit tricky
since there are several variants.  In particular, there is a variant
that throws an exception on failure and a nothrow variant.  Also,
individual classes can define their own new operators; if you have
any, you'd need to change those
too.

Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R/C++/memory leaks

2007-02-25 Thread Ross Boylan
Here are a few small follow-up comments:
On Sun, Feb 25, 2007 at 11:18:56PM +, Ernest Turro wrote:
> 
> On 25 Feb 2007, at 22:21, Ross Boylan wrote:
> 
> >On Sun, Feb 25, 2007 at 05:37:24PM +, Ernest Turro wrote:
> >>Dear all,
> >>
> >>I have wrapped a C++ function in an R package. I allocate/deallocate
> >>memory using C++ 'new' and 'delete'. In order to allow user
> >>interrupts without memory leaks I've moved all the delete statements
> >>required after an interrupt to a separate C++ function freeMemory(),
> >>which is called using on.exit() just before the .C() call.
> >
> >Do you mean that you call on.exit() before the .C, and the call to
> >on.exit() sets up the handler?  Your last sentence sounds as if you
> >invoke freeMemory() before the .C call.
> >
> 
> " 'on.exit' records the expression given as its argument as needing
>  to be executed when the current function exits (either naturally
>  or as the result of an error)."
> 
> 
> This means you call on.exit() somewhere at the top of the function.  
> You are guaranteed the expression you pass to on.exit() will be  
> executed before the function returns. So, even though you call on.exit 
> () before .C(), the expression you pass it will actually be called  
> after .C().
> 
> This means you can be sure that freeMemory() is called even if an  
> interrupt or other error occurs.
> 
> 
> >Another approach is to associate your C objects with an R object, and
> >have them cleaned up when the R object gets garbage collected.
> >However, this requires switching to a .Call interface from the more
> >straightforward .C interface.
[details snipped]
> 
> Since this is a standalone C++ program too, I'd rather use the R API  
> as little as possible... But I will look at your solution if I find  
> it is really necessary.. Thanks

The use of the R api can be confined to a wrapper function.  But I can
think of no reason that a change to the alternate approach I outlined
would solve the apparent leaking you describe.
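Incidentally, the on.exit() ordering you describe can be sketched like this (hypothetical function and message names, standing in for the real freeMemory() and .C() call):

```r
run_computation <- function() {
  ## Registered first, but evaluated only when this function exits,
  ## whether it returns normally, errors, or is interrupted.
  on.exit(cat("cleanup runs last\n"))
  cat("work runs first\n")
  ## In the package this is where the .C() call would go.
}
run_computation()
## work runs first
## cleanup runs last
```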

> 
> >>
> >>I am concerned about the following. In square brackets you see R's
> >>total virtual memory use (VIRT in `top`):
> >>
> >>1) Load library and data [178MB] (if I run gc(), then [122MB])
> >>2) Just before .C [223MB]
> >>3) Just before freeing memory [325MB]
> >So you explicitly call your freeMemory() function?
> 
> This is called thanks to on.exit()
> 
> >>4) Just after freeing memory [288MB]
> >There are at least 3 possibilities:
> >  * your C++ code is leaking
> 
> The number of news and deletes are the same, and so is their  
> branching... I don't think it is this.
> 
> >  * C++ memory is never really returned (Commonly, at least in C, the
> >  amount of memory allocated to the process never goes down, even if
> >  you do a free.  This may depend on the OS and the specific calls the
> >  program makes.
> 
> OK, but the memory should be freed after the process completes,
> surely?

Most OS's I know will free memory when a process finishes, except for
shared memory.  But is that relevant?  I assume the process doesn't
complete until you exit R.  Your puzzle seems to involve different
stages within the life of a single process.

> 
> >  * You did other stuff in R  that's still around.  After all you went
> >  up +45MB between 1 and 2; maybe it's not so odd you went up +65MB
> >  between 2 and 4.
> 
> Yep, I do stuff before .C and that accounts for the increase  
> before .C. But all the objects created before .C go out of scope by  
> 4) and so, after gc(), we should be back to 122MB. As I mentioned, ls 
> () after 5) returns only the data loaded in 1).

In principle (and according to ?on.exit) the expression registered by
on.exit is evaluated when the relevant function is exited.  In
principle garbage collection reclaims all unused space (though with no
guarantee of when).

It may be that the practice is looser than the principle.  For example,
Python always nominally managed memory for you, but I think for
quite a while it didn't really reclaim the memory (because garbage
collection didn't exist or had been turned off).


> 
> >>5) After running gc() [230MB]
> >>
> >>So although the freeMemory function works (frees 37MB), R ends up
> >>using 100MB more after the function call than before it. ls() only
> >>returns the data object so no new objects have been added to the
> >>workspace.
> >>
> >>Do any of you have any idea what could be eating this memory?
> >>
> >>Many t

Re: [Rd] R/C++/memory leaks

2007-02-26 Thread Ross Boylan
On Mon, 2007-02-26 at 16:08 +, Ernest Turro wrote:
> Thanks for your comments Ross. A couple more comments/queries below:
> 
> On 26 Feb 2007, at 06:43, Ross Boylan wrote:
> 
> > [details snipped]
> >
> > The use of the R api can be confined to a wrapper function.  But I can
> > think of no reason that a change to the alternate approach I outlined
> > would solve the apparent leaking you describe.
> >
> 
> I'm not sure I see how a wrapper function using the R API would  
> suffice. Example:
It doesn't sound as if it would suffice.  I was responding to your
original remark that

> Since this is a standalone C++ program too, I'd rather use the R API  
> as little as possible... But I will look at your solution if I find  
> it is really necessary.. Thanks

I thought that was expressing a concern about using the alternate
approach I outlined because it would use the R API.  If you need to use
that API for other reasons, you're still stuck with it :)
> 
> During heavy computation in the C++ function I need to allow  
> interrupts from R. This means that R_CheckUserInterrupt needs to be  
> called during the computation. Therefore, use of the R API can't be  
> confined to just the wrapper function.
> 
> In fact, I'm worried that some of the libraries I'm using are failing  
> to release memory after interrupt and that that is the problem. I  
> can't see what I could do about that... E.g.
> 
> #include <valarray>
> 
> valarray foo; // I don't know 100% that the foo object hasn't  
> allocated some memory. if the program is interrupted it wouldn't be  
> released
That's certainly possible, but you seem to be overlooking the
possibility that all the code is releasing memory appropriately, but the
process's memory footprint isn't going down correspondingly.  In my
experience that's fairly typical behavior.

In that case, depending on your point of view, you either don't have a
problem or you have a hard problem.  If you really want the memory
released back to the system, it's a hard problem.  If you don't care, as
long as you have no leaks, all's well.

> 
> I find it's very unfortunate that R_CheckUserInterrupt doesn't return  
> a value. If it did (e.g. if it returned true if an interrupt has  
> occurred), I could just branch off somewhere, clean up properly and  
> return to R.
> 
> Any ideas on how this could be achieved?
I can't tell from the info page what function gets called in R if there
is an interrupt, but it sounds as if you could do the following hack:
The R interrupt handler gets a function that calls a C function of your
devising.  The C function sets a flag meaning "interrupt requested".
Then in your main code, you periodically call R_CheckUserInterrupt.
When it returns you check the flag; if it's set, you cleanup and exit.
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R CMD check ignores .Rbuildignore?

2007-03-18 Thread Ross Boylan
The contents of .Rbuildignore seems to affect
R CMD build
but not 
R CMD check.

I'm using R 2.4.0 on Debian.

Is my understanding correct?  And is there anything I can do about it?
In my case, some of the excluded files contain references to other
libraries, so linking fails under R CMD check.  I realize I could add
the library to the build (with Makevars, I guess), but I do not want
to introduce the dependency.

Thanks.
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R CMD check ignores .Rbuildignore?

2007-03-19 Thread Ross Boylan
On Mon, Mar 19, 2007 at 11:33:37AM +0100, Martin Maechler wrote:
> >>>>> "RossB" == Ross Boylan <[EMAIL PROTECTED]>
> >>>>> on Sun, 18 Mar 2007 12:39:14 -0700 writes:
> 
> RossB> The contents of .Rbuildignore seems to affect
> RossB> R CMD build
> RossB> but not 
> RossB> R CMD check.
> 
> RossB> I'm using R 2.4.0 on Debian.
> 
> RossB> Is my understanding correct?  
> 
> yes.  That's why it's called 'buildignore'.
> It's a big feature for me as package developer:
It's more of a bug for me :(.  I was thinking of check as answering
the question "If I build this package, will it work?"
> 
> E.g., I can have extra tests (which e.g. only apply on my
> specific platform) in addition to those which are use when the
> package is built (to be checked on all possible platforms).
> 
> Some have proposed to additionally define a '.Rcheckignore' 
> but they haven't been convincing enough.

How about an option to have check use the buildignore file?  If there
are 2 separate files, there's always the risk they will get out of
sync.  Of course, in your case, you want them out of sync...

> 
> RossB> And is there anything I can do about it?
> 
> "First build, then check" is one way;
> Something which is recommended anyway in some cases,
> e.g., if you have an (Sweave - based) vignette.
Kurt Hornick, offlist, also advised this, as well as noting that using
R CMD check directly on the main development directory isn't really
supported.

From my perspective, needing to do a build before a check is extra
friction, which would reduce the amount of checking I do during
development.  Also, doesn't build do some of the same checks as check?

Minimally, I think some advice in the R Extensions manual needs to be
qualified:
In 
1.3 Checking and building packages
==

Before using these tools, please check that your package can be
installed and loaded.  `R CMD check' will _inter alia_ do this, but you
will get more informative error messages doing the checks directly.

---
There seem to be a couple of problems with this advice, aside from the
fact that it says to check first.  One problem is that the advice
seems internally inconsistent.  "Before using these tools" seems to
refer to the build and check tools in the section.  Since check is one
of the tools, it can't be used before using the tools.

Also, I still can't figure out what "doing the checks directly" refers
to.

Another section:
1.3.2 Building packages
---
[2 paragraphs skipped]

   Run-time checks whether the package works correctly should be
performed using `R CMD check' prior to invoking the build procedure.

Since this advice, check then build, is the exact opposite of the
current recommendations, build then check, it probably needs to be
changed.

R CMD check --help (and other spots) refer to the command as
"Check R packages from package sources, which can be directories or
gzipped package 'tar' archives with extension '.tar.gz' or '.tgz'."  I
read this as referring to my source directory, although I guess other
readings are possible (i.e., package source = the source bundle as
distributed).

> 
> RossB> In my case, some of the excluded files contain references to other
> RossB> libraries, so linking fails under R CMD check.  I realize I could 
> add
> RossB> the library to the build (with Makevars, I guess), but I do not 
> want
> RossB> to introduce the dependency.
> 
> It depends on the circumstances on how I would solve this
> problem 
> [Why have these files as part of the package sources at all?]

I have 2 kinds of checks, those at the R level, which get executed as
part of R CMD check, and C++ unit tests, which I execute separately.
The latter use the boost test library, part of which is a link-time
library that ordinary users should not need.

All the C++ sources come from a web (as in Knuth's web) file, so the
main files and the testing files all get produced at once.

I worked around the problem by using the top level config script to
delete the test files.  I suppose I could also look into moving those
files into another directory when they are produced, but that would
further complicate my build system.

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R CMD check ignores .Rbuildignore? [correction]

2007-03-19 Thread Ross Boylan
On Mon, Mar 19, 2007 at 10:38:02AM -0700, Ross Boylan wrote:
> Kurt Hornick, offlist, also advised this, as well as noting that using
Sorry.  That should be "Kurt Hornik."
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Rmpi and OpenMPI ?

2007-03-29 Thread Ross Boylan
On Tue, 2007-03-27 at 21:34 -0500, Dirk Eddelbuettel wrote:
> Has anybody tried to use Rmpi with the OpenMPI library instead of LAM/MPI?
> 
> LAM appears to be somewhat hardcoded in the Rmpi setup. Before I start to
> experiment with changing this, has anybody else tried Rmpi with non-LAM MPI
> implementations?
http://www.stats.uwo.ca/faculty/yu/Rmpi/, which seems to be the package
author's page, describes using MPICH under MS Windows.  I hope that's a
sign that Rmpi is not too tied to LAM.

We're using LAM here, but interested in OpenMPI in hopes of better
integration with Sun Grid Engine.

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 generic surprise

2007-03-29 Thread Ross Boylan
I discovered the following behavior when source'ing the same file
repeatedly as I edited it.  My generic stopped acting like a generic.  I
can't tell from the docs what, if any, behavior is expected in this
case.  R 2.4.0

> foo <- function(object) 3
> isGeneric("foo")
[1] FALSE
> setMethod("foo", "matrix", function(object) 4)
Creating a new generic function for "foo" in ".GlobalEnv"
[1] "foo"
> foo(0)
[1] 3
> foo(matrix(0))
[1] 4
> isGeneric("foo")
[1] TRUE
# The next step is where things start to go astray
> foo <- function(object) 2
> isGeneric("foo")
[1] TRUE
> setMethod("foo", "matrix", function(object) 40)
[1] "foo"
> foo(0)
[1] 2
# isGeneric is TRUE, but method lookup no longer works
# I think the cause is that the function foo that used
# to do the dispatch has been overwritten by my plain old
# function that returns 2.
> foo(matrix(0))
[1] 2
> removeGeneric("foo")
[1] FALSE
Warning message:
generic function "foo" not found for removal in: removeGeneric("foo") 
> isGeneric("foo")
[1] TRUE

My mental model and R's diverged at the point I overwrote foo with a
regular function (foo <- function(object) 2).  At this point I thought R
would know that the function was no longer generic, and then would
rebuild the generic at the next setMethod.  Instead, R thought the
function remain generic, and so did not rebuild it at the next
setMethod.

If I had practiced the recommended style, I would have done
foo<-function(object) 2
setGeneric("foo")
and all would have been well.  So that's what I'll do.
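Spelled out as a complete sketch, that recommended sequence is:

```r
foo <- function(object) 2            # plain function, the default behavior
setGeneric("foo")                    # explicitly rebuild the generic
setMethod("foo", "matrix", function(object) 40)

stopifnot(foo(0) == 2)               # non-matrix falls through to default
stopifnot(foo(matrix(0)) == 40)      # dispatch works again
```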

I thought I'd report this in case others run into it, or somebody
considers this a matter that calls for documentation or R behavior
changes.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Replacing slot of S4 class in method of S4 class?

2007-03-30 Thread Ross Boylan
On Fri, Mar 30, 2007 at 10:45:38PM +0200, cstrato wrote:
> Dear all,
> 
> Assume that I have an S4 class "MyClass" with a slot "myname", which
> is initialized to:  myname="" in method("initialize"):
>myclass <- new("MyClass", myname="")
> 
> Assume that class "MyClass" has a method "mymethod":
>"mymethod.MyClass" <-
>function(object, myname=character(0), ...) {
>object@myname <- myname;
>#or:myName(object) <- myname
>}
>setMethod("mymethod", "MyClass", mymethod.MyClass);
> 
> Furthermore, I have a replacement method:
> setReplaceMethod("myName", signature(object="MyClass", value="character"),
>function(object, value) {
>   object@myname <- value;
>   return(object);
>}
> )
> 
> I know that it is possible to call:
>myName(myclass) <- "newname"
> 
> However, I want to replace the value of slot "myname" for object "myclass"
> in method "mymethod":
>mymethod(myclass,  myname="newname")
> 
> Sorrowly, the above code in method "mymethod" does not work.
> 
> Is there a possibility to change the value of a slot in the method of a 
> class?

Yes, but to make the effect persistent (visible might be a more
accurate description) that method must return the
object being updated, and you must use the return value.  R uses call
by value semantics, so in the definition of mymethod.MyClass when you
change object you only change a local copy.  It needs to be
 "mymethod.MyClass" <-
function(object, myname=character(0), ...) {
object@myname <- myname;
object
  }

Further, if you invoke it with
mymethod(myclass, "new name")
you will discover myclass is unchanged.  You need
myclass <- mymethod(myclass, "new name")
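Putting the pieces together, here is a self-contained sketch of the point (simplified slot handling, not your exact code):

```r
setClass("MyClass", representation(myname = "character"))

setGeneric("mymethod",
           function(object, myname) standardGeneric("mymethod"))
setMethod("mymethod", "MyClass", function(object, myname) {
  object@myname <- myname   # changes only the local copy
  object                    # so the updated object must be returned
})

obj <- new("MyClass", myname = "")
mymethod(obj, "new name")          # return value discarded...
stopifnot(obj@myname == "")        # ...so obj is unchanged
obj <- mymethod(obj, "new name")   # capture the return value
stopifnot(obj@myname == "new name")
```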

You might consider using the R.oo package, which probably has
semantics closer to what you're expecting.  Alternately, you could
study more about R and functional programming.

Ross Boylan

P.S. Regarding the follow up saying that this is the wrong list, the
guide to mailing lists says of R-devel
"This list is intended for questions and discussion about code
development in R. Questions likely to prompt discussion unintelligible
to non-programmers or topics that are too technical for R-help's
audience should go to R-devel,"
The question seems to fall under this description to me, though I am
not authoritative.  It is true that further study would have disclosed
what is going on.  Since the same thing tripped me up too, I thought
I'd share the answer.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Rmpi and OpenMPI ?

2007-03-30 Thread Ross Boylan
On Fri, Mar 30, 2007 at 03:01:19PM -0500, Dirk Eddelbuettel wrote:
> 
> On 30 March 2007 at 12:48, Ei-ji Nakama wrote:
> | Prof. Nakano(ism Japan) and I wrestled in Rmpi on HP-MPI.
> | Do not know a method to distinguish MPI well?
> | It is an ad-hoc patch at that time as follows.

There are some autoconf snippets for figuring out how to compile
various MPI versions; it's not clear to me they are much help in
figuring out which version you've got.  Perhaps they are some help:
http://autoconf-archive.cryp.to/ax_openmp.html
http://autoconf-archive.cryp.to/acx_mpi.html

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] future plans for missing() in inner functions

2007-04-03 Thread Ross Boylan
Currently, if one wants to test if an argument to an outer function is
missing from within an inner function, this works:
> g5 <- function(a) {
+   inner <- function(a) {
+ if (missing(a))
+   "outer arg is missing"
+ else
+   "found outer arg!"
+   }
+   inner(a)
+ }
> g5(3)
[1] "found outer arg!"
> g5()
[1] "outer arg is missing"

While if inner is defined as function() (without arguments) one gets
an error: 'missing' can only be used for arguments.

However, ?missing contains a note that this behavior is subject to
change.

I'm particularly interested in whether the code shown above will
continue to work.  While it does what I want in this case, the
behavior seems a bit surprising since textually the call to inner does
provide an argument.  So it seems possible that might change.

Can anyone provide more insight into how things may change?

Thanks.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] undefined symbol: Rf_rownamesgets

2007-04-17 Thread Ross Boylan
I get the error
 undefined symbol: Rf_rownamesgets
when I try to load my package, which include C++ code that calls that
function.  This is particularly strange since the code also calls
Rf_classgets, and it loaded OK with just that.

Can anyone tell me what's going on?  

For the record, I worked around this with the general purpose
attribute setting commands and R_RowNamesSymbol.  I discovered that
even with that I wasn't constructing a valid data.frame, and fell back
to returning a list of results.

I notice Rinternals.h defines
LibExtern SEXP  R_RowNamesSymbol;   /* "row.names" */
twice in the same block of code.

I'm using R 2.4.1 on Debian. The symbol seems to be there:
$ nm -D /usr/lib/R/lib/libR.so | grep classgets
00032e70 T Rf_classgets
$ nm -D /usr/lib/R/lib/libR.so | grep namesgets
00031370 T Rf_dimnamesgets
00034500 T Rf_namesgets

The source includes
#define R_NO_REMAP 1
#include <R.h>
#include <Rinternals.h>
and later
#include   // I think this is why I needed R_NO_REMAP

I realize this is not a complete example, but I'm hoping this will
ring a bell with someone.  I encountered this while running
R CMD check.  The link line generated was
g++ -shared  -o mspath.so AbstractTimeStepsGenerator.o Coefficients.o 
CompositeHistoryComputer.o CompressedTimeStepsGenerator.o Covariates.o Data.o 
Environment.o Evaluator.o FixedTimeStepsGenerator.o LinearProduct.o Manager.o 
Model.o ModelBuilder.o Path.o PathGenerator.o PrimitiveHistoryComputer.o 
SimpleRecorder.o Specification.o StateTimeClassifier.o SuccessorGenerator.o 
TimePoint.o TimeStepsGenerator.o mspath.o mspathR.o   -L/usr/lib/R/lib -lR

Thanks.
Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] undefined symbol: Rf_rownamesgets

2007-04-17 Thread Ross Boylan
On Tue, Apr 17, 2007 at 11:07:12PM -0400, Duncan Murdoch wrote:
> On 4/17/2007 10:43 PM, Ross Boylan wrote:
> >I get the error
> > undefined symbol: Rf_rownamesgets
> >when I try to load my package, which include C++ code that calls that
> >function.  This is particularly strange since the code also calls
> >Rf_classgets, and it loaded OK with just that.
> >
> >Can anyone tell me what's going on?  
> >
> >For the record, I worked around this with the general purpose
> >attribute setting commands and R_RowNamesSymbol.  I discovered that
> >even with that I wasn't constructing a valid data.frame, and fell back
> >to returning a list of results.
> >
> >I notice Rinternals.h defines
> >LibExtern SEXP   R_RowNamesSymbol;   /* "row.names" */
> >twice in the same block of code.
> >
> >I'm using R 2.4.1 on Debian. The symbol seems to be there:
> >$ nm -D /usr/lib/R/lib/libR.so | grep classgets
> >00032e70 T Rf_classgets
> >$ nm -D /usr/lib/R/lib/libR.so | grep namesgets
> >00031370 T Rf_dimnamesgets
> >00034500 T Rf_namesgets
> 
> I don't see Rf_rownamesgets there, or in the R Externals manual among 
> the API entry points listed.  
You're right; sorry.  So does this function just not exist?  If so,
it would be good to remove the corresponding entries in Rinternals.h.

>Can't you use the documented dimnamesgets?
I did one better and didn't use anything!  I thought presence in
Rinternals.h constituted (terse) documentation, since the R Externals
manual says ("Handling R objects in C")
---
   There are two approaches that can be taken to handling R objects from
within C code.  The first (historically) is to use the macros and
functions that have been used to implement the core parts of R through
`.Internal' calls.  A public subset of these is defined in the header
file `Rinternals.h' ...
-

So is relying on Rinternals.h a bad idea?

In this case, accessing the row names through dimnamesgets looks a
little awkward, since it requires navigating to the right spot in
dimnames.  I would need Rf_dimnamesgets since I disabled the shortcut
names.

Ross



[Rd] Reported invalid memory references

2007-06-13 Thread Ross Boylan
While testing for leaks in my own code I noticed some reported memory
problems from valgrind, invoked with 
$ R --vanilla -d "valgrind --leak-check=full"

This is on Debian GNU/Linux (testing aka lenny) with a 2.6 kernel, R
package version 2.4.1-2.  I was running in an emacs shell.

The immediate source of all the problems before I get to the prompt is
the system dynamic loader ld-2.5.so, invoked from R.  Then, when I exit,
there are a bunch of reported leaks, some of which appear to be more
directly from R (though some involve, e.g., readline).

Are these reported errors actually problems?  If so, do they indicate
problems in R or some other component (e.g., ld.so).  Put more
practically, should I file one or more bugs, and if so, against what?

Thanks.
Ross Boylan

==30551== Invalid read of size 4
==30551==at 0x4016503: (within /lib/ld-2.5.so)
==30551==by 0x4006009: (within /lib/ld-2.5.so)
==30551==by 0x40084F5: (within /lib/ld-2.5.so)
==30551==by 0x40121D4: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4414494: __libc_dlopen_mode
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF73E: __nss_lookup_function
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF82F: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43F1595: __nss_passwd_lookup
(in /lib/i686/cmov/libc-2.5.so)
==30551==  Address 0x4EFB560 is 32 bytes inside a block of size 34
alloc'd
==30551==at 0x40234B0: malloc (vg_replace_malloc.c:149)
==30551==by 0x4008AF3: (within /lib/ld-2.5.so)
==30551==by 0x40121D4: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4414494: __libc_dlopen_mode
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF73E: __nss_lookup_function
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF82F: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43F1595: __nss_passwd_lookup
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x439D87D: getpwuid_r (in /lib/i686/cmov/libc-2.5.so)
==30551== 
==30551== Invalid read of size 4
==30551==at 0x4016530: (within /lib/ld-2.5.so)
==30551==by 0x4006009: (within /lib/ld-2.5.so)
==30551==by 0x40084F5: (within /lib/ld-2.5.so)
==30551==by 0x400C616: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x400CBDA: (within /lib/ld-2.5.so)
==30551==by 0x4012234: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4414494: __libc_dlopen_mode
(in /lib/i686/cmov/libc-2.5.so)
==30551==  Address 0x4EFB8A8 is 24 bytes inside a block of size 27
alloc'd
==30551==at 0x40234B0: malloc (vg_replace_malloc.c:149)
==30551==by 0x4008AF3: (within /lib/ld-2.5.so)
==30551==by 0x400C616: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x400CBDA: (within /lib/ld-2.5.so)
==30551==by 0x4012234: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4414494: __libc_dlopen_mode
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF73E: __nss_lookup_function
(in /lib/i686/cmov/libc-2.5.so)
==30551== 
==30551== Conditional jump or move depends on uninitialised value(s)
==30551==at 0x400B3CC: (within /lib/ld-2.5.so)
==30551==by 0x401230B: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4414494: __libc_dlopen_mode
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF73E: __nss_lookup_function
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43EF82F: (within /lib/i686/cmov/libc-2.5.so)
==30551==by 0x43F1595: __nss_passwd_lookup
(in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x439D87D: getpwuid_r (in /lib/i686/cmov/libc-2.5.so)
==30551==by 0x439D187: getpwuid (in /lib/i686/cmov/libc-2.5.so)
==30551== 
==30551== Conditional jump or move depends on uninitialised value(s)
==30551==at 0x400B0CA: (within /lib/ld-2.5.so)
==30551==by 0x401230B: (within /lib/ld-2.5.so)
==30551==by 0x400E255: (within /lib/ld-2.5.so)
==30551==by 0x4011C5D: (within /lib/ld-2.5.so)
==30551==by 0x44142E1: (within /lib/i

Re: [Rd] Using R_MakeExternalPtr

2007-07-25 Thread Ross Boylan
See at bottom for an example.
On Wed, 2007-07-25 at 11:26 -0700, Jonathan Zhou wrote:
> Hi Hin-Tak,
> 
> Here is the R code function in where I called the two C++ and further below
> are the 2 C++ functions I used to create the externalptr and use it :
> 
> soam.Rapply <- function (x, func, ...,
>   join.method=cbind,
>   njobs,
>   batch.size=100,
>   packages=NULL,
>   savelist=NULL)
> {
>if(missing(njobs))
>njobs <- max(1,ceiling(nrow(x)/batch.size))
> 
>if(!is.matrix(x) && !is.data.frame(x))
>stop("x must be a matrix or data frame")
> 
>if(njobs>1)
>{rowSet <- lapply(splitIndices(nrow(x), njobs), function(i) x[i, ,
> drop = FALSE])} else {rowSet <- list(x)}
> 
>sesCon <- .Call("soamInit")
> 
>script <- " "
> 
>fname <- tempfile(pattern = "Rsoam_data", tmpdir = getwd())
>file(fname, open="w+")
>if(!is.null(savelist)) {
>dump(savelist, fname)
>script<-readLines(fname)
>}
> 
>if(!is.null(packages))
>for(counter in 1:length(packages))
>{
>temp<-call("library", packages[counter], character.only=TRUE)
>dput(temp, fname)
>pack.call<-readLines(fname)
>script<-append(script, pack.call)
>}
> 
>for(counter in 1:njobs)
>{
>caller <- paste("caller", counter, sep = "")
>soam.call<-call("dput", call("apply", X=rowSet[[counter]], MARGIN=1,
> FUN=func), caller)
>dput(soam.call, fname)
>soam.call<-readLines(fname)
> 
>temp<-append(script, soam.call)
>final.script = temp[1]
>for(count in 2:length(temp)){
>final.script<-paste(final.script, temp[count], "\n")}
> 
>.Call("soamSubmit", counter, sesCon, final.script, packages)
>}
> 
>.Call("soamGetResults", sesCon, njobs, join.method, parent.frame())
> 
>for(job in 1:njobs)
>{
>caller <- paste("result", job, sep = "")
>temp = dget(caller)
>if(job==1) {retval=temp} else {retval=join.method(retval,temp)}
>}
> 
>.Call("soamUninit")
> 
>retval
> }
> 
> *** Here are the 2 C++ functions:
> 
> extern "C"
> {
> SEXP soamInit ()
> {
>// Initialize the API
>SoamFactory::initialize();
> 
>// Set up application specific information to be supplied to Symphony
>char appName[] = "SampleAppCPP";
> 
>// Set up application authentication information using the default
> security provider
>DefaultSecurityCallback securityCB("Guest", "Guest");
> 
>// Connect to the specified application
>ConnectionPtr conPtr = SoamFactory::connect(appName, &securityCB);
> 
>// Set up session creation attributes
>SessionCreationAttributes attributes;
>attributes.setSessionName("mySession");
>attributes.setSessionType("ShortRunningTasks");
>attributes.setSessionFlags(SF_RECEIVE_SYNC);
> 
>// Create a synchronous session
>Session* sesPtr = conPtr->createSession(attributes);
// I use Rf_protect, though I'd be surprised if that matters given your
use
> 
>SEXP out = R_MakeExternalPtr((void*)temp, R_NilValue, R_NilValue);
> 
// temp?  don't you mean sesPtr?
>return out;
> }
> }
> 
> extern "C"
> {
>  void soamSubmit   (SEXP jobID,//job ID
> SEXP sesCon,   //session pointer
> SEXP caller,   //objects
> SEXP pack) //packages
> {
>char* savelist = CHAR(STRING_ELT(caller, 0));
>string strTemp = "";
>int job = INTEGER(jobID)[0];
> 
>void* temp = R_ExternalPtrAddr(sesCon);
>Session* sesPtr = reinterpret_cast<Session*>(temp);
> 
>// Create a message
>MyMessage inMsg(job, /*pack,*/ savelist);
> 
>// Send it
>TaskInputHandlePtr input = sesPtr->sendTaskInput(&inMsg);
> }
> }

I've been able to get things working with the following pattern (which also
ensures the memory is freed):
// I needed R_NO_REMAP to avoid name collisions.  You may not.
#define R_NO_REMAP 1
#include <R.h>
#include <Rinternals.h>

extern "C" {
// returns an |ExternalPtr|
SEXP makeManager(
@);


// user should not need to call
// cleanup
void finalizeManager(SEXP ptr);

}

SEXP makeManager(
@){
//  stuff

Manager* pmanager = new Manager(pd, pm.release(), 
*INTEGER(stepNumerator), *INTEGER(stepDenominator),
(*INTEGER(isexact)) != 0);

// one example didn't use |PROTECT()|
SEXP ptr;
Rf_protect(ptr = R_MakeExternalPtr(pmanager, R_NilValue,
R_NilValue));
R_RegisterCFinalizer(ptr, (R_CFinalizer_t) finalizeManager);
Rf_unprotect(1);
return ptr;

}

void finalizeManager(SEXP ptr){
  Manager *pmanager = static_cast<Manager*>(R_ExternalPtrAddr(ptr));
  delete pmanager;
  R_ClearExternalPtr(ptr);
}

I'd love to hear from those more knowledgeable about whether I did
that right, and whether the F

[Rd] codetools really optional for R CMD check?

2007-07-25 Thread Ross Boylan
After upgrading to R 2.5.1 on Debian, R CMD check gives
* checking Rd cross-references ... WARNING
Error in .find.package(package, lib.loc) : 
there is no package called 'codetools'
Execution halted
* checking for missing documentation entries ... WARNING
etc

The NEWS file says (for 2.5.0; I was on 2.4 before the recent upgrade)
o   New recommended package 'codetools' by Luke Tierney provides
code-analysis tools.  This can optionally be used by 'R CMD
check' to detect problems, especially symbols which are not
visible.

This sounds as if R CMD check should run OK without the package, and it
doesn't seem to.

Have I misunderstood something, or is there a problem with R CMD check's
handling of the case where codetools is missing?

I don't have codetools installed because the Debian r-recommended
package was missing a dependency; I see that's already been fixed
(wow!).

Ross



[Rd] "could not find function" in R CMD check

2007-09-11 Thread Ross Boylan
During R CMD check I get this:
** building package indices ...
Error in eval(expr, envir, enclos) : could not find function
"readingError"
Execution halted
ERROR: installing package indices failed

The check aborts there.  readingError is a function I just added; for
reference
setClass("readingError", contains="matrix")
readingError <- function(...) new("readingError", matrix(...))
which is in readingError.R in the project's R subdirectory.

Some code in the data directory invokes readingError, and the .Rd file
includes \alias{readingError}, \alias{readingError-class},
\name{readingError-class} and an example invoking readingError.

I'm using R 2.5.1 as packaged for Debian GNU/Linux.

Does anyone have an idea what's going wrong here, or how to fix or debug
it?

The code seems to work OK when I use it from ESS.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] "could not find function" in R CMD check [solved, but is this an R bug?]

2007-09-12 Thread Ross Boylan
On Wed, 2007-09-12 at 11:31 +0200, Uwe Ligges wrote:
> Perhaps Namespace issues? But no further ideas. You might want to make 
> your package available at some URL so that people can look at it and help...
> 
> Uwe Ligges
> 
Thanks.  The problem lay elsewhere.  I was able to fix it by adding 
library(mspath)
to the top of the .R file in data/ that defined some data using the
package's functions.  In other words
---
library(mspath)
### from various papers on errors in reading fibrosis scores
rousselet.jr <- readingError(c(.91, .09, 0, 0, 0,
   .11, .78, .11, 0, 0,
   0, .17, .75, .08, 0,
   0, .06, .44, .50, 0,
   0, 0, 0, .07, .93),
 byrow=TRUE, nrow=5, ncol=5)
-
works as data/readingErrorData.R, but without the library() call I  get
the error shown in my original message (see below).  readingError() is a
function defined in my package.

Does any of this indicate a bug or undesirable feature in R?

First, it seems a little odd that I need to load the package in a data
file that is defined in the same package.  I think I've noticed
similar behavior in other places, maybe the code snippets that accompany
the documentation pages (that is, one needs library(mypackage) in order
for the snippets to check out).

Second, should R CMD check fail so completely and opaquely in this
situation?


> 
> Ross Boylan wrote:
> > During R CMD check I get this:
> > ** building package indices ...
> > Error in eval(expr, envir, enclos) : could not find function
> > "readingError"
> > Execution halted
> > ERROR: installing package indices failed
> > 
> > The check aborts there.  readingError is a function I just added; for
> > reference
> > setClass("readingError", contains="matrix")
> > readingError <- function(...) new("readingError", matrix(...))
> > which is in readingError.R in the project's R subdirectory.
> > 
> > Some code in the data directory invokes readingError, and the .Rd file
> > includes \alias{readingError}, \alias{readingError-class},
> > \name{readingError-class} and an example invoking readingError.
> > 
> > I'm using R 2.5.1 as packaged for Debian GNU/Linux.
> > 
> > Does anyone have an idea what's going wrong here, or how to fix or debug
> > it?
> > 
> > The code seems to work OK when I use it from ESS.



[Rd] profiling C code

2006-02-20 Thread Ross Boylan
Does anyone have any advice about profiling C/C++ code in a package
under R?  Does R need to be built specially for this to work?

The FAQ has some entries about profiling but they cover R level
profiling; I'm try to get at the C++ code I've written that is called
from R.

Primary target is Mac OS X.

Thanks.

Ross Boylan



[Rd] S4 classes and C

2006-05-18 Thread Ross Boylan
Is there any good source of information on how S4 classes (and methods)
work from C?

E.g., for reading
how to read a slot value
how to invoke a method
how to test if you have an s4 object

For writing, how to make a new instance of an S4 object.

I've found scattered hints in the archive, including a link to a talk on
this subject "I am using C code to create an S4 object based on Douglas
Bates's example in his lecture notes on
<http://www.ci.tuwien.ac.at/Conferences/DSC-2003/Tutorials/RExtensions/slides.pdf>"
but the link is no longer good (even after putting it all on one line).

"Writing R Extensions" (2.3.0 version) makes no reference to this topic
that I can see.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] S4 classes and C

2006-05-18 Thread Ross Boylan
On Thu, 2006-05-18 at 13:53 -0400, McGehee, Robert wrote:
> I believe the paper on which those lecture notes were based can be found
> here:
> http://www.ci.tuwien.ac.at/Conferences/DSC-2003/Drafts/BatesDebRoy.pdf 
> 
Thank you.  It looks as if it has some useful stuff in it.
Ross



[Rd] Recommended style with calculator and persistent data

2006-05-18 Thread Ross Boylan
I have some calculations that require persistent state.  For example,
they retain most of the data across calls with different parameters.
They retain parameters across calls with different subsets of the cases
(this is for distributed computation).  They retain early analysis of
the problem to speed later computations.

I've created an S4 object, and the stylized code looks like this
calc <- makeCalculator(a, b, c)
calc <- setParams(calc, d, e)
calc <- compute(calc)
results <- results(calc)
The client code (such as that above) must remember to do the
assignments, not just invoke the functions.

I notice this does not seem to be the usual style, which is more like
results <- compute(calc)

and possibly using assignment operators like
params(calc) <- x
(actually, I have a call like that, but some of the updates take
multiple arguments).

Another route would be to use lexical scoping to bundle all the
functions together (umm, I'm not sure how that would work with S4
methods) to ensure persistence without requiring assignment by the
client code.  Obviously this would decrease portability to S, but I'm
already using lexical scope a bit.

Is there a recommended R'ish way to approach the problem?  My current
approach works, but I'm concerned it is non-standard, and so would be
unnatural for users.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062



Re: [Rd] S4 classes and C

2006-05-19 Thread Ross Boylan
On Fri, 2006-05-19 at 11:46 +0200, Martin Maechler wrote:
> >>>>> "Seth" == Seth Falcon <[EMAIL PROTECTED]>
> >>>>> on Thu, 18 May 2006 12:22:36 -0700 writes:
> 
> Seth> Ross Boylan <[EMAIL PROTECTED]> writes:
> >> Is there any good source of information on how S4 classes (and methods)
> >> work from C?
> 
> Hmm, yes; there's nothing in the "Writing R Extensions" manual,
> and there's not so much in the ``The Green Book''
> (Chambers 1998), which is prominently cited by Doug Bates (and Saikat Debroy)
> in the paper given at DSC 2003 (mentioned earlier in this
> thread, 
> http://www.ci.tuwien.ac.at/Conferences/DSC-2003/Drafts/BatesDebRoy.pdf )

Particularly relevant is Appendix A.6 of the Green book.  The index
isn't particularly helpful finding it.  Of course, Appendix A can't
answer the question of how the R implementation might differ.

For example, A.6 discusses GET_SLOT_OFFSET, which does not appear to be
available in R.

Thanks for the pointers.



[Rd] S4 accessors

2006-09-25 Thread Ross Boylan
I have a small S4 class for which I've written a page grouping many of
the accessors and replacement functions together.  I would be interested
in people comments on the approach I've taken.

The code has a couple of decisions for which I could imagine
alternatives.  First, even simple get/set operations on class elements
are wrapped in functions.  I suppose I could just use obj@slot to
do some of these operations, though that is considered bad style in more
traditional OO contexts.

Second, even though the functions are tied to the class, I've defined
them as free functions rather than methods.  I suppose I could create a
generic that would reject most arguments, and then make methods
appropriately.

For the documentation, I've created a single page that groups many of
the functions together.  This is a bit awkward, since the return values
are necessarily the same.  Things are worse for replacement functions;
as I understand it, they must use "value" for their final argument, but
the value has different meanings and types in different contexts.

Any suggestions or comments?

I've attached the .Rd file in case more specifics would help.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062
\name{runTime-accessors}
\alias{startTime}
\alias{endTime}
\alias{wallTime}
\alias{waitTime}
\alias{cpuTime}
\alias{mpirank}
\alias{mpirank<-}
\alias{remoteTime<-}
\title{ Accessors for runTime class}
\description{
  Set and get runTime related information.
}
\usage{
startTime(runTime)
endTime(runTime)
wallTime(runTime)
waitTime(runTime)
cpuTime(runTime)
mpirank(runTime)

mpirank(runTime) <- value
remoteTime(runTime) <- value
}
%- maybe also 'usage' for other objects documented here.
\arguments{
  \item{runTime}{a \code{runTime} object}
  \item{value}{for \code{mpirank}, the MPI rank of the associated job.
For \code{remoteTime}, a vector of statistics from the remote processor:
user cpu, system cpu, wall clock time for the main job, and wall clock time
waiting for the root process.}
}
\details{
  All times are measured from start of job.  The sequence of events is
  that the job is created locally, started remotely, finished remotely,
  and completed locally.  Scheduling and transmission delays may occur.
  
  \describe{
    \item{startTime}{When the job was created, locally.}
    \item{endTime}{When the job finished locally.}
    \item{wallTime}{How many seconds elapsed between local start and completion.}
    \item{cpuTime}{Remote cpu seconds used, both system and user.}
    \item{waitTime}{Remote seconds spent waiting for a response from the local
      system after the remote computation finished.}
    \item{mpirank}{The rank of the execution unit that handled the remote
      computation.}
  }
}
\value{
  Generally seconds, at a system-dependent resolution.
  \code{mpirank} is an integer.
  Replacement functions return the \code{runTime} object itself.
}
\author{Ross Boylan}
\note{Clients that use replacement functions should respect the semantics above.
}
\seealso{\code{\link{runTime-class}}}
\keyword{programming}
\keyword{environment}


Re: [Rd] S4 accessors (corrected)

2006-09-25 Thread Ross Boylan
On Tue, 2006-09-26 at 00:20 +, Ross Boylan wrote:
> I have a small S4 class for which I've written a page grouping many of
> the accessors and replacement functions together.  I would be interested
> in people comments on the approach I've taken.
> 
> The code has a couple of decisions for which I could imagine
> alternatives.  First, even simple get/set operations on class elements
> are wrapped in functions.  I suppose I could just use obj@slot to
> do some of these operations, though that is considered bad style in more
> traditional OO contexts.
> 
> Second, even though the functions are tied to the class, I've defined
> them as free functions rather than methods.  I suppose I could create a
> generic that would reject most arguments, and then make methods
> appropriately.
> 
> For the documentation, I've created a single page that groups many of
> the functions together.  This is a bit awkward, since the return values
> are 
NOT
> necessarily the same.  Things are worse for replacement functions;
> as I understand it, they must use "value" for their final argument, but
> the value has different meanings and types in different contexts.
> 
> Any suggestions or comments?
> 
> I've attached the .Rd file in case more specifics would help.

Sorry!



Re: [Rd] S4 accessors

2006-09-26 Thread Ross Boylan
On Tue, 2006-09-26 at 10:43 -0700, Seth Falcon wrote:
> Ross Boylan <[EMAIL PROTECTED]> writes:

> >> If anyone else is going to extend your classes, then you are doing
> >> them a disservice by not making these proper methods.  It means that
> >> you can control what happens when they are called on a subclass. 
> 
> > My style has been to define a function, and then use setMethod if I want
> > to redefine it for an extension.  That way the original version becomes
> > the generic.
> >
> > So I don't see what I'm doing as being a barrier to adding methods.  Am
> > I missing something?
> 
> You are not, but someone else might be: suppose you release your code
> and I would like to extend it.  I am stuck until you decide to make
> generics.
This may be easier to do concretely.
I have an S4 class A.
I have defined a function foo that only operates on that class.
You make a class B that extends A.
You wish to give foo a different implementation for B.

Does anything prevent you from doing 
setMethod("foo", "B", function(x) blah blah)
(which is the same thing I do when I make a subclass)?
This turns my original foo into the catchall method.

Of course, foo is not appropriate for random objects, but that was true
even when it was a regular function.

> 
> > Originally I tried defining the original using setMethod, but this
> > generates a complaint about a missing function; that's one reason I fell
> > into this style.
> 
> You have to create the generic first if it doesn't already exist:
> 
>setGeneric("foo", function(x) standardGeneric("foo"))
I wonder if it might be worth changing setMethod so that it does this by
default when no such function already exists.  Personally, that would fit
the style I'm using better.
> 
> >> For accessors, I like to document them in the methods section of the
> >> class documentation.
> 
> > This is for accessors that really are methods, not my fake
> > function-based accessors, right?
> 
> Which might be a further argument not to have the distinction in the
> first place ;-)
> 
> To me, simple accessors are best documented with the class.  If I have
> an instance, I will read help on it and find out what I can do with
> it.  
> 
> > If you use foo as an accessor method, where do you define the associated
> > function (i.e., \alias{foo})? I believe such a definition is expected by
> > R CMD check and is desirable for users looking for help on foo (?foo)
> > without paying attention to the fact it's a method.
> 
> Yes you need an alias for the _generic_ function.  You can either add
> the alias to the class man page where one of its methods is documented
> or you can have separate man pages for the generics.  This is
> painful.  S4 documentation, in general, is rather difficult and IMO
> this is in part a consequence of the more general (read more powerful)
> generic function based system.
As my message indicates, I too am struggling with an appropriate
documentation style for S4 classes and methods.  Since "Writing R
Extensions" has said "Structure of and special markup for documenting S4
classes and methods are still under development." for as long as I can
remember, perhaps I'm not the only one.

Some of the problem may reflect the tension between conventional OO and
functional languages, since R remains the latter even under S4.  I'm not
sure if it's the tools or my approach that is making things awkward; it
could be both!

Ross



Re: [Rd] S4 accessors

2006-09-27 Thread Ross Boylan
I'm trying to understand what the underlying issues are here--with the
immediate goal of how that affects my design and documentation
decisions.

On Wed, Sep 27, 2006 at 02:08:34PM -0400, John Chambers wrote:
> Seth Falcon wrote:
> > John Chambers <[EMAIL PROTECTED]> writes:
> >
> >   
> >> There is a point that needs to be remembered in discussions of
> >> accessor functions (and more generally).
> >>
> >> We're working with a class/method mechanism in a _functional_
> >> language.  Simple analogies made from class-based languages such as
> >> Java are not always good guides.
> >>
> >> In the example below, "a function foo that only operates on that
> >> class" is not usually a meaningful concept in R.   

The sense of "meaningful" here is hard for me to pin down, even with
the subsequent discussion.

I think the import is more than formal: R is not strongly typed, so
you can hand any argument to any function and the language will not
complain.

> >> 
> >
> > If foo is a generic and the only method defined is for class Bar, then
> > the statement seems meaningful enough?
> >   
> 
> This is not primarily a question about implementation but about what the 
> user understands.   IMO, a function should have an intuitive meaning to 
> the user.  Its name is taking up a "global" place in the user's brain, 
> and good software design says not to overload users with too many 
> arbitrary names to remember.

It's true that clashing uses of the same name may lead to confusion,
but that need not imply that functions must be applicable to all
objects.  Many functions only make sense in particular contexts, and
sometimes those contexts are quite narrow.

One of the usual motivations for an OO approach is precisely to limit
the amount of global space taken up by, for example, functions that
operate on the class (global in both the syntactic sense and in the
inside your brain sense).  Understanding a traditional OO system, at
least for me, is fundamentally oriented to understanding the objects
first, with the operations on them as auxiliaries.  As you point out,
this is just different from the orientation of a functional language,
which starts with the functions.

> 
> To be a bit facetious, if "flag" is a slot in class Bar, it's really not 
> a good idea to define the accessor for that slot as
>  flag <- function(object) object@flag
> 
> Nor is the situation much improved by having flag() be a generic, with 
> the only method being for class Bar.  We're absconding with a word that 
> users might think has a general meaning.  OK, if need be we will have 
> different flag() functions in different packages that have _different_ 
> intuitive interpretations, but it seems to me that we should try to 
> avoid that problem when we can.
> 
> OTOH, it's not such an imposition to have accessor functions with a 
> syntax that includes the name of the slot in a standardized way:
>   get_flag(object)
> (I don't have any special attachment to this convention, it's just there 
> for an example)

I don't see why get_flag differs from flag; if "flag" lends itself to
multiple interpretations or meanings, wouldn't "get_flag" have the
same problem?

Or are you referring to the fact that "flag" sounds as if it's a verb
or action?  That's a significant ambiguity, but there's nothing about
it that is specific to a functional approach.

>   
> >   
> >> Functions are first-class objects and in principle every function
> >> should have a "function", a purpose.  Methods implement that purpose
> >> for particular combinations of arguments.
> >>

If this is a claim that every function should make sense for every
object, it's asking too much.  If it's not, I don't really see how a
function can avoid having a purpose.  The purpose of accessor
functions is to get or set the state of the object.

> >> Accessor functions are therefore a bit anomalous.  
> >> 
> >
> > How?  A given accessor function has the purpose of returning the
> > expected data "contained" in an instance.  It provides an abstract
> > interface that decouples the structure of the class from the data it
> > needs to provide to users.
> >   
> 
> See above.  That's true _if_ the name or some other syntactic sugar 
> makes it clear that this is indeed an accessor function, but not otherwise.

Aside from the fact that I don't see why get_flag is so different from
flag, the syntactic sugar argument has another problem.  The usually
conceived purpose of accessors is to hide from the client the
internals of the object.  To take an example that's pretty close to
one of my classes, I want startTime, endTime, and duration.
Internally, the object only needs to hold 2 of these quantities to get
the 3rd, but I don't want the client code to be aware of which choice
I made.  In particular, I don't want the client code to change from 
duration to get_duration if I switch to a representation that stores
the duration as a slot.
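A minimal S4 sketch of that point (the class name Interval and the choice
of slots are hypothetical, just for illustration): the accessor generic
hides whether duration is stored or derived, so client code is unaffected
by a change of representation.

```r
# Hypothetical sketch: store startTime/endTime, derive duration.
setClass("Interval", representation(startTime = "numeric",
                                    endTime   = "numeric"))
setGeneric("duration", function(object) standardGeneric("duration"))
setMethod("duration", "Interval",
          function(object) object@endTime - object@startTime)

iv <- new("Interval", startTime = 1, endTime = 4)
duration(iv)  # 3 -- the client never learns which 2 of the 3 are slots
```

Switching the slots to, say, startTime and duration would only require a
new duration method; every call site stays duration(iv), not get_duration(iv).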

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

[Rd] documenation duplication and proposed automatic tools

2006-10-02 Thread Ross Boylan
I've been looking at documenting S4 classes and methods, though I have a
feeling many of these issues apply to S3 as well.

My impression is that the documentation system requires or recommends
creating basically the same information in several places.  I'd like to
explain that, see if I'm correct, and suggest that a more automated
framework might make life easier.

PROBLEM

Consider a class A and a method foo that operates on A.  As I understand
it, I must document the generic function foo (or ?foo will not produce a
response) and the method foo (or methods ? foo will not produce a
response).  Additionally, it appears to be recommended that I document
foo in the Methods section of the documentation for class A.  Finally, I
may want to document the method foo with specific arguments
(particularly if it uses "unusual" arguments, but presumably also if the
semantics are different in a class that extends A).

This seems like a lot of work to me, and it also seems error-prone and
subject to synchronization errors.  R CMD check checks vanilla function
documentation for agreement with the code, but I'm not sure that method
documentation in other contexts gets much help.

To complete the picture, suppose there is a another function, "bar",
that operates on A.  B extends A, and reimplements foo, but not bar.

I think the suggestion is that I go back and add the B-flavored method
foo to the general methods documentation for foo.  I also have a choice
whether I should mention bar in the documentation for the class B.  If I
mention it, it's easier for the reader to grasp the whole interface that
B presents.  However, I make it harder to determine which methods
implement new functionality.

SOLUTION

There are a bunch of things users of OO systems typically want to know:
1) the relations between classes
2) the methods implemented by a class (for B, just foo)
3) the interface provided by a class (for B, foo and bar)
4) the various implementations of a particular method

All of these can be discovered dynamically by the user.  The problem is
that the current documentation system attempts to reproduce this dynamic
information in static pages.  The prompt, promptClass and promptMethods
functions generate templates that contain much of the information (or at
least they're supposed to; they seem to miss stuff for me, for example
saying there are no methods when there are methods).  This is helpful,
but has two weaknesses.  First, the class developer must enter very
similar information in multiple places (specifically, function, methods,
and class documentation).  Second, that information is likely to get
dated as the classes are modified and extended.
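As a sketch of what "discovered dynamically" means in practice, the S4
introspection tools already answer all four questions for the A/B/foo/bar
scenario above (the class and generic definitions here are a made-up
reconstruction of that scenario):

```r
setClass("A", representation(x = "numeric"))
setClass("B", contains = "A")
setGeneric("foo", function(obj) standardGeneric("foo"))
setMethod("foo", "A", function(obj) "A's foo")
setMethod("foo", "B", function(obj) "B's foo")  # B reimplements foo
setGeneric("bar", function(obj) standardGeneric("bar"))
setMethod("bar", "A", function(obj) "A's bar")  # B inherits bar

extends("B")             # 1) relations between classes: "B" "A"
existsMethod("bar", "B") # 2) FALSE: B does not reimplement bar itself
hasMethod("bar", "B")    # 3) TRUE: bar is still part of B's interface
showMethods("foo")       # 4) every implementation of foo
```

The existsMethod/hasMethod pair captures exactly the distinction between
items 2 and 3: explicitly defined versus inherited-but-available.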

I think it would be better if the class developer could enter the
information once, and the documentation system assemble it dynamically
when the user asks a question.  For example, if the user asks for
documentation on a class, the resulting page would be constructed by
pulling together the class description, appropriate method descriptions,
and links to classes the focal class extends (as well, possibly, as
classes that extend it).  Similarly, a request for methods could
assemble a page out of the snippets documenting the individual methods,
including links to the relevant classes.

I realize that implementing this is not trivial, and I'm not necessarily
advocating it as a priority.  But I wonder how it strikes people.

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] \link to another package

2006-10-19 Thread Ross Boylan
In the documentation for my package I would like to reference the Rmpi
documentation.  I started with \link{Rmpi}, which caused R CMD check to
complain that it could not resolve the link.  Since Rmpi wasn't loaded,
this isn't surprising.

Ideally the user would see Rmpi, but the link would go to Rmpi's
Rmpi-pkg.  It's not clear to me if this is possible.  I've combined two
documented features to say \link[Rmpi=Rmpi-pkg]{Rmpi}.  Is that OK?

According to the 2.3.1 documentation
\link[Rmpi]{Rmpi} gets me the Rmpi link in the Rmpi package and
\link[=Rmpi-pkg]{Rmpi} gets me Rmpi-pkg.
There is no mention of combining these two syntaxes.

There is also a discussion of \link[Rmpi:bar]{Rmpi}, but that appears to
refer to a *file* bar.html rather than an internal value (i.e.,
\alias{bar}).
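Based on the two documented forms quoted above, one hedged guess at the
combined syntax (unverified against R 2.3.1, and assuming Rmpi-pkg is the
*file* name, per the \link[Rmpi:bar]{Rmpi} behavior just described) would
be the pkg:file form:

```
% display text "Rmpi", target file Rmpi-pkg.html in package Rmpi
\link[Rmpi:Rmpi-pkg]{Rmpi}
```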

Not only R CMD check but some users of the package may not have Rmpi
loaded, so I'd like the documentation to work gracefully in that
situation.  "Gracefully" means that if Rmpi is not loaded the help still
shows; it does not mean that clicking on the link magically produces the
Rmpi documentation.

-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] tail recursion in R

2006-11-14 Thread Ross Boylan
Apparently Scheme is clever and can turn certain apparently recursive
function calls into non-recursive evaluations.

Does R do anything like that?  I could find no reference to it in the
language manual.

What I'm wondering is whether there are desirable ways to express
recursion in R.
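For what it's worth, R does not eliminate tail calls, so a deeply
tail-recursive function still grows the call stack; the usual workaround
is an iterative rewrite.  A sketch (the factorial example is mine, not
from the language manual):

```r
# Tail-recursive form: each call still consumes a stack frame in R.
fact_rec <- function(n, acc = 1) {
  if (n <= 1) acc else fact_rec(n - 1, acc * n)
}

# Iterative rewrite: constant stack depth, same result.
fact_iter <- function(n) {
  acc <- 1
  while (n > 1) { acc <- acc * n; n <- n - 1 }
  acc
}

fact_rec(10)   # 3628800
fact_iter(10)  # 3628800
```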

Thanks.
-- 
Ross Boylan  wk:  (415) 514-8146
185 Berry St #5700   [EMAIL PROTECTED]
Dept of Epidemiology and Biostatistics   fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739 hm:  (415) 550-1062

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Missing values for S4 slots

2006-11-24 Thread Ross Boylan
Using R 2.4, the following fails:
setClass("testc", representation(a="ANY"))
makeC <- function(myarg) new("testc", a=myarg)
makeC()
-> Error in initialize(value, ...) : argument "myarg" is missing,
   with no default

On the other hand, this is OK:
f <- function(a) g(b=a)
g <- function(b) if(missing(b)) "missing" else "valid arg"
g()
-> "missing"

By the way, new("testc") succeeds.

Is there a good way around this?  Since my real example has several
arguments that might be missing, testing for missingness and calling
"new" with appropriate explicit arguments is not an attractive option.

Passing all the arguments in is also not an option, since my call to
"new" includes an argument not in the constructor (i.e., my equivalent
of makeC). 

I suspect there's something I could do to get the constructor
arguments, modify the list (i.e., delete args that were missing and
insert new ones), and do.call(new, myArgList).  Not only am I unsure
how to do that (or if it would work), I'm hoping there's a better way.

What's the best approach to this problem?

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Missing values for S4 slots [One Solution]

2006-11-24 Thread Ross Boylan
On Fri, Nov 24, 2006 at 11:23:14AM -0800, Ross Boylan wrote:
> Using R 2.4, the following fails:
> setClass("testc", representation(a="ANY"))
> makeC <- function(myarg) new("testc", a=myarg)
> makeC()
> -> Error in initialize(value, ...) : argument "myarg" is missing,
>with no default

> 
> I suspect there's something I could do to get the constructor
> arguments, modify the list (i.e., delete args that were missing and
> insert new ones), and do.call(new, myArgList).  Not only am I unsure
> how to do that (or if it would work), I'm hoping there's a better way.

I didn't find a way to get all the arguments easily(*), but manually
constructing the list works.  Here are fragments of the code:

mspathCoefficients <- function(
   aMatrix,
   params,
   offset=0,
   baseConstrVec, 
   covLabels # other args omitted
   ) {
  # innerArgs are used with do.call("new", innerArgs)
  innerArgs <- list(
Class = "mspathCoefficients",
aMatrix = aMatrix,
baseConstrVec = as.integer(baseConstrVec),
params = params
)

# the next block inserts the covLabels argument
# only if it is non-missing
  if (!missing(covLabels)) {
    innerArgs$covLabels <- covLabels
  }
#...
  map <- list() 
# fill in the map
# add it to the arguments
  innerArgs$map <- map
  do.call("new", innerArgs)
}

This calls new("mspathCoefficients", ...) with just the non-missing
arguments.  The constructed object has appropriately "missing" values
in the slots (e.g., character(0) given a slot of type "character").


(*) Inside the function, arg <- list(...) will capture the unnamed
arguments, but I don't have any.  as.list(sys.frame(sys.nframe())) is
closer to what I was looking for, though all the values are promises.
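Another route worth sketching (untested against the real mspath code;
testc is the toy class from the earlier message): match.call() returns
only the arguments that were actually supplied, so missing ones simply
never enter the list and no promises need forcing.

```r
setClass("testc", representation(a = "ANY"))

makeC <- function(a) {
  supplied <- as.list(match.call())[-1]  # only the args actually given
  do.call("new", c(list(Class = "testc"), supplied))
}

makeC()       # fine: slot "a" stays at its prototype value
makeC(a = 5)  # slot "a" set to 5
```

One caveat: match.call() keeps the supplied arguments unevaluated, so
do.call() re-evaluates them; that is harmless for literals but could
matter if a caller passes local variables or side-effecting expressions.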

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Web site link problems (PR#9401)

2006-11-30 Thread Ross Boylan
On Thu, Nov 30, 2006 at 10:59:13AM +0100, Peter Dalgaard wrote:
> [EMAIL PROTECTED] wrote:

> >2. http://www.r-project.org/posting-guide.html includes in the section
> >Surprising behavior and bugs, "make sure you read R Bugs in the R-faq."  
> >The
> >latter is the link http://cran.r-project.org/doc/FAQ/R-FAQ.html#R%20Bugs, 
> >which
> >takes me to the page but not the section.  The link on the FAQ page to that
> >section is http://cran.r-project.org/doc/FAQ/R-FAQ.html#R-Bugs (i.e., no 
> >%20).
> >
> >Desired state: update the link.
> >  
> Yes. (It's only a half-page scroll plus an extra click though...)

The risk is that someone will just conclude it's a bad link and stop
there.

> >You also might want to consider footers on your web pages saying "to report
> >problems with this web page do x".  The pages I looked at didn't have 
> >this
> >info, as far as I can tell.
> >  
> Maybe, if it is easy. The whole bug repository is overdue for 
> replacement, so things that are not critical and/or easy to fix may be 
> left alone... 
> >I hope this is an appropriate place to let you know!
> >  
> It'll do. Don't report other website issues to the bug repository though.
OK.  Where should such reports go?

Ross

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] printing coefficients with text

2006-11-30 Thread Ross Boylan
I want to print the coefficient estimates of a model in a way
as consistent with other output in R as possible. stats provides the
printCoefmat function for doing this, but there is one problem.  I
have an additional piece of textual information I want to put on the
line with the other info on each coefficient.

The documentation for printCoefmat says the first argument must be
numeric, which seems to rule this out.

I just realized I might be able to cheat by inserting the text into
the name of the variable (fortunately there is just one item of
text).  I think that's in the names of the matrix given as the first
argument to the function.

Are there any better solutions?  Obviously I could just copy the
method and modify it, but that creates duplicate code and loses the
ability to track future changes to printCoefmat.
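A minimal sketch of the rownames cheat mentioned above (the coefficient
names and bracketed text are invented): the text rides along in the
dimnames, so printCoefmat's numeric-only requirement is untouched.

```r
# Fold the per-coefficient text into the row names before printing.
cm <- cbind(Estimate     = c(1.20, -0.54),
            `Std. Error` = c(0.31,  0.22))
rownames(cm) <- c("age  [baseline]", "sex  [time-varying]")
printCoefmat(cm)
```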

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] promptClass misses methods

2006-11-30 Thread Ross Boylan
I've had repeated problems with promptClass missing methods, usually
telling me a class has no methods when it does.

In my current case, I've defined an S4 class "mspathCoefficients" with
a print method
setMethod("print", signature(x="mspathCoefficients"), function(x, ...)
{ # etc

The file promptClass creates has no methods in it.
> showMethods(classes="mspathCoefficients")
Function: initialize (package methods)
.Object="mspathCoefficients"
(inherited from: .Object="ANY")

Function: print (package base)
x="mspathCoefficients"

Function: show (package methods)
object="mspathCoefficients"
(inherited from: object="ANY")

> getGeneric("print")
standardGeneric for "print" defined from package "base"

function (x, ...) 
standardGeneric("print")

Methods may be defined for arguments: x 


I've looked through the code for promptClass, but nothing popped out
at me.

It may be relevant that I'm running under ESS in emacs.  However, I
get the same results running R from the command line.

Can anyone tell me what's going on here?  This is with R 2.4, and I'm
not currently using any namespace for my definitions.

Thanks.
Ross Boylan

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

