Re: [Rd] (PR#10500) Bug#454678: r-base-core: Crash when calling

2007-12-07 Thread ripley
I would say this was user error (insisting on editing non-existent 
rownames), although the argument is documented.  You could argue that 
there are implicit rownames, but they would be 1, 2, ..., not row1, row2, ....
And rownames(mat) is NULL.

For an interactive function the best solution seems to be to throw an 
error when the user asks for the impossible.
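Such a check might amount to no more than this (a sketch of mine, not
the actual patch; 'mat' and 'edit.row.names' as in the report below):

  if (edit.row.names && is.null(rownames(mat)))
      stop("cannot edit non-existent row names")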

I'll fix it for 2.7.0: it certainly isn't 'important' as it has gone 
undiscovered for many years, and edit.matrix is itself little used.


BTW, 1:dim(names)[1] is dangerous: it could be 1:0.  That was the 
motivation for seq_len.
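For example (a zero-row matrix of my own construction):

  m <- matrix(numeric(0), nrow = 0, ncol = 3)
  1:dim(m)[1]         # 1 0  -- iterates over two bogus indices
  seq_len(dim(m)[1])  # integer(0) -- the safe empty sequence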


On Fri, 7 Dec 2007, [EMAIL PROTECTED] wrote:

>
> Ben,
>
> Thanks for the bug report. I am of two minds about it, as discussed below.
> But as it does indeed create a crash / segfault, I am passing this on to the
> R bug tracker.  A suggested two-line patch is below; I tested the patch
> against a 'vanilla' 2.6.1 source tree.
>
> On 6 December 2007 at 19:32, Ben Goodrich wrote:
> |
> | Package: r-base-core
> | Version: 2.6.1-1
> | Severity: important
> |
> | Hi Dirk,
> |
> | My strong hunch is that this bug should just be forwarded upstream but
> | it might have something to do with libc6 on Debian. To reproduce it, do
> |
> | args(utils:::edit.matrix)
> | mat <- matrix(rnorm(30), nrow = 10, ncol = 3)
> | edit(mat, edit.row.names = TRUE) #crash
>
> I can confirm that it crashes 2.6.0 and 2.6.1.  I also spent the last little
> while building a (non-stripped) debug version that reveals:
>
> (gdb) where
> #0  0xb7b2ef2c in __gconv_transform_utf8_internal () from 
> /lib/i686/cmov/libc.so.6
> #1  0xb7b89f75 in mbrtowc () from /lib/i686/cmov/libc.so.6
> #2  0xb7db05e3 in Rstrwid (str=0x8052010 "\020!\005\b\002", slen=134595712,
>quote=0) at printutils.c:284
> #3  0xb7db0888 in Rstrlen (s=0x8051ff8, quote=0) at printutils.c:377
> #4  0xb7d2de24 in Rf_formatString (x=0x873bbb8, n=1, fieldwidth=0xbfd0fc04,
>quote=0) at format.c:62
> #5  0xb7db12b5 in Rf_EncodeElement (x=0x873ba10, indx=100, quote=0, dec=46 
> '.')
>at printutils.c:576
> #6  0xb754ae0d in get_col_width (DE=0xbfd100b0, col=1) at dataentry.c:804
> #7  0xb754edb4 in initwin (DE=0xbfd100b0, title=0xb755eed9 "R Data Editor")
>at dataentry.c:1986
> #8  0xb7549319 in RX11_dataentry (call=0x89b3fe8, op=0x806c970, 
> args=0x8ba40c8,
>rho=0x89b4bd0) at dataentry.c:382
> #9  0xb7e52771 in do_dataentry (call=0x89b3fe8, op=0x806c970, args=0x8ba40c8,
>rho=0x89b4bd0) at X11.c:91
> #10 0xb7d6045e in do_internal (call=0x89b4020, op=0x8061fa4, args=0x8ba40c8,
>env=0x89b4bd0) at names.c:1120
> #11 0xb7d1f352 in Rf_eval (e=0x89b4020, rho=0x89b4bd0) at eval.c:463
> #12 0xb7d21d5d in do_set (call=0x89b4074, op=0x8060df0, args=0x89b4058,
>rho=0x89b4bd0) at eval.c:1407
> #13 0xb7d1f352 in Rf_eval (e=0x89b4074, rho=0x89b4bd0) at eval.c:463
> #14 0xb7d212b4 in do_begin (call=0x89b2798, op=0x8062458, args=0x89b4090,
>rho=0x89b4bd0) at eval.c:1159
> #15 0xb7d1f352 in Rf_eval (e=0x89b2798, rho=0x89b4bd0) at eval.c:463
> #16 0xb7d1fb67 in Rf_applyClosure (call=0x89b1c9c, op=0x89b1ba0,
>arglist=0x89b1e24, rho=0x89b1d7c, suppliedenv=0x89b1cd4) at eval.c:669
> #17 0xb7d60a32 in applyMethod (call=0x89b1c9c, op=0x89b1ba0, args=0x89b1e24,
>rho=0x89b1d7c, newrho=0x89b1cd4) at objects.c:126
> #18 0xb7d61223 in Rf_usemethod (generic=0x8069af8 "edit", obj=0x8a87868,
>call=0x89b1e94, args=0x8052110, rho=0x89b1d7c, callrho=0x8073f9c,
>defrho=0x828f2fc, ans=0xbfd10d00) at objects.c:291
> #19 0xb7d61776 in do_usemethod (call=0x89b1e94, op=0x80711b8, args=0x89b1e78,
>env=0x89b1d7c) at objects.c:399
> #20 0xb7d1f352 in Rf_eval (e=0x89b1e94, rho=0x89b1d7c) at eval.c:463
> #21 0xb7d1fb67 in Rf_applyClosure (call=0x89b2230, op=0x89b2150,
>arglist=0x89b1e24, rho=0x8073f9c, suppliedenv=0x8073fb8) at eval.c:669
> #22 0xb7d1f601 in Rf_eval (e=0x89b2230, rho=0x8073f9c) at eval.c:507
> #23 0xb7d4a879 in Rf_ReplIteration (rho=0x8073f9c, savestack=0, browselevel=0,
>state=0xbfd1116c) at main.c:263
> #24 0xb7d4aa61 in R_ReplConsole (rho=0x8073f9c, savestack=0, browselevel=0)
>at main.c:312
> #25 0xb7d4bec7 in run_Rmainloop () at main.c:975
> #26 0xb7d4beee in Rf_mainloop () at main.c:982
> #27 0x08048733 in main (ac=0, av=0x0) at Rmain.c:35
> #28 0xb7b27450 in __libc_start_main () from /lib/i686/cmov/libc.so.6
> #29 0x08048691 in _start ()
> (gdb) up
>
> Now, two comments.
>
> Firstly, we would all prefer that R not crash.  So this may need some fixing.
>
> Secondly, I think this is bordering on user error.  As your
> snippet shows, you need to invoke args on the non-exported edit.matrix to
> learn about the edit.row.names argument. Moreover, you also know full well
> from looking at this that this will only be true when there actually are
> names set --- and you then proceed to call it when there are none.  Guess
> what:  it blows up.
>
> So we could fix this in a number of places.  Here

Re: [Rd] [R] R CMD Build feature searches or requests

2007-12-07 Thread Prof Brian Ripley
This is clearly an R-devel topic, so I've moved it there.
Please re-read the descriptions of the lists in the posting guide.

On Thu, 6 Dec 2007, Johannes Graumann wrote:

> Hello,
>
> I'm missing two features in "R CMD build":
> 1) Easy building of Windows/zip packaged package version alongside the
> *nix-style *.tar.gz.
> Right now I'm doing a scripted version of
>R CMD build <package>
>R CMD INSTALL <package>
>mkdir tmp
>cp -r /usr/local/lib/R/site-library/<package> tmp/
>cd tmp
>zip -r <package>_<version>.zip <package>
>mv *.zip ..
>cd ..
>rm -rf tmp
> I was wondering whether it wouldn't be helpful to others maintaining
> packages not requiring genuine cross-compilation (only containing R code)
> to deal with this via an option to "R CMD build". Something
> like '-zip-package' might do it ...

But you needed to install first, which 'build' does not do by default. 
This seems a rare need that you can script for yourself.

> 2) My scripted solution right now also automatically increments version
> numbers and adjusts dates in /man/-package.Rd and
> /DESCRIPTION, ensuring progressing and continuous package naming.
> Would be nice to have an "R CMD build"-option to take care of that too ...

R CMD build adds a date when the packaging was done, but that need not be 
the date of the package.  For my packages the date is that of the last 
change, but the final packaging for distribution can be much later, at a 
time related to R release dates.  In particular, the version of the 
DESCRIPTION file in the svn archive is the master, not that distributed.

Incrementing version numbers automatically is nigh impossible: some people 
omit -0 for example, so what is the next version after 1.7?  1.8? 1.8-1? 
1.8.1?

> Please let me know what you think or where to find the functionality in case
> I overlooked it.

If you find that enough people support your wish for 1 (and so far I have 
seen no response at all), you could contribute a patch for review and 
possible inclusion in a future version of R.  But not, please, to R-help.


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK  Fax:  +44 1865 272595

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Duncan Murdoch
On 12/7/2007 8:10 AM, Peter Dalgaard wrote:
> Ben Bolker wrote:
>>   At this point I'd just like to advertise the "bbmle" package
>> (on CRAN) for those who respectfully disagree, as I do, with Peter over
>> this issue.  I have added a data= argument to my version
>> of the function that allows other variables to be passed
>> to the objective function.  It seems to me that this is perfectly
>> in line with the way that other modeling functions in R
>> behave.
>>   
> This is at least cleaner than abusing the "fixed" argument. As you know,
> I have reservations, one of which is that it is not a given that I want
> it to behave just like other modeling functions, e.g. a likelihood
> function might refer to more than one data set, and/or data that are not
> structured in the traditional data frame format. The design needs more
> thought than just adding arguments.

We should allow more general things to be passed as data arguments in 
cases where it makes sense.  For example a list with names or an 
environment would be a reasonable way to pass data that doesn't fit into 
a data frame.

> I still prefer a design based on a plain likelihood function. Then we can
> discuss how to construct such a function so that the data are
> incorporated in a flexible way.  There are many ways to do this, I've
> shown one, here's another:
> 
>> f <- function(lambda) -sum(dpois(x, lambda, log=T))
>> d <- data.frame(x=rpois(10000, 12.34))
>> environment(f)<-evalq(environment(),d)

We really need to expand as.environment, so that it can convert data 
frames into environments.  You should be able to say:

environment(f) <- as.environment(d)

and get the same result as

environment(f)<-evalq(environment(),d)

But I'd prefer to avoid the necessity for users to manipulate the 
environment of a function.  I think the pattern

model( f, data=d )

being implemented internally as

environment(f) <- as.environment(d, parent = environment(f))

is very nice and general.  It makes things like cross-validation, 
bootstrapping, etc. conceptually cleaner:  keep the same 
formula/function f, but manipulate the data and see what happens.
It does have problems when d is an environment that already has a 
parent, but I think a reasonable meaning in that case would be to copy 
its contents into a new environment with the new parent set.
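A minimal sketch of how that pattern could be wired up (dfAsEnv is a
hypothetical helper of mine, not the as.environment that R currently
provides):

  dfAsEnv <- function(d, parent = parent.frame()) {
      e <- new.env(parent = parent)
      for (nm in names(d)) assign(nm, d[[nm]], envir = e)
      e
  }

  f <- function(lambda) -sum(dpois(x, lambda, log = TRUE))
  d <- data.frame(x = rpois(10000, 12.34))
  environment(f) <- dfAsEnv(d, parent = environment(f))
  f(12)  # the free variable x now resolves to d$x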

Duncan Murdoch


>> mle(f, start=list(lambda=10))
> 
> Call:
> mle(minuslogl = f, start = list(lambda = 10))
> 
> Coefficients:
>  lambda
> 12.3402
> 
> It is not at all an unlikely design to have mle() as a generic function
> which works on many kinds of objects, the default method being
> function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
> function returning (tada!) the negative log likelihood function.
>>   (My version also has a cool formula interface and other
>> bells and whistles, and I would love to get feedback from other
>> useRs about it.)
>>
>>cheers
>> Ben Bolker
>>
>>   
> 
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Friday question: negative zero

2007-12-07 Thread Robin Hankin
Hello everyone



On 1 Sep 2007, at 01:39, Duncan Murdoch wrote:

> The IEEE floating point standard allows for negative zero, but it's
> hard to know that you have one in R.  One reliable test is to take the
> reciprocal.  For example,
>
>> y <- 0
>> 1/y
> [1] Inf
>> y <- -y
>> 1/y
> [1] -Inf
>
> The other day I came across one in complex numbers, and it took me a
> while to figure out that negative zero was what was happening:
>
>> x <- complex(real = -1)
>> x
> [1] -1+0i
>> 1/x
> [1] -1+0i
>> x^(1/3)
> [1] 0.5+0.8660254i
>> (1/x)^(1/3)
> [1] 0.5-0.8660254i
>
> (The imaginary part of 1/x is negative zero.)
>
> As a Friday question:  are there other ways to create and detect
> negative zero in R?
>
> And another somewhat more serious question:  is the behaviour of
> negative zero consistent across platforms?  (The calculations above
> were done in Windows in R-devel.)
>



I have been pondering branch cuts and branch points
for some functions which I am implementing.

In this area, it is very important to know whether one has
+0 or -0.

Take the log() function, where it is sometimes very important to know
whether one is just above the negative real axis or just below it:



(i).  Small y

 > y <- 1e-100
 > log(-1 +  1i*y)
[1] 0+3.141593i
 > y <- -y
 > log(-1 +  1i*y)
[1] 0-3.141593i


(ii)  Zero y.


 > y <- 0
 > log(-1 +  1i*y)
[1] 0+3.141593i
 > y <- -y
 > log(-1 +  1i*y)
[1] 0+3.141593i
 >


[i.e. small imaginary jumps have a discontinuity, infinitesimal jumps don't].

This behaviour is undesirable (IMO): one would like log(-1+0i) to be
different from log(-1-0i).

Tony Plate's example shows that even though y <- 0; identical(y, -y) is
TRUE, one has identical(1/y, 1/(-y)) is FALSE, so the sign is not
discarded.
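That reciprocal trick yields a small detector (my sketch, not from the
thread):

  is.negzero <- function(x) (x == 0) & (1/x == -Inf)
  is.negzero(0)   # FALSE
  is.negzero(-0)  # TRUE: unary minus really produces a negative zero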

My complex function does have a branch cut that follows a portion of
the negative real axis, but the other cuts follow absurdly complicated
implicit equations.

At this point one needs the IEEE requirement that x-x == +0 [i.e. not -0]
for any real x; one then finds that (s-t) and -(t-s) are numerically
equal but not necessarily indistinguishable.
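For example (my illustration of that requirement):

  s <- 1; t <- 1
  1/(s - t)     #  Inf: s-t is +0
  1/(-(t - s))  # -Inf: -(t-s) is -0, equal to s-t yet distinguishable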

One of my earlier questions involved branch cuts for the inverse trig
functions, but (IIRC) the patch I supplied only tested for the imaginary
part being > 0; would it be possible to include information about signed
zero in these or other functions?
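For what it is worth (my observation, not from the thread), atan2()
already honours the sign of zero on IEEE platforms, which is exactly
the branch information wanted here:

  atan2(0, -1)   #  3.141593
  atan2(-0, -1)  # -3.141593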






> Duncan Murdoch
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Building packages

2007-12-07 Thread Barry Rowlingson
I've started a new package and I'm trying to work out the best way to do 
it. I'm managing my package source directory with SVN, but "R CMD build" 
likes to dump things in the inst/doc directory when making vignette PDF 
files. I don't want to keep these in SVN (they aren't strictly 
'source'), so it set me thinking.

One of the other projects I work with has an out-of-source build system. 
You make a 'build' directory, run a config system (cmake-based) and then 
'make' does everything in the build directory without touching the 
source tree. Very nice and neat. How much work would it take to have 
something similar for building R packages? At present I've just got some 
svn:ignore settings to stop SVN bothering me.

  I also hit the problem of vignettes needing the package to be 
installed before being able to build them, but not being able to install 
the package because the vignettes wouldn't build without the package 
already being installed. The fix is to build with --no-vignettes, then 
install the package, then build with the vignettes enabled. Seems 
kludgy, plus it means that vignettes are always built with the currently 
installed package and not the currently-being-installed package. So I 
install and do a second pass to get it all right again.

  Or am I doing it wrong?

  Once I get smooth running of R package development and SVN I might 
write it up for R-newsletter - there's a couple of other tricks I've had 
to employ...

Barry

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Ben Bolker
Gabor Grothendieck wrote:
> On Dec 7, 2007 8:43 AM, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
>> On 12/7/2007 8:10 AM, Peter Dalgaard wrote:


>>> This is at least cleaner than abusing the "fixed" argument. 

   Agreed.

>>> As you know,
>>> I have reservations, one of which is that it is not a given that I want
>>> it to behave just like other modeling functions, e.g. a likelihood
>>> function might refer to more than one data set, and/or data that are not
>>> structured in the traditional data frame format. The design needs more
>>> thought than just adding arguments.

  Fair enough.

>> We should allow more general things to be passed as data arguments in
>> cases where it makes sense.  For example a list with names or an
>> environment would be a reasonable way to pass data that doesn't fit into
>> a data frame.

  Well, my current design specifies a named list: I *think* (but am not
sure) it works gracefully with a data frame as well.  Hadn't thought of
environments -- I'm aiming this more at a lower-level user to whom that
wouldn't occur.  (But I hope it would be possible to design a system
that would be usable by intermediate users and still useful for experts.)

>>> I still prefer a design based on a plain likelihood function. Then we can
>>> discuss how to construct such a function so that the data are
>>> incorporated in a flexible way.  

   My version still allows a plain likelihood function (I agree that
there will always be situations that are too complicated to encapsulate
as a formula).

>>> There are many ways to do this, I've
>>> shown one, here's another:
>>>
>>>> f <- function(lambda) -sum(dpois(x, lambda, log=T))
>>>> d <- data.frame(x=rpois(10000, 12.34))
>>>> environment(f)<-evalq(environment(),d)
>> We really need to expand as.environment, so that it can convert data
>> frames into environments.  You should be able to say:
>>
>> environment(f) <- as.environment(d)
>>
>> and get the same result as
>>
>> environment(f)<-evalq(environment(),d)
>>
>> But I'd prefer to avoid the necessity for users to manipulate the
>> environment of a function.  

HEAR, HEAR.

>> I think the pattern
>>
>> model( f, data=d )
>>
>> being implemented internally as
>>
>> environment(f) <- as.environment(d, parent = environment(f))
>>
>> is very nice and general.  It makes things like cross-validation,
>> bootstrapping, etc. conceptually cleaner:  keep the same
>> formula/function f, but manipulate the data and see what happens.
>> It does have problems when d is an environment that already has a
>> parent, but I think a reasonable meaning in that case would be to copy
>> its contents into a new environment with the new parent set.
>>

  OK.

>> Duncan Murdoch
> 
> Something close to that is already possible in proto and it's cleaner in proto
> since the explicit environment manipulation is unnecessary as it occurs
> implicitly:
> 
> 1. In terms of data frame d from Peter Dalgaard's post the code
> below is similar to my last post but it replaces the explicit
> manipulation of f's environemnt with the creation of proto object
> p on line ###.  That line converts d to an anonymous proto object
> containing the components of d, in this case just x, and then
> creates a child object p which can access x via delegation/inheritance.
> 
> library(proto)
> set.seed(1)
> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> d <- data.frame(x=rpois(100, 12.34))
> p <- proto(as.proto(as.list(d)), f = f) ###
> mle(p[["f"]], start=list(lambda=10))
> 
> 2. Or the ### line could be replaced with the following line
> which places f and the components of d, in this case just x,
> directly into p:
> 
> p <- proto(f = f, envir = as.proto(as.list(d)))
> 
> again avoiding the explicit reset of environment(f) and the evalq.
> 
>>
>>>> mle(f, start=list(lambda=10))
>>> Call:
>>> mle(minuslogl = f, start = list(lambda = 10))
>>>
>>> Coefficients:
>>>  lambda
>>> 12.3402
>>>

 *** I still feel very strongly that end users shouldn't have
to deal with closures, environments, protos, etc. --  I want
mle to LOOK LIKE a standard modeling function if at all possible,
even if it can be used more creatively and flexibly by
those who know how. ***

>>> It is not at all an unlikely design to have mle() as a generic function
>>> which works on many kinds of objects, the default method being
>>> function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
>>> function returning (tada!) the negative log likelihood function.

   Agreed.  This would work for formulas, too.

  Have any of you guys looked at bbmle?  The evaluation stuff is
quite ugly, since I was groping around in the dark.  I would love
to clean it up in a way that made everyone happy (?) with it and
possibly allowed it to be merged back into mle.

   Ben


Re: [Rd] Building packages

2007-12-07 Thread Barry Rowlingson
Oleg Sklyar wrote:

> If I am not mistaken R CMD build builds the package temporarily and uses
> that build to build the vignette, so where is the problem? All my
> vignettes build fine on both Linux and Windows and on Windows you
> actually see that running R CMD build --binary builds the source code
> two times - exactly for the above purposes.

  Ah ha. I'm building as a user and so I've been installing into a 
private library: ~/Rlibs. Hence my vignette has had 
library(foo,lib="~/Rlibs"). I was unaware that it would get the 
currently-being-built package in a library of its own! Thanks!

  www.doingitwrong.com

Barry

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] os x crash using rpanel and tcltk (PR#10495)

2007-12-07 Thread Aaron Robotham
The machine in question is a black MacBook, a pretty standard setup
with the rest of the details as listed in my first post (R 2.6.1 OSX
10.4.11). I'll give the back trace a try and let you know the result.

On 06/12/2007, Simon Urbanek <[EMAIL PROTECTED]> wrote:
>
> On Dec 6, 2007, at 9:26 AM, Aaron Robotham wrote:
>
> > I know of gdb but I'm not certain how to use it with Mac OS X's
> > R.app, do you just do something like "gdb open R.app" in the
> > terminal?
> >
>
> You can attach it once it's running - just type "attach R" in gdb
> while R is running, then "c", then let it crash and then "bt".
>
> FWIW: I cannot reproduce the problem and I have tried 3 different
> machines... (you didn't even tell us what machine type this is ...).
>
> Cheers,
> Simon
>
>
> > Interestingly I don't get this crash when I launch the X11 version of
> > R through the terminal, so this would suggest the bug in question is
> > to do with the actual Rgui in R.app. Hopefully this information might
> > help to narrow down the problem. Any advice for using gdb on R.app
> > would be appreciated, I couldn't find much in the way of guidance when
> > searching online.
> >
> > thanks
> >
> > Aaron
> >
> > On 05/12/2007, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
> >> [EMAIL PROTECTED] wrote:
> >>> Hello,
> >>> I've recently discovered a persistent issue with rpanel when running
> >>> R.app (2.6.1) on Mac OS X 10.4.11. tcltk and rpanel load without any
> >>> apparent error, and the interactive panels appear to work as
> >>> expected,
> >>> however upon closing the panels rpanel has created I get
> >>> catastrophic
> >>> errors and R crashes completely. For the most part R manages to
> >>> crash
> >>> with dignity and work can be saved, but sometimes it will crash
> >>> straight out. Below is an example of an entire work session (only
> >>> base
> >>> packages loaded) with the crash at the end typical of those
> >>> encountered:
> >>>
> >>>
>  library(tcltk)
> 
> >>> Loading Tcl/Tk interface ... done
> >>>
>  library(rpanel)
> 
> >>> Package `rpanel', version 1.0-4
> >>> type help(rpanel) for summary information
> >>>
>  density.draw <- function(panel) {
> 
> >>> +   plot(density(panel$x, bw = panel$h))
> >>> +   panel
> >>> + }
> >>>
>  panel <- rp.control(x = rnorm(50))
>  rp.slider(panel, h, 0.5, 5, log = TRUE, action = density.draw)
> 
> >>>
> >>> *** caught bus error ***
> >>> address 0x0, cause 'non-existent physical address'
> >>>
> >>> Possible actions:
> >>> 1: abort (with core dump, if enabled)
> >>> 2: normal R exit
> >>> 3: exit R without saving workspace
> >>> 4: exit R saving workspace
> >>>
> >>> All packages that are required are up to date, and I can find no
> >>> evidence of similar issues from searching the mailing lists. Any
> >>> suggestions would be appreciated.
> >>>
> >>>
> >> Can you run this under gdb? A breakpoint in the error handler and a
> >> backtrace could be valuable.
> >>
> >>> Aaron
> >>>
> >>> __
> >>> R-devel@r-project.org mailing list
> >>> https://stat.ethz.ch/mailman/listinfo/r-devel
> >>>
> >>
> >>
> >> --
> >>   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
> >>  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
> >> (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45)
> >> 35327918
> >> ~~ - ([EMAIL PROTECTED])  FAX: (+45)
> >> 35327907
> >>
> >>
> >>
> >>
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >
> >
>
>
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Peter Dalgaard
Ben Bolker wrote:
>   At this point I'd just like to advertise the "bbmle" package
> (on CRAN) for those who respectfully disagree, as I do, with Peter over
> this issue.  I have added a data= argument to my version
> of the function that allows other variables to be passed
> to the objective function.  It seems to me that this is perfectly
> in line with the way that other modeling functions in R
> behave.
>   
This is at least cleaner than abusing the "fixed" argument. As you know,
I have reservations, one of which is that it is not a given that I want
it to behave just like other modeling functions, e.g. a likelihood
function might refer to more than one data set, and/or data that are not
structured in the traditional data frame format. The design needs more
thought than just adding arguments.

I still prefer a design based on a plain likelihood function. Then we can
discuss how to construct such a function so that the data are
incorporated in a flexible way.  There are many ways to do this, I've
shown one, here's another:

> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> d <- data.frame(x=rpois(10000, 12.34))
> environment(f)<-evalq(environment(),d)
> mle(f, start=list(lambda=10))

Call:
mle(minuslogl = f, start = list(lambda = 10))

Coefficients:
 lambda
12.3402

It is not at all an unlikely design to have mle() as a generic function
which works on many kinds of objects, the default method being
function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
function returning (tada!) the negative log likelihood function.
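A sketch of that design (mine, purely illustrative; it assumes mle()
were made an S3 generic, and delegates the actual fitting to
stats4::mle):

  minuslogl <- function(object, ...) UseMethod("minuslogl")
  mle <- function(object, ...) UseMethod("mle")
  mle.function <- function(object, ...) stats4::mle(object, ...)
  mle.default  <- function(object, ...) mle(minuslogl(object), ...)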
>   (My version also has a cool formula interface and other
> bells and whistles, and I would love to get feedback from other
> useRs about it.)
>
>cheers
> Ben Bolker
>
>   


-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R installer

2007-12-07 Thread Simon Urbanek

On Dec 4, 2007, at 9:11 PM, Hin-Tak Leung wrote:

> Simon Urbanek wrote:
> 
>> Because it *is* the gcc files? (Note the "/local" in the paths.)  
>> Full R comes with GNU Fortran 4.2.1, because Apple doesn't offer  
>> any Fortran compiler and most other Fortran compiler binaries for  
>> Mac OS X out on the web are not really working well. It installs  
>> in /usr/local.
> 
>
> This is what I don't understand or agree on. The R windows installer  
> does *not* try to install any of mingw gcc or Rtools. Okay, you  
> cannot install source packages on windows without mingw gcc or  
> Rtools, but that's a caveat.
>
> If I were an Apple user (which I am not), there is a chance that I  
> might have my own gcc/gfortran in /usr/local and I surely do not  
> want R to tamper with them. If you need runtime libgfortran support,
> you should just bundle gfortran.so and gcc.so if necessary (there are
> static alternatives), and put those in R's area.
>

That's exactly what we do. Apparently you didn't bother to read my
e-mail (the part you "snipped") or to look at the installer. Please do
your homework before posting wild (and false) speculations.

Cheers,
Simon



> (recently, I went to the trouble of bootstrapping gfortran 4.2.x
> for cross-compiling - see the mingw-devel mailing list archive - because
> mingw don't distribute that as binary. I have win32 R under wine,
> but I really would *not* appreciate if win32 R tries to do anything  
> substantially more than just put itself in a directory...).
>
> 
>> No. The failure is due to a strange symlink in /usr/local/lib that  
>> points to itself. I suspect that this has something to do with an  
>> upgrade from Tiger to Leopard or Xcode 3 installation and that  
> Apple actually creates that infinite symlink. Given that there is
> "/usr/local/lib 1" lingering around, I'd bet that
>> sudo rm /usr/local/lib
>> sudo mv '/usr/local/lib 1' /usr/local/lib
>> will fix the problem.
>
> 
>
> Yeah, that apple box is *so* broken. :-)
>
> HTL
>
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Building packages

2007-12-07 Thread Oleg Sklyar
These files in the SVN tree do not harm the things that are checked
in. However, it is indeed reasonable to keep the rubbish out, so:

> I've started a new package and I'm trying to work out the best way to do 
> it. I'm managing my package source directory with SVN, but "R CMD build" 
> likes to dump things in the inst/doc directory when making vignette PDF 
> files. I don't want to keep these in SVN (they aren't strictly 
> 'source'), so it set me thinking.
Solution 1: copy the package SVN dir elsewhere and build/install from
there
Solution 2: a better one - make a two-line shell script that runs solution
1 (what I do)

This will also prevent gcc from populating your svn src directory
with .o, .so, .d, .dll files.

> One of the other projects I work with has an out-of-source build system. 
> You make a 'build' directory, run a config system (cmake-based) and then 
> 'make' does everything in the build directory without touching the 
> source tree. Very nice and neat. How much work would it take to have 
> something similar for building R packages? At present I've just got some 
> svn:ignore settings to stop SVN bothering me.
R does understand 'configure', which is more reasonable than requiring
cmake to be installed. Think of multi-platform builds etc.

>   I also hit the problem of vignettes needing the package to be 
> installed before being able to build them, but not being able to install 
> the package because the vignettes wouldn't build without the package 
> already being installed. The fix is to build with --no-vignettes, then 
> install the package, then build with the vignettes enabled. Seems 
> kludgy, plus it means that vignettes are always built with the currently 
> installed package and not the currently-being-installed package. So I 
> install and do a second pass to get it all right again.
If I am not mistaken R CMD build builds the package temporarily and uses
that build to build the vignette, so where is the problem? All my
vignettes build fine on both Linux and Windows and on Windows you
actually see that running R CMD build --binary builds the source code
two times - exactly for the above purposes.

> 
>   Or am I doing it wrong?
> 
>   Once I get smooth running of R package development and SVN I might 
> write it up for R-newsletter - there's a couple of other tricks I've had 
> to employ...
What exactly?

> 
> Barry
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
-- 
Dr Oleg Sklyar * EBI-EMBL, Cambridge CB10 1SD, UK * +44-1223-494466

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Building packages

2007-12-07 Thread Barry Rowlingson
Gabor Grothendieck wrote:
> An svn checkout directory can contain a mix of files that
> are mirrored in the svn and not mirrored.  In particular, if you
> add a new file into your checkout directory it will not automatically
> go into the repository on your next commit unless you specifically
> place that file under svn control so junk files remain local.

  True, but 'svn status' will keep annoying you with:

? inst/doc/foo.eps

  until you tell it to ignore it ["svn propedit svn:ignore ." and then 
enter some expressions].

Barry

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] (PR#10500) Bug#454678: followup

2007-12-07 Thread goodrich
[I was overlooked on the CC. Hopefully this message does not create a
new bug report.]

> Prof Brian Ripley wrote:
> I would say this was user error (insisting on editing non-existent
> rownames), although the argument is documented.  You could argue that
> there are implicit rownames, but they would be 1, 2 ... not row1, row2
>   And rownames(mat) is NULL.
>
> For an interactive function the best solution seems to be to throw an
> error when the user asks for the impossible.
>
> I'll fix it for 2.7.0: it certainly isn't 'important' as it has gone
> undiscovered for many years, and edit.matrix is itself little used.
>
>
> BTW, 1:dim(names)[1] is dangerous: it could be 1:0.  That was the
> motivation for seq_len.

I would agree that it is a rare user error, but my original mistake was
a little more benign than the one that is depicted in the bug report. I
just forgot to call rownames()<- before calling edit(); that could have
happened to anyone. Perhaps one reason why this issue has not been
reported before is that in 2.5.1 at least, it produces an error message
rather than crashing (which I noted in the original bug report to Debian
but that was a little hard to see by the time it got forwarded to
R-bugs). In some ways, I think throwing an error in 2.7.0 would be
better than the behavior of edit.data.frame() in this case, which is to
add row names of 1, 2, ... . -- Thanks, Ben

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Building packages

2007-12-07 Thread Gabor Grothendieck
An svn checkout directory can contain a mix of files that
are mirrored in the svn and not mirrored.  In particular, if you
add a new file into your checkout directory it will not automatically
go into the repository on your next commit unless you specifically
place that file under svn control so junk files remain local.

You can exclude files from R CMD build using the .Rbuildignore file.
See the 'Writing R Extensions' manual.
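For instance (my illustration; each line of .Rbuildignore is a
Perl-style regular expression matched against file paths relative to
the package's top-level directory):

  ^inst/doc/.*\.eps$
  ^src/.*\.o$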

On Dec 7, 2007 11:07 AM, Barry Rowlingson <[EMAIL PROTECTED]> wrote:
> I've started a new package and I'm trying to work out the best way to do
> it. I'm managing my package source directory with SVN, but "R CMD build"
> likes to dump things in the inst/doc directory when making vignette PDF
> files. I don't want to keep these in SVN (they aren't strictly
> 'source'), so it set me thinking.
>
> One of the other projects I work with has an out-of-source build system.
> You make a 'build' directory, run a config system (cmake-based) and then
> 'make' does everything in the build directory without touching the
> source tree. Very nice and neat. How much work would it take to have
> something similar for building R packages? At present I've just got some
> svn:ignore settings to stop SVN bothering me.
>
>  I also hit the problem of vignettes needing the package to be
> installed before being able to build them, but not being able to install
> the package because the vignettes wouldn't build without the package
> already being installed. The fix is to build with --no-vignettes, then
> install the package, then build with the vignettes enabled. Seems
> kludgy, plus it means that vignettes are always built with the currently
> installed package and not the currently-being-installed package. So I
> install and do a second pass to get it all right again.
>
>  Or am I doing it wrong?
>
>  Once I get smooth running of R package development and SVN I might
> write it up for R-newsletter - there's a couple of other tricks I've had
> to employ...
>
> Barry
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Gabor Grothendieck
On Dec 7, 2007 8:43 AM, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 12/7/2007 8:10 AM, Peter Dalgaard wrote:
> > Ben Bolker wrote:
> >>   At this point I'd just like to advertise the "bbmle" package
> >> (on CRAN) for those who respectfully disagree, as I do, with Peter over
> >> this issue.  I have added a data= argument to my version
> >> of the function that allows other variables to be passed
> >> to the objective function.  It seems to me that this is perfectly
> >> in line with the way that other modeling functions in R
> >> behave.
> >>
> > This is at least cleaner than abusing the "fixed" argument. As you know,
> > I have reservations, one of which is that it is not a given that I want
> > it to behave just like other modeling functions, e.g. a likelihood
> > function might refer to more than one data set, and/or data that are not
> > structured in the traditional data frame format. The design needs more
> > thought than just adding arguments.
>
> We should allow more general things to be passed as data arguments in
> cases where it makes sense.  For example a list with names or an
> environment would be a reasonable way to pass data that doesn't fit into
> a data frame.
>
> > I still prefer a design based on a plain likelihood function. Then we can
> > discuss how to construct such a function so that the data are
> > incorporated in a flexible way.  There are many ways to do this, I've
> > shown one, here's another:
> >
> >> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> >> d <- data.frame(x=rpois(10000, 12.34))
> >> environment(f)<-evalq(environment(),d)
>
> We really need to expand as.environment, so that it can convert data
> frames into environments.  You should be able to say:
>
> environment(f) <- as.environment(d)
>
> and get the same result as
>
> environment(f)<-evalq(environment(),d)
>
> But I'd prefer to avoid the necessity for users to manipulate the
> environment of a function.  I think the pattern
>
> model( f, data=d )
>
> being implemented internally as
>
> environment(f) <- as.environment(d, parent = environment(f))
>
> is very nice and general.  It makes things like cross-validation,
> bootstrapping, etc. conceptually cleaner:  keep the same
> formula/function f, but manipulate the data and see what happens.
> It does have problems when d is an environment that already has a
> parent, but I think a reasonable meaning in that case would be to copy
> its contents into a new environment with the new parent set.
>
> Duncan Murdoch

Something close to that is already possible in proto and it's cleaner in proto
since the explicit environment manipulation is unnecessary as it occurs
implicitly:

1. In terms of data frame d from Peter Dalgaard's post the code
below is similar to my last post but it replaces the explicit
manipulation of f's environment with the creation of proto object
p on line ###.  That line converts d to an anonymous proto object
containing the components of d, in this case just x, and then
creates a child object p which can access x via delegation/inheritance.

library(proto)
set.seed(1)
f <- function(lambda) -sum(dpois(x, lambda, log=T))
d <- data.frame(x=rpois(100, 12.34))
p <- proto(as.proto(as.list(d)), f = f) ###
mle(p[["f"]], start=list(lambda=10))

2. Or the ### line could be replaced with the following line
which places f and the components of d, in this case just x,
directly into p:

p <- proto(f = f, envir = as.proto(as.list(d)))

again avoiding the explicit reset of environment(f) and the evalq.

>
>
> >> mle(f, start=list(lambda=10))
> >
> > Call:
> > mle(minuslogl = f, start = list(lambda = 10))
> >
> > Coefficients:
> >  lambda
> > 12.3402
> >
> > It is not at all an unlikely design to have mle() as a generic function
> > which works on many kinds of objects, the default method being
> > function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
> > function returning (tada!) the negative log likelihood function.
> >>   (My version also has a cool formula interface and other
> >> bells and whistles, and I would love to get feedback from other
> >> useRs about it.)
> >>
> >>cheers
> >> Ben Bolker
> >>
> >>
> >
> >
>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Gabor Grothendieck
On Dec 7, 2007 8:10 AM, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
> Ben Bolker wrote:
> >   At this point I'd just like to advertise the "bbmle" package
> > (on CRAN) for those who respectfully disagree, as I do, with Peter over
> > this issue.  I have added a data= argument to my version
> > of the function that allows other variables to be passed
> > to the objective function.  It seems to me that this is perfectly
> > in line with the way that other modeling functions in R
> > behave.
> >
> This is at least cleaner than abusing the "fixed" argument. As you know,
> I have reservations, one of which is that it is not a given that I want
> it to behave just like other modeling functions, e.g. a likelihood
> function might refer to more than one data set, and/or data that are not
> structured in the traditional data frame format. The design needs more
> thought than just adding arguments.
>
> I still prefer a design based on a plain likelihood function. Then we can
> discuss how to construct such a function so that the data are
> incorporated in a flexible way.  There are many ways to do this, I've
> shown one, here's another:
>
> > f <- function(lambda) -sum(dpois(x, lambda, log=T))
> > d <- data.frame(x=rpois(10000, 12.34))
> > environment(f)<-evalq(environment(),d)
> > mle(f, start=list(lambda=10))
>
> Call:
> mle(minuslogl = f, start = list(lambda = 10))
>
> Coefficients:
>  lambda
> 12.3402
>

The explicit environment manipulation is what I was referring to but
we can simplify it using proto.  Create a proto object to hold
f and x then pass the f in the proto object (rather than the
original f) to mle.  That works because proto automatically resets
the environment of f when it is added, avoiding the evalq.

> set.seed(1)
> library(proto)
> f <- function(lambda) -sum(dpois(x, lambda, log=TRUE))
> p <- proto(f = f, x = rpois(100, 12.34))
> mle(p[["f"]], start = list(lambda = 10))

Call:
mle(minuslogl = p[["f"]], start = list(lambda = 10))

Coefficients:
  lambda
12.46000

> It is not at all an unlikely design to have mle() as a generic function
> which works on many kinds of objects, the default method being
> function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
> function returning (tada!) the negative log likelihood function.
> >   (My version also has a cool formula interface and other
> > bells and whistles, and I would love to get feedback from other
> > useRs about it.)
> >
> >cheers
> > Ben Bolker
> >
> >
>
>
> --
>
>   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
>  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
>  (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
> ~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Building packages

2007-12-07 Thread hadley wickham
On 12/7/07, Barry Rowlingson <[EMAIL PROTECTED]> wrote:
> Gabor Grothendieck wrote:
> > An svn checkout directory can contain a mix of files that
> > are mirrored in the svn and not mirrored.  In particular, if you
> > add a new file into your checkout directory it will not automatically
> > go into the repository on your next commit unless you specifically
> > place that file under svn control so junk files remain local.
>
>   True, but 'svn status' will keep annoying you with:
>
> ? inst/doc/foo.eps
>
>   until you tell it to ignore it ["svn propedit svn:ignore ." and then
> enter some expressions].

Yes, but that's completely normal svn operation - you ignore the
non-source files so that they don't interfere with your view of the
source files.  You particularly need this when working with LaTeX.

I have

alias svnignore='svn pe svn:ignore'

in my .profile to save a little typing.

Hadley
-- 
http://had.co.nz/

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Ben Bolker
Luke Tierney wrote:
> On Fri, 7 Dec 2007, Duncan Murdoch wrote:
> 
>
> 
> For working at the general-likelihood level I think it is better to
> encourage the approach of defining likelihood constructor functions.
> The problem with using f, data is that you need to match the names
> used in f and in data, so either you have to explicitly write out f
> with the names you have in data or you have to modify data to use the
> names f likes -- in the running example think
> 
> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> d <- data.frame(y=rpois(10000, 12.34))
> 
> somebody has to connect up the x in f with the y in d. With a negative
> log likelihood constructor defined, for example, as
> 
> makePoissonNegLogLikelihood <- function(x)
> function(lambda) -sum(dpois(x, lambda, log=T))
> 
> this happens naturally with
> 
> makePoissonNegLogLikelihood(d$y)
> 
> 

  I hate to sound like a jerk, but I do hope that in the end we come
up with a solution that will still be accessible to people who don't
quite have the concept of writing functions to produce functions.  I
feel it is "natural" for people who have multiple data sets to have the
variables named similarly in different data sets.  All of the
constructor stuff is still accessible to anyone who wants to use the
function that way ... is there any way to do a cheesy default
constructor that is just equivalent to taking the likelihood function
and arranging for it to be evaluated in an environment containing
the data?  That way if "nllmaker" below were just a formula
or a log-likelihood function it could still work ...
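One cheesy default of that kind might look as follows (my sketch;
nllWithData is a made-up name):

  nllWithData <- function(nll, data) {
      e <- new.env(parent = environment(nll))
      for (nm in names(data)) assign(nm, data[[nm]], envir = e)
      environment(nll) <- e
      nll
  }
  # e.g. mle(nllWithData(f, d), start = list(lambda = 10)),
  # with f and d as in the earlier posts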

  [snip]
> Both (simple) bootstrapping and (simple leave-one-out) cross-validation
> require a data structure with a notion of cases, which is much more
> restrictive than the context in which mle can be used.  A more generic
> approach to bootstrapping that might fit closer to the level of
> generality of mle might be parameterized in terms of a negative log
> likelihood constructor, a starting value constructor, and a resampling
> function, with a single iteration implemented something like
> 
> mleboot1 <- function(nllmaker, start, resample) {
>     newdata <- resample()
>     newstart <- do.call(start, newdata)
>     nllfun <- do.call(nllmaker, newdata)
>     mle(nllfun, start = newstart)
> }
> 
> This would leave decisions on the resampling method and data structure
> up to the user. Something similar could be done with K-fold CV.
> 
> luke
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Luke Tierney
On Fri, 7 Dec 2007, Duncan Murdoch wrote:

> On 12/7/2007 8:10 AM, Peter Dalgaard wrote:
>> Ben Bolker wrote:
>>>   At this point I'd just like to advertise the "bbmle" package
>>> (on CRAN) for those who respectfully disagree, as I do, with Peter over
>>> this issue.  I have added a data= argument to my version
>>> of the function that allows other variables to be passed
>>> to the objective function.  It seems to me that this is perfectly
>>> in line with the way that other modeling functions in R
>>> behave.
>>>
>> This is at least cleaner than abusing the "fixed" argument. As you know,
>> I have reservations, one of which is that it is not a given that I want
>> it to behave just like other modeling functions, e.g. a likelihood
>> function might refer to more than one data set, and/or data that are not
>> structured in the traditional data frame format. The design needs more
>> thought than just adding arguments.
>
> We should allow more general things to be passed as data arguments in
> cases where it makes sense.  For example a list with names or an
> environment would be a reasonable way to pass data that doesn't fit into
> a data frame.
>
>> I still prefer a design based on a plain likelihood function. Then we can
>> discuss how to construct such a function so that the data are
>> incorporated in a flexible way.  There are many ways to do this, I've
>> shown one, here's another:
>>
>>> f <- function(lambda) -sum(dpois(x, lambda, log=T))
>>> d <- data.frame(x=rpois(10000, 12.34))
>>> environment(f)<-evalq(environment(),d)
>
> We really need to expand as.environment, so that it can convert data
> frames into environments.  You should be able to say:
>
> environment(f) <- as.environment(d)
>
> and get the same result as
>
> environment(f)<-evalq(environment(),d)
>
> But I'd prefer to avoid the necessity for users to manipulate the
> environment of a function.  I think the pattern
>
> model( f, data=d )

For working at the general-likelihood level I think it is better to
encourage the approach of defining likelihood constructor functions.
The problem with using f, data is that you need to match the names
used in f and in data, so either you have to explicitly write out f
with the names you have in data or you have to modify data to use the
names f likes -- in the running example think

 f <- function(lambda) -sum(dpois(x, lambda, log=T))
 d <- data.frame(y=rpois(10000, 12.34))

somebody has to connect up the x in f with the y in d. With a negative
log likelihood constructor defined, for example, as

 makePoissonNegLogLikelihood <- function(x)
     function(lambda) -sum(dpois(x, lambda, log=T))

this happens naturally with

 makePoissonNegLogLikelihood(d$y)
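A usage sketch (mine, with the spelling corrected to "Poisson"):

  d <- data.frame(y = rpois(10000, 12.34))
  mle(makePoissonNegLogLikelihood(d$y), start = list(lambda = 10))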

>
> being implemented internally as
>
> environment(f) <- as.environment(d, parent = environment(f))
>
> is very nice and general.  It makes things like cross-validation,
> bootstrapping, etc. conceptually cleaner:  keep the same
> formula/function f, but manipulate the data and see what happens.
> It does have problems when d is an environment that already has a
> parent, but I think a reasonable meaning in that case would be to copy
> its contents into a new environment with the new parent set.

Both (simple) bootstrapping and (simple leave-one-out) cross-validation
require a data structure with a notion of cases, which is much more
restrictive than the context in which mle can be used.  A more generic
approach to bootstrapping that might fit closer to the level of
generality of mle might be parameterized in terms of a negative log
likelihood constructor, a starting value constructor, and a resampling
function, with a single iteration implemented something like

  mleboot1 <- function(nllmaker, start, resample) {
      newdata <- resample()
      newstart <- do.call(start, newdata)
      nllfun <- do.call(nllmaker, newdata)
      mle(nllfun, start = newstart)
  }

This would leave decisions on the resampling method and data structure
up to the user. Something similar could be done with K-fold CV.
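A hypothetical single call (my sketch, reusing the constructor above):

  y <- rpois(100, 12.34)
  mleboot1(nllmaker = makePoissonNegLogLikelihood,
           start    = function(x) list(lambda = mean(x)),
           resample = function() list(x = sample(y, replace = TRUE)))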

luke



>
> Duncan Murdoch
>
>
>>> mle(f, start=list(lambda=10))
>>
>> Call:
>> mle(minuslogl = f, start = list(lambda = 10))
>>
>> Coefficients:
>>  lambda
>> 12.3402
>>
>> It is not at all an unlikely design to have mle() as a generic function
>> which works on many kinds of objects, the default method being
>> function(object,...) mle(minuslogl(object)) and minuslogl is an extractor
>> function returning (tada!) the negative log likelihood function.
>>>   (My version also has a cool formula interface and other
>>> bells and whistles, and I would love to get feedback from other
>>> useRs about it.)
>>>
>>>cheers
>>> Ben Bolker
>>>
>>>
>>
>>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

-- 
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone:

[Rd] regression tests for unlink and wildcards fail - Solaris 10 SPARC / Sun Studio 12 (PR#10501)

2007-12-07 Thread brownjtb
Full_Name: Jim Brown
Version: 2.6.0 / 2.6.1
OS: Solaris 10 (SPARC)
Submission from: (NULL) (35.8.15.102)


I have been able to successfully compile version 2.5.1 using the Sun Studio 12
compilers on Sun Solaris 10 (SPARC).  All tests using "make check" pass with a
status of OK.   However, the following section of "reg-tests-1.R" fails when I
attempt to test after a build of either 2.6.0 or 2.6.1:

  ## regression tests for unlink and wildcards
  owd <- setwd(tempdir())
  f <- c("ftest1", "ftest2", "ftestmore", "ftest&more")
  file.create(f)
  stopifnot(file.exists(f))
  unlink("ftest?")
  stopifnot(file.exists(f) == c(FALSE, FALSE, TRUE, TRUE))
  unlink("ftest*", recursive = TRUE)
  stopifnot(!file.exists(f))

  stopifnot(unlink("no_such_file") == 0) # not an error

  dd <- c("dir1", "dir2", "dirs", "moredirs")
  for(d in dd) dir.create(d)
  dir(".")
  file.create(file.path(dd, "somefile"))
  dir(".", recursive=TRUE)
  stopifnot(unlink("dir?") == 1) # not an error
  unlink("dir?", recursive = TRUE)
  stopifnot(file.exists(dd) == c(FALSE, FALSE, FALSE, TRUE))
  unlink("*dir*", recursive = TRUE)
  stopifnot(!file.exists(dd))

  # Windows needs short path names for leading spaces
  dir.create(" test")
  dir(".", recursive=TRUE)
  unlink(" test", recursive = TRUE)
  stopifnot(!file.exists(" test"))
  setwd(owd)


If I comment out the above section of the tests, the rest of the tests pass.
However, running as it is intended, the "reg-tests-1.R" test does fail.  Here is
the output that is generated from "make check":

running code in 'reg-tests-1.R' ...*** Error code 1
The following command caused the error:
LC_ALL=C SRCDIR=. R_DEFAULT_PACKAGES= ../bin/R --vanilla < reg-tests-1.R >
reg-tests-1.Rout 2>&1 || (mv reg-tests-1.Rout reg-tests-1.Rout.fail && exit 1)
make: Fatal error: Command failed for target `reg-tests-1.Rout'
Current working directory /apps/local/src/R-2.6.0/tests
*** Error code 1
The following command caused the error:
make reg-tests-1.Rout reg-tests-2.Rout reg-IO.Rout reg-IO2.Rout  reg-plot.Rout
reg-S4.Rout  RVAL_IF_DIFF=1
make: Fatal error: Command failed for target `test-Reg'
Current working directory /apps/local/src/R-2.6.0/tests
*** Error code 1
The following command caused the error:
for name in Examples Specific Reg Internet; do \
  make test-${name} || exit 1; \
done
make: Fatal error: Command failed for target `test-all-basics'
Current working directory /apps/local/src/R-2.6.0/tests
*** Error code 1
The following command caused the error:
(cd tests && make check)
make: Fatal error: Command failed for target `check'



And here are the final entries in the "reg-tests-1.Rout.fail" file:

> ## regression tests for unlink and wildcards
> owd <- setwd(tempdir())
> f <- c("ftest1", "ftest2", "ftestmore", "ftest&more")
> file.create(f)
[1] TRUE TRUE TRUE TRUE
> stopifnot(file.exists(f))
> unlink("ftest?")
> stopifnot(file.exists(f) == c(FALSE, FALSE, TRUE, TRUE))
> unlink("ftest*", recursive = TRUE)
> stopifnot(!file.exists(f))
> 
> stopifnot(unlink("no_such_file") == 0) # not an error
> 
> dd <- c("dir1", "dir2", "dirs", "moredirs")
> for(d in dd) dir.create(d)
> dir(".")
[1] "41c6167e" "dir1" "dir2" "dirs" "file21ed4192"
[6] "file281327c9" "moredirs"
> file.create(file.path(dd, "somefile"))
[1] TRUE TRUE TRUE TRUE
> dir(".", recursive=TRUE)
[1] "41c6167e"  "dir1/somefile" "dir2/somefile"
[4] "dirs/somefile" "file21ed4192"  "file281327c9" 
[7] "moredirs/somefile"
> stopifnot(unlink("dir?") == 1) # not an error
Error: unlink("dir?") == 1 is not TRUE
Execution halted
rm: Cannot remove any directory in the path of the current working directory
/tmp/RtmpBLKy4b



Is this safe to ignore?  The empty directory that it is complaining about
(/tmp/RtmpBLKy4b) cannot be removed using "rm -r", but can be removed using
the UNIX "unlink" command.


Again, version 2.5.1 builds and checks just fine, but the above tests fail when
I attempt to build/check either 2.6.0 or 2.6.1.


In case it is any help, here are the configure options that I set for all three
builds:

  ./configure --prefix=/usr/local/R \
    --with-blas=sunperf \
    --with-lapack \
    --with-tcl-config=/usr/local/lib/tclConfig.sh \
    --with-tk-config=/usr/local/lib/tkConfig.sh \
    R_PAPERSIZE=letter \
    CC=/opt/SUNWspro/bin/cc \
    CFLAGS="-mt -ftrap=%none -xarch=sparcvis" \
    LDFLAGS="-L/usr/local/lib -R/usr/local/lib" \
    CXX=/opt/SUNWspro/bin/CC \
    CXXFLAGS="-mt -ftrap=%none -xarch=sparcvis" \
    F77=/opt/SUNWspro/bin/f95 \
    F95=/opt/SUNWspro/bin/f95 \
    FFLAGS="-mt -ftrap=%none -xarch=sparcvis" \
    FC=/opt/SUNWspro/bin/f95 \
    FCFLAGS="-mt -ftrap=%none -xarch=sparcvis" \
    CPICFLAGS=-xcode=pic32 \
    CPPFLAGS="-I/usr/local/include" \
    SHLIB_CXXLDFLAGS="-G -lCstd"



For now, I think I will continue to use version 2.5.1, as I am able to
build/test that version and I know it works.  However, I would like to upgrade
at some point and just thought I would make you aware of the failed test on the
Sun Solaris 10 (SPARC) platform.

Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Luke Tierney
On Fri, 7 Dec 2007, Duncan Murdoch wrote:

> On 12/7/2007 8:10 AM, Peter Dalgaard wrote:
>> Ben Bolker wrote:
>>>   At this point I'd just like to advertise the "bbmle" package
>>> (on CRAN) for those who respectfully disagree, as I do, with Peter over
>>> this issue.  I have added a data= argument to my version
>>> of the function that allows other variables to be passed
>>> to the objective function.  It seems to me that this is perfectly
>>> in line with the way that other modeling functions in R
>>> behave.
>>>
>> This is at least cleaner than abusing the "fixed" argument. As you know,
>> I have reservations, one of which is that it is not a given that I want
>> it to behave just like other modeling functions, e.g. a likelihood
>> function might refer to more than one data set, and/or data that are not
>> structured in the traditional data frame format. The design needs more
>> thought than just adding arguments.
>
> We should allow more general things to be passed as data arguments in
> cases where it makes sense.  For example a list with names or an
> environment would be a reasonable way to pass data that doesn't fit into
> a data frame.
>
>> I still prefer a design based on a plain likelihood function. Then we can
>> discuss how to construct such a function so that the data are
>> incorporated in a flexible way.  There are many ways to do this; I've
>> shown one, here's another:
>>
>>> f <- function(lambda) -sum(dpois(x, lambda, log=T))
>>> d <- data.frame(x=rpois(1, 12.34))
>>> environment(f)<-evalq(environment(),d)
>
> We really need to expand as.environment, so that it can convert data
> frames into environments.  You should be able to say:
>
> environment(f) <- as.environment(d)
>
> and get the same result as
>
> environment(f)<-evalq(environment(),d)
>
> But I'd prefer to avoid the necessity for users to manipulate the
> environment of a function.  I think the pattern
>
> model( f, data=d )
>
> being implemented internally as
>
> environment(f) <- as.environment(d, parent = environment(f))
>
> is very nice and general.  It makes things like cross-validation,
> bootstrapping, etc. conceptually cleaner:  keep the same
> formula/function f, but manipulate the data and see what happens.
> It does have problems when d is an environment that already has a
> parent, but I think a reasonable meaning in that case would be to copy
> its contents into a new environment with the new parent set.
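
A minimal sketch of the pattern Duncan describes (a hypothetical model()
helper; since the extended as.environment() does not exist yet, the
environment is built by hand):

  model <- function(f, data) {
    e <- new.env(parent = environment(f))
    for (n in names(data)) assign(n, data[[n]], envir = e)
    environment(f) <- e
    f
  }

  ## with f and d as in the example above:
  ## f2 <- model(f, data = d)
  ## mle(f2, start = list(lambda = 10))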
>
> Duncan Murdoch
>
>
>>> mle(f, start=list(lambda=10))
>>
>> Call:
>> mle(minuslogl = f, start = list(lambda = 10))
>>
>> Coefficients:
>>  lambda
>> 12.3402
>>
>> It is not at all an unlikely design to have mle() as a generic function
>> which works on many kinds of objects, the default method being
>> function(object, ...) mle(minuslogl(object)), where minuslogl is an extractor
>> function returning (tada!) the negative log likelihood function.
>>>   (My version also has a cool formula interface and other
>>> bells and whistles, and I would love to get feedback from other
>>> useRs about it.)
>>>
>>>cheers
>>> Ben Bolker
>>>
>>>
>>
>>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

-- 
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                  Phone: 319-335-3386
Department of Statistics and        Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                  email: [EMAIL PROTECTED]
Iowa City, IA 52242                 WWW:   http://www.stat.uiowa.edu

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] (PR#10500) Bug#454678: followup

2007-12-07 Thread Dirk Eddelbuettel

On 7 December 2007 at 17:30, [EMAIL PROTECTED] wrote:
| [I was overlooked on the CC. Hopefully this message does not create a
| new bug report.]

[ That was my bad, but I did send you a forwarded copy a few hours ago when I
noticed this. ]
 
| > Prof Brian Ripley wrote:
| > I would say this was user error (insisting on editing non-existent
| > rownames), although the argument is documented.  You could argue that
| > there are implicit rownames, but they would be 1, 2 ... not row1, row2
| >   And rownames(mat) is NULL.
| >
| > For an interactive function the best solution seems to be to throw an
| > error when the user asks for the impossible.
| >
| > I'll fix it for 2.7.0: it certainly isn't 'important' as it has gone
| > undiscovered for many years, and edit.matrix is itself little used.
| >
| >
| > BTW, 1:dim(names)[1] is dangerous: it could be 1:0.  That was the
| > motivation for seq_len.
| 
| I would agree that it is a rare user error, but my original mistake was
| a little more benign than the one that is depicted in the bug report. I
| just forgot to call rownames()<- before calling edit(); that could have
| happened to anyone. Perhaps one reason why this issue has not been
| reported before is that in 2.5.1 at least, it produces an error message
| rather than crashing (which I noted in the original bug report to Debian
| but that was a little hard to see by the time it got forwarded to
| R-bugs). In some ways, I think throwing an error in 2.7.0 would be
| better than the behavior of edit.data.frame() in this case, which is to
| add row names of 1, 2, ... . -- Thanks, Ben

Having first provided a (rough) patch, and having had some more time to
ponder the issue, I have decided to call it a non-bug as far as Debian is
concerned (and you even commented that it belonged more in R's BTS). This
message closes the bug report there.

I am also with Brian on the issue at large -- it is a user error as you do
have to force the TRUE state leading to the segfault.

Anyway, a sufficient amount of time has now been spent with the upshot that
it will behave better in the corner case once 2.7.0 is out.  Sounds good to
me.

Thanks to Brian Ripley for the follow-up on R Core's behalf, and thanks to
Ben for reporting the, err, 'issue'.

Cheers, Dirk

-- 
Three out of two people have difficulties with fractions.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Peter Dalgaard
Luke Tierney wrote:

 [misc snippage]
>>
>> But I'd prefer to avoid the necessity for users to manipulate the
>> environment of a function.  I think the pattern
>>
>> model( f, data=d )
>
> For working at the general likelihood level I think it is better to
> encourage the approach of defining likelihood constructor functions.
> The problem with using f, data is that you need to match the names
> used in f and in data, so either you have to explicitly write out f
> with the names you have in data or you have to modify data to use the
> names f likes -- in the running example think
>
> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> d <- data.frame(y=rpois(1, 12.34))
>
> somebody has to connect up the x in f with the y in d. 
[more snippage]

That's not really worse than having to match the names in a model 
formula to the names of the data frame in lm(), is it?

The thing that I'm looking for in these matters is a structure which 
allows us to operate on likelihood functions in a rational way, e.g. 
reparametrize them, join multiple likelihoods with some parameters in 
common, or integrate them. The join operation is illustrative: You can 
easily do 

negljoint <- function(alpha, beta, gamma, delta)
negl1(alpha, beta, gamma) + negl2(beta, gamma, delta)

and with a bit of diligence, this could be the result of Join(negl1, 
negl2). But if the convention is that likelihoods have their data 
as an argument, you also need to automatically define a data 
argument for negljoint (presumably a list of two) and organize that the 
calls to negl1 and negl2 contain the appropriate subdata. It is the 
sort of thing that might be doable, but you'd rather do without.
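
For concreteness, here is one rough way Join() might be sketched under the
environment-capture convention (a sketch only, assuming all parameters are
passed by name; a real version would also have to build proper formals so
that mle() can see the parameter names):

  Join <- function(negl1, negl2) {
    p1 <- names(formals(negl1))
    p2 <- names(formals(negl2))
    function(...) {
      args <- list(...)
      do.call(negl1, args[p1]) + do.call(negl2, args[p2])
    }
  }

  ## negljoint <- Join(negl1, negl2)
  ## negljoint(alpha = 1, beta = 2, gamma = 3, delta = 4)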

-pd

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Antonio, Fabio Di Narzo
2007/12/7, Ben Bolker <[EMAIL PROTECTED]>:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Luke Tierney wrote:
> > On Fri, 7 Dec 2007, Duncan Murdoch wrote:
> >
> >
> >
> > For working at the general likelihood level I think it is better to
> > encourage the approach of defining likelihood constructor functions.
> > The problem with using f, data is that you need to match the names
> > used in f and in data, so either you have to explicitly write out f
> > with the names you have in data or you have to modify data to use the
> > names f likes -- in the running example think
> >
> > f <- function(lambda) -sum(dpois(x, lambda, log=T))
> > d <- data.frame(y=rpois(1, 12.34))
> >
> > somebody has to connect up the x in f with the y in d. With a negative
> > log likelihood constructor defined, for example, as
> >
> > makePoissonNegLogLikelihood <- function(x)
> > function(lambda) -sum(dpois(x, lambda, log=T))
> >
> > this happens naturally with
> >
> > makePoissonNegLogLikelihood(d$y)
> >
> >
>
>   I hate to sound like a jerk, but I do hope that in the end we come
> up with a solution that will still be accessible to people who don't
> quite have the concept of writing functions to produce functions.  I
> feel it is "natural" for people who have multiple data sets to have the
> variables named similarly in different data sets.  All of the
> constructor stuff is still accessible to anyone who wants to use the
> function that way ... is there any way to do a cheesy default
> constructor that is just equivalent to taking the likelihood function
> and arranging for it to be evaluated in an environment containing
> the data?  That way if "nllmaker" below were just a formula
> or a log-likelihood function it could still work ...

I don't really agree with this.
I find it really natural to write functions which build other functions,
to handle the data-dependency problem in a clean way -- much more natural
than manipulating function environments.
As a useR, I think that if I'm able to write a likelihood function myself:
As a useR, I think that if I'm able to write a likelihood function myself:

data <- whatever
negloglik <- function(theta)
  a + very * complicated / function - of %% theta %*% and %o% data

to be used in mle, I'm also good at abstracting it a bit this way:

nllmaker <- function(data)
  function(theta)
a + very * complicated / function - of %% theta %*% and %o% data

negloglik <- nllmaker(whatever)

don't you think? I use this kind of trick routinely for simulations.
In general, I think functional style should be emphasized more in R coding.
In fact, I like a lot the recent introduction of some higher-order
functions in the base package (Reduce, Filter, Map).
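
For instance, combining the constructor style with Map (a small sketch,
assuming the nllmaker above and a list of data sets):

  datasets <- list(rpois(50, 3), rpois(50, 10))
  neglls <- Map(nllmaker, datasets)  # one negative log-likelihood per data set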

Bests,
Antonio, Fabio.
>
>   [snip]
> > Both (simple) bootstrapping and (simple leave-one-out) cross-validation
> > require a data structure with a notion of cases, which is much more
> > restrictive than the context in which mle can be used.  A more generic
> > approach to bootstrapping that might fit closer to the level of
> > generality of mle might be parameterized in terms of a negative log
> > likelihood constructor, a starting value constructor, and a resampling
> > function, with a single iteration implemented something like
> >
> > mleboot1 <- function(nllmaker, start, resample)  {
> > newdata <- resample()
> > newstart <- do.call(start, newdata)
> > nllfun <- do.call(nllmaker, newdata)
> > mle(nllfun, start = newstart)
> > }
> >
> > This would leave decisions on the resampling method and data structure
> > up to the user. Something similar could be done with K-fold CV.
> >
> > luke
> >
> >
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.6 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iD8DBQFHWcS1c5UpGjwzenMRAig2AJ9iTzhI1p8tBb7Q15jgT4nA+Zds+gCgggc2
> sI2que28Hl1M5cVGa+anEL0=
> =hCiS
> -END PGP SIGNATURE-
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>


-- 
Antonio, Fabio Di Narzo
Ph.D. student at
Department of Statistical Sciences
University of Bologna, Italy

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggested modification to the 'mle' documentation?

2007-12-07 Thread Luke Tierney

On Fri, 7 Dec 2007, Gabor Grothendieck wrote:


On Dec 7, 2007 8:10 AM, Peter Dalgaard <[EMAIL PROTECTED]> wrote:

Ben Bolker wrote:

  At this point I'd just like to advertise the "bbmle" package
(on CRAN) for those who respectfully disagree, as I do, with Peter over
this issue.  I have added a data= argument to my version
of the function that allows other variables to be passed
to the objective function.  It seems to me that this is perfectly
in line with the way that other modeling functions in R
behave.


This is at least cleaner than abusing the "fixed" argument. As you know,
I have reservations, one of which is that it is not a given that I want
it to behave just like other modeling functions, e.g. a likelihood
function might refer to more than one data set, and/or data that are not
structured in the traditional data frame format. The design needs more
thought than just adding arguments.

I still prefer a design based on a plain likelihood function. Then we can
discuss how to construct such a function so that the data are
incorporated in a flexible way.  There are many ways to do this; I've
shown one, here's another:


f <- function(lambda) -sum(dpois(x, lambda, log=T))
d <- data.frame(x=rpois(1, 12.34))
environment(f)<-evalq(environment(),d)
mle(f, start=list(lambda=10))


Call:
mle(minuslogl = f, start = list(lambda = 10))

Coefficients:
 lambda
12.3402



The explicit environment manipulation is what I was referring to but


I make extensive use of lexical scoping in my programming and I NEVER
use explicit environment manipulation -- for me that is unreadable, and it
is not amenable to checking using things like codetools.  In the
example above all that is needed is to define x directly, e.g.

> f <- function(lambda) -sum(dpois(x, lambda, log=T))
> x <- rpois(1, 12.34)
> mle(f, start=list(lambda=10))

Call:
mle(minuslogl = f, start = list(lambda = 10))

Coefficients:
lambda
12.337

It isn't necessary to go through the data frame or environment
munging.  If you want to be able to work with likelihoods for several
data sets at once then you can either use different names for the
variables, like

x1 <- rpois(1, 12.34)
f1 <- function(lambda) -sum(dpois(x1, lambda, log=T))
x2 <- rpois(1, 12.34)
f2 <- function(lambda) -sum(dpois(x2, lambda, log=T))

If you are concerned that x, x1, x2 might have been redefined if you
come back to f1, f2 later (not an issue with typical usage inside a
function but it can be an issue at top level) then you can create a
closure that captures the particular data set you are using.  The
clean way to do this is with a function that creates the negative log
likelihood, e.g.

makePoissonNegLogLikelihood <- function(x)
function(lambda) -sum(dpois(x, lambda, log=T))

Then you can do

f <- makePoissonNegLogLikelihood(rpois(1, 12.34))
mle(f, start=list(lambda=10))

which I find much cleaner and easier to understand than environment
munging.  Once you are defining a likelihood constructor you can think
about things like making it a bit more efficient by calculating
sufficient statistics once, for example

makePoissonNegLogLikelihood <- function(x) {
sumX <- sum(x)
n <- length(x)
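# sum(x) of n iid Poisson(lambda) draws is itself Poisson(n * lambda), so
# this differs from the full negative log-likelihood only by an additive
# constant, which does not affect the maximization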
function(lambda) -dpois(sumX, n * lambda, log=T)
}

Best,

luke


we can simplify it using proto.  Create a proto object to hold
f and x, then pass the f in the proto object (rather than the
original f) to mle.  That works because proto automatically resets
the environment of f when it is added, avoiding the evalq.


set.seed(1)
library(proto)
f <- function(lambda) -sum(dpois(x, lambda, log=TRUE))
p <- proto(f = f, x = rpois(100, 12.34))
mle(p[["f"]], start = list(lambda = 10))


Call:
mle(minuslogl = p[["f"]], start = list(lambda = 10))

Coefficients:
 lambda
12.46000


It is not at all an unlikely design to have mle() as a generic function
which works on many kinds of objects, the default method being
function(object, ...) mle(minuslogl(object)), where minuslogl is an extractor
function returning (tada!) the negative log likelihood function.

  (My version also has a cool formula interface and other
bells and whistles, and I would love to get feedback from other
useRs about it.)

   cheers
Ben Bolker





--

  O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
 c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel



__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel



--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa  Phone: 319-335-3386

Re: [Rd] R-Cocoa Bridge

2007-12-07 Thread elw

>>> I had seen old posts on the list (circa 2002) regarding a Cocoa-R 
>>> bridge that was under development, but I can't find anything recent 
>>> about it. Does anyone know if this is available somewhere? If not, 
>>> does anyone have any experience/pointers calling R functions from 
>>> Cocoa?
>> 
>> The R builds on OS X produce an R.framework; you can probably bootstrap 
>> off of that without too much trouble.  [I haven't done much with it; 
>> wish I had time.]
>
> Unfortunately the R/ObjC bridge is not part of the framework yet. We're 
> working on it, but we need some more cleanup of the old code. The 
> current plan is to have it ready for R 2.7.0 (but you never know ...).


Good to know - looking forward to it.

Thanks, Simon!

--elijah

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] os x crash using rpanel and tcltk (PR#10495)

2007-12-07 Thread a . robotham
Here's the backtrace I get after it crashes:

Program received signal SIGTRAP, Trace/breakpoint trap.
0x90a61b09 in _objc_error ()
(gdb) bt
#0  0x90a61b09 in _objc_error ()
#1  0x90a61b40 in __objc_error ()
#2  0x90a601a0 in _freedHandler ()
#3  0x93442c64 in -[NSDocument close] ()
#4  0x00015f0b in -[RQuartz close] ()
#5  0x00016c93 in RQuartz_Close ()
#6  0x0039b6dc in removeDevice ()
#7  0x00015f3f in -[RQuartz windowShouldClose:] ()
#8  0x934429b2 in -[NSWindow _document:shouldClose:contextInfo:] ()
#9  0x93442437 in -[NSWindow __close] ()
#10 0x93382dbc in -[NSApplication sendAction:to:from:] ()
#11 0x93382d15 in -[NSControl sendAction:to:] ()
#12 0x93384ec1 in -[NSCell _sendActionFrom:] ()
#13 0x933976a1 in -[NSCell trackMouse:inRect:ofView:untilMouseUp:] ()
#14 0x933b5289 in -[NSButtonCell trackMouse:inRect:ofView:untilMouseUp:] ()
#15 0x933b4b39 in -[NSControl mouseDown:] ()
#16 0x934422f8 in -[_NSThemeWidget mouseDown:] ()
#17 0x933723e3 in -[NSWindow sendEvent:] ()
#18 0x93364384 in -[NSApplication sendEvent:] ()
#19 0x5151 in -[RController handleReadConsole:] ()
#20 0xc641 in Re_ReadConsole ()
#21 0x00015c76 in run_REngineRmainloop ()
#22 0xec4a in -[REngine runREPL] ()
#23 0x226d in main ()

Is this what is wanted?

Thanks for the gdb advice btw. I'm not sure if it's important, but
before it crashed I got lots of warnings similar to the following in
gdb when I attached the tcltk package:

warning: Could not find object file
"/Builds/Rdev-web/QA/Simon/R-build/tiger-ppc/R-2.6-branch/src/extra/blas/blas00.o"
- no debug information available for
"../../../../../R-2.6-branch/src/extra/blas/blas00.c".

Is there an issue with a previous version of R on my machine causing problems?

Thanks for the help.

Aaron

On 07/12/2007, Aaron Robotham <[EMAIL PROTECTED]> wrote:
> The machine in question is a black MacBook, a pretty standard setup
> with the rest of the details as listed in my first post (R 2.6.1, OS X
> 10.4.11). I'll give the backtrace a try and let you know the result.
>
> On 06/12/2007, Simon Urbanek <[EMAIL PROTECTED]> wrote:
> >
> > On Dec 6, 2007, at 9:26 AM, Aaron Robotham wrote:
> >
> > > I know of gdb but I'm not certain how to use it with Mac OS X's
> > > R.app, do you just do something like "gdb open R.app" in the
> > > terminal?.
> > >
> >
> > You can attach it once it's running - just type "attach R" in gdb
> > while R is running, then "c", then let it crash and then "bt".
> >
> > FWIW: I cannot reproduce the problem and I have tried 3 different
> > machines... (you didn't even tell us what machine type this is ...).
> >
> > Cheers,
> > Simon
> >
> >
> > > Interestingly I don't get this crash when I launch the X11 version of
> > > R through the terminal, so this would suggest the bug in question is
> > > to do with the actual Rgui in R.app. Hopefully this information might
> > > help to narrow down the problem. Any advice for using gdb on R.app
> > > would be appreciated; I couldn't find much in the way of guidance when
> > > searching online.
> > >
> > > thanks
> > >
> > > Aaron
> > >
> > > On 05/12/2007, Peter Dalgaard <[EMAIL PROTECTED]> wrote:
> > >> [EMAIL PROTECTED] wrote:
> > >>> Hello,
> > >>> I've recently discovered a persistent issue with rpanel when running
> > >>> R.app (2.6.1) on Mac OS X 10.4.11. tcltk and rpanel load without any
> > >>> apparent error, and the interactive panels appear to work as
> > >>> expected,
> > >>> however upon closing the panels rpanel has created I get
> > >>> catastrophic
> > >>> errors and R crashes completely. For the most part R manages to
> > >>> crash
> > >>> with dignity and work can be saved, but sometimes it will crash
> > >>> straight out. Below is an example of an entire work session (only
> > >>> base
> > >>> packages loaded) with the crash at the end typical of those
> > >>> encountered:
> > >>>
> > >>>
> >  library(tcltk)
> > 
> > >>> Loading Tcl/Tk interface ... done
> > >>>
> >  library(rpanel)
> > 
> > >>> Package `rpanel', version 1.0-4
> > >>> type help(rpanel) for summary information
> > >>>
> >  density.draw <- function(panel) {
> > 
> > >>> +   plot(density(panel$x, bw = panel$h))
> > >>> +   panel
> > >>> + }
> > >>>
> >  panel <- rp.control(x = rnorm(50))
> >  rp.slider(panel, h, 0.5, 5, log = TRUE, action = density.draw)
> > 
> > >>>
> > >>> *** caught bus error ***
> > >>> address 0x0, cause 'non-existent physical address'
> > >>>
> > >>> Possible actions:
> > >>> 1: abort (with core dump, if enabled)
> > >>> 2: normal R exit
> > >>> 3: exit R without saving workspace
> > >>> 4: exit R saving workspace
> > >>>
> > >>> All packages that are required are up to date, and I can find no
> > >>> evidence of similar issues from searching the mailing lists. Any
> > >>> suggestions would be appreciated.
> > >>>
> > >>>
> > >> Can you run this under gdb? A breakpoint in the error handler and a
> > >> backtrace could be valuable.
> > >>
> > >>> Aaron
> > >>>