[Rd] Having rbind dispatch to a different method than data.frame with arguments inheriting from data.frame

2007-03-20 Thread Ken Knoblauch

I have defined an S3 class that inherits from data.frame.  It has
some additional attributes and a particular structure that a modeling
function I wrote requires.  I want to write an rbind
method for this class which will combine the attributes correctly
as well as the data.frame components.  But the help page for
rbind indicates:

Note that this algorithm can result in calling the data frame method if
the arguments are all either data frames or vectors

and indeed, the data.frame method is called unless I explicitly
invoke

rbind.mymethod(mm1, mm2)

How can I get dispatch to mymethod rather than data.frame?

I can imagine stripping the data.frame class before the call,
but this seems awkward and I am hoping that there is a better way.
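
To make the setup concrete, here is a minimal sketch (class, attribute
and object names are only placeholders, not the real code):

mm1 <- structure(data.frame(x = 1:2), class = c("mymethod", "data.frame"),
                 extra = "A")
mm2 <- structure(data.frame(x = 3:4), class = c("mymethod", "data.frame"),
                 extra = "B")

rbind.mymethod <- function(..., deparse.level = 1) {
    out <- rbind.data.frame(...)        # combine the data.frame parts
    attr(out, "extra") <- unlist(lapply(list(...), attr, "extra"))
    class(out) <- c("mymethod", "data.frame")
    out
}

rbind(mm1, mm2)           # dispatches to the data.frame method, as described above
rbind.mymethod(mm1, mm2)  # only the explicit call keeps the extra attribute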

Thank you.



-- 
Ken Knoblauch
Inserm U846
Institut Cellule Souche et Cerveau
Département Neurosciences Intégratives
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.pizzerialesgemeaux.com/u846/

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] -std=c99 and inline semantics

2007-03-20 Thread Prof Brian Ripley
As even GCC 4.2.0 is not released yet, we will make changes at an
appropriate time.  The GNU and C99 semantics for 'inline' are known to be
incompatible.

From src/include/Rinlinedfuns.h:

/* this header is always to be included from others.
It is only called if COMPILING_R is defined (in util.c) or
from GNU C systems.

There are different conventions for inlining across compilation units.
We pro tem only use the GCC one.  See
http://www.greenend.org.uk/rjk/2003/03/inline.html
*/

and note the 'pro tem'.

On Mon, 19 Mar 2007, Marcus G. Daniels wrote:

> Hi,
>
> I noticed that with the GCC trunk (4.3.0), the semantics of "extern
> inline" have reversed.
> The net result is that R will build without the usual -std=gnu99 but it
> won't with it.
> Many multiple-definition errors result otherwise.
>
> Marcus
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] nclass.scott() and nclass.FD() {Re: [R] truehist bug?}

2007-03-20 Thread Martin Maechler
[This has become entirely a topic for 'R-devel'; hence I'm
 diverting it there, keeping R-help in CC just this once; please follow up
 only on R-devel.]

> "MM" == Martin Maechler <[EMAIL PROTECTED]>
> on Tue, 20 Mar 2007 08:49:16 +0100 writes:

> "Gad" == Gad Abraham <[EMAIL PROTECTED]>
> on Tue, 20 Mar 2007 17:02:18 +1100 writes:

Gad> Hi,
Gad> Is this a bug in truehist()?

>>> library(MASS)
>>> x <- rep(1, 10)
>>> truehist(x)
Gad> Error in pretty(data, nbins) : invalid 'n' value

MM> You get something similar though slightly more helpful
MM> from
MM>    hist(x, "scott")

MM> which then uses the same method for determining the number of bins /
MM> classes for the histogram.

MM> I'd say the main "bug" is in   
MM> nclass.scott()   [ and  also nclass.FD() ]

MM> which do not work when  var(x) == 0  as in this case.
MM> One could argue that  

MM> 1) truehist(x) should not use "scott" as
MM> default when var(x) == 0   {hence a buglet in truehist()}

MM> and either

MM> 2) both hist() and truehist() should produce a better error
MM> message when "scott" (or "FD") is used explicitly and var(x) == 0

MM> or, rather IMO,

MM> 3) nclass.scott(x) and nclass.FD(x) should be extended to return a 
MM> non-negative integer even when  var(x) == 0

after some further thought,
I'm proposing to adopt '3)'  {only; '1)' becomes unneeded}
with the following new code  which is back-compatible for the
case where 'h > 0' and does something reasonable for the case h == 0 :

nclass.scott <- function(x)
{
    h <- 3.5 * sqrt(stats::var(x)) * length(x)^(-1/3)
    if (h > 0) ceiling(diff(range(x))/h) else 1L
}

nclass.FD <- function(x)
{
    h <- stats::IQR(x)
    if (h == 0) h <- stats::mad(x, constant = 2)  # c=2: consistent with IQR
    if (h > 0) ceiling(diff(range(x))/(2 * h * length(x)^(-1/3))) else 1L
}
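
For the constant-data case from the report, this then gives a usable
number of classes instead of an error (a quick check, assuming the
definitions above):

x <- rep(1, 10)
nclass.scott(x)  # 1  (the old code gave NaN from 0/0, hence the pretty() error)
nclass.FD(x)     # 1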


Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Symbol and Ellipsis problems

2007-03-20 Thread Thomas McCallum
Hi Everyone,

When I have a load of functions which have various arguments passed
via the ellipsis argument, the arguments keep getting assigned as
symbols, making them unusable inside the function.  My current
workaround involves using do.call, but this is rather cumbersome.

Does anyone know why it suddenly changes the types to symbol, and if
there is a way to get the actual data pointed to by the symbol?  (I
have tried eval but that does not work, and most functions just treat a
symbol as a string.)

(An example which shows the type conversion is given below with the
output -- the key is the "dataX=data" part, which makes the object
'data' get passed as a symbol and not as the actual data.)

Many thanks

Tom

EXAMPLE CODE
data=c(1,2,3,4,5,6,7,8,9);

x <- function( ... ) {
args <- list();
extras <- match.call(expand.dots = FALSE)$...;
for( i in names(extras) ) {
args[[ i ]] <- extras[[ i ]];
print(args[[i]]);
print(typeof(extras[[i]]));
}


}

cat("TYPE OF DATA:");
print(typeof(data));
x(dataX=data);

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Symbol and Ellipsis problems

2007-03-20 Thread Duncan Murdoch
On 3/20/2007 6:36 AM, Thomas McCallum wrote:
> Hi Everyone,
> 
> When I have a load of functions which have various arguments passed
> via the ellipsis argument, the arguments keep getting assigned as
> symbols, making them unusable inside the function.  My current
> workaround involves using do.call, but this is rather cumbersome.
> 
> Does anyone know why it suddenly changes the types to symbol, and if
> there is a way to get the actual data pointed to by the symbol?  (I
> have tried eval but that does not work, and most functions just treat a
> symbol as a string.)
> 
> (An example which shows the type conversion is given below with the
> output -- the key is the "dataX=data" part, which makes the object
> 'data' get passed as a symbol and not as the actual data.)

match.call() doesn't evaluate the args, it just shows you the 
unevaluated call.  If you print your "extras" variable in your function, 
you'll see

$dataX
data

because you called the function with dataX=data.  If you'd called it as
x(dataY = 1+2) you'd see

1 + 2
[1] "language"

for the same reason.

If you want to evaluate the ... args, use list(...) instead of match.call.
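
For example, a sketch of the posted function rewritten that way:

x <- function(...) {
    args <- list(...)              # evaluates the ... arguments
    for (nm in names(args)) {
        print(args[[nm]])
        print(typeof(args[[nm]]))  # "double" for dataX = data
    }
}
x(dataX = c(1, 2, 3, 4, 5, 6, 7, 8, 9))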

Duncan Murdoch

> Many thanks
> 
> Tom
> 
> EXAMPLE CODE
> data=c(1,2,3,4,5,6,7,8,9);
> 
> x <- function( ... ) {
> args <- list();
> extras <- match.call(expand.dots = FALSE)$...;
> for( i in names(extras) ) {
> args[[ i ]] <- extras[[ i ]];
> print(args[[i]]);
> print(typeof(extras[[i]]));
> }
> 
> 
> }
> 
> cat("TYPE OF DATA:");
> print(typeof(data));
> x(dataX=data);
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Carriage returns and Sweave output

2007-03-20 Thread Ernest Turro

On 20 Mar 2007, at 07:53, Martin Maechler wrote:

>> "Wolfi" == Wolfgang Huber <[EMAIL PROTECTED]>
>> on Mon, 19 Mar 2007 15:38:00 + writes:
>
>>> the problem with results=hide is that it suppresses everything. I  
>>> just
>>> need Sweave to suppress strings ending in '\r'...
>
>
> Wolfi> Dear Ernest,
>
> Wolfi> IMHO it is good practice to make the printing of these  
> progress reports
> Wolfi> ("sweep 4 of 1024\r") optional and produce them only if  
> the user calls
> Wolfi> your function with, say, "verbose=TRUE",
>
> I strongly agree.
>
> Wolfi> and furthermore set the default value of the
> Wolfi> 'verbose' argument to "verbose=interactive()".
>
> or -- typically my choice -- to  'verbose = getOption("verbose")'
>
> Martin Maechler, ETH Zurich
>
> Wolfi> Best wishes
> Wolfi> Wolfgang
>
>  []
>

I agree that users should be free to choose the level of verbosity.  
Here, I want to show the verbose output and print it onto the tex  
file using Sweave to give users a good idea of what happens. What I  
don't want is countless lines being printed because \r is being  
interpreted as \n ...

Thanks,

Ernest






 Ernest Turro wrote:
> Dear all,
> I have a code chunk in my Rnw file that, when executed, outputs
> carriage return characters ('\r') to inform on the progress (e.g.
> "sweep 4 of 1024\r"). But Sweave interprets this as a newline
> character, and therefore I get countless pages of output in my
> vignette where I only really want one line. Any ideas?
> Thanks
> E

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Carriage returns and Sweave output

2007-03-20 Thread Douglas Bates
On 3/20/07, Ernest Turro <[EMAIL PROTECTED]> wrote:
>
> On 20 Mar 2007, at 07:53, Martin Maechler wrote:
>
> >> "Wolfi" == Wolfgang Huber <[EMAIL PROTECTED]>
> >> on Mon, 19 Mar 2007 15:38:00 + writes:
> >
> >>> the problem with results=hide is that it suppresses everything. I
> >>> just
> >>> need Sweave to suppress strings ending in '\r'...
> >
> >
> > Wolfi> Dear Ernest,
> >
> > Wolfi> IMHO it is good practice to make the printing of these
> > progress reports
> > Wolfi> ("sweep 4 of 1024\r") optional and produce them only if
> > the user calls
> > Wolfi> your function with, say, "verbose=TRUE",
> >
> > I strongly agree.
> >
> > Wolfi> and furthermore set the default value of the
> > Wolfi> 'verbose' argument to "verbose=interactive()".
> >
> > or -- typically my choice -- to  'verbose = getOption("verbose")'
> >
> > Martin Maechler, ETH Zurich
> >
> > Wolfi> Best wishes
> > Wolfi> Wolfgang
> >
> >  []
> >
>
> I agree that users should be free to choose the level of verbosity.
> Here, I want to show the verbose output and print it onto the tex
> file using Sweave to give users a good idea of what happens. What I
> don't want is countless lines being printed because \r is being
> interpreted as \n ...

In cases like this capture.output() is your friend.  Write one code
chunk with results=hide,echo=FALSE that uses capture.output to trap
the desired output as character strings then use string manipulation
functions to do the desired replacement.  A second code chunk with
eval=FALSE shows the code that apparently produces the output and a
third code chunk with echo=FALSE uses cat on the manipulated character
strings with quote=FALSE to show what apparently was produced.
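
In outline, the three chunks might look like this (only a sketch;
runSweeps() stands in for the real verbose function, and the clean-up
here simply keeps the final progress line):

<<capture, results=hide, echo=FALSE>>=
raw <- capture.output(runSweeps(n = 1024))
txt <- unlist(strsplit(raw, "\r", fixed = TRUE))  # split on the carriage returns
shown <- tail(txt, 1)                             # keep only the final progress line
@

<<display, eval=FALSE>>=
runSweeps(n = 1024)
@

<<output, echo=FALSE>>=
cat(shown, sep = "\n")
@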

>  Ernest Turro wrote:
> > Dear all,
> > I have a code chunk in my Rnw file that, when executed, outputs
> > carriage return characters ('\r') to inform on the progress (e.g.
> > "sweep 4 of 1024\r"). But Sweave interprets this as a newline
> > character, and therefore I get countless pages of output in my
> > vignette where I only really want one line. Any ideas?
> > Thanks
> > E
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R 2.5.0 devel try issue in conjuntion with S4 method dispatch

2007-03-20 Thread ml-it-r-devel

Peter Dalgaard wrote:
> Matthias Burger wrote:
>> Hi Seth,
>>   
> 
[...]
>>   
> I have just committed my variation of Seth's patch, so please check the 
> current r-devel too.

For the record:
With R 2.5.0 devel (2007-03-18 r40854)
the issue I was concerned about has been resolved.

Thanks to all of you!

Regards,

  Matthias

> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 


-- 
Matthias Burger             Project Manager / Biostatistician
Epigenomics AG              Kleine Praesidentenstr. 1    10178 Berlin, Germany
phone: +49-30-24345-371     fax: +49-30-24345-555
http://www.epigenomics.com  [EMAIL PROTECTED]
--
Epigenomics AG Berlin   Amtsgericht Charlottenburg HRB 75861
Vorstand:   Geert Nygaard (CEO/Vorsitzender),  Dr. Kurt Berlin (CSO)
  Oliver Schacht PhD (CFO),  Christian Piepenbrock (COO)
Aufsichtsrat:   Prof. Dr. Dr. hc. Rolf Krebs (Chairman/Vorsitzender)

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread charles . dupont
Full_Name: Charles Dupont
Version: 2.4.1
OS: linux 2.6.18
Submission from: (NULL) (160.129.129.136)


'format.pval' has a major limitation in its implementation. For example
suppose a person had a vector like 'a' and the error being ±0.001.

> a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> format.pval(a, eps=0.01)

If that person wants to have the 'format.pval' output with 2 digits always
showing (like passing nsmall=2 to 'format'). That output would look like 
this.

[1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"

That output is currently impossible because format.pval can only 
produce output like this.

[1] "0.1""0.3""0.4""0.5""0.3""<0.01"


---
a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
format.pval(a, eps=0.01)

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread murdoch
On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
> Full_Name: Charles Dupont
> Version: 2.4.1
> OS: linux 2.6.18
> Submission from: (NULL) (160.129.129.136)
> 
> 
> 'format.pval' has a major limitation in its implementation. For example
> suppose a person had a vector like 'a' and the error being ±0.001.
> 
> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> > format.pval(a, eps=0.01)
> 
> If that person wants to have the 'format.pval' output with 2 digits always
> showing (like passing nsmall=2 to 'format'). That output would look like 
> this.
> 
> [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
> 
> That output is currently impossible because format.pval can only 
> produce output like this.
> 
> [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
> 
> 
> ---
> a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> format.pval(a, eps=0.01)

But there's a very easy workaround:

format.pval(c(0.12, a), eps=0.01)[-1]

gives you what you want (because the 0.12 forces two decimal place 
display on all values, and then the [-1] removes it).
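
Spelled out with the vector from the report (illustration only):

a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
format.pval(c(0.12, a), eps = 0.01)[-1]
## "0.10" "0.30" "0.40" "0.50" "0.30" "<0.01"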

Duncan Murdoch

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Gabor Grothendieck
On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
> > Full_Name: Charles Dupont
> > Version: 2.4.1
> > OS: linux 2.6.18
> > Submission from: (NULL) (160.129.129.136)
> >
> >
> > 'format.pval' has a major limitation in its implementation. For example
> > suppose a person had a vector like 'a' and the error being ±0.001.
> >
> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> > > format.pval(a, eps=0.01)
> >
> > If that person wants to have the 'format.pval' output with 2 digits always
> > showing (like passing nsmall=2 to 'format'). That output would look like
> > this.
> >
> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
> >
> > That output is currently impossible because format.pval can only
> > produce output like this.
> >
> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
> >
> >
> > ---
> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> > format.pval(a, eps=0.01)
>
> But there's a very easy workaround:
>
> format.pval(c(0.12, a), eps=0.01)[-1]
>
> gives you what you want (because the 0.12 forces two decimal place
> display on all values, and then the [-1] removes it).
>

Clever, but the problem would be that summary.lm, etc. call format.pval so the
user does not have a chance to do that.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Duncan Murdoch
On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
> On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
>> > Full_Name: Charles Dupont
>> > Version: 2.4.1
>> > OS: linux 2.6.18
>> > Submission from: (NULL) (160.129.129.136)
>> >
>> >
>> > 'format.pval' has a major limitation in its implementation. For example
>> > suppose a person had a vector like 'a' and the error being ±0.001.
>> >
>> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> > > format.pval(a, eps=0.01)
>> >
>> > If that person wants to have the 'format.pval' output with 2 digits always
>> > showing (like passing nsmall=2 to 'format'). That output would look like
>> > this.
>> >
>> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
>> >
>> > That output is currently impossible because format.pval can only
>> > produce output like this.
>> >
>> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
>> >
>> >
>> > ---
>> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> > format.pval(a, eps=0.01)
>>
>> But there's a very easy workaround:
>>
>> format.pval(c(0.12, a), eps=0.01)[-1]
>>
>> gives you what you want (because the 0.12 forces two decimal place
>> display on all values, and then the [-1] removes it).
>>
> 
> Clever, but the problem would be that summary.lm, etc. call format.pval so the
> user does not have a chance to do that.

I don't see how this is relevant.  summary.lm doesn't let you pass a new 
eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't 
help with the display in summary.lm.

I suppose we could track down every use of format.pval in every function 
in every package and add nsmall and eps as arguments to each of them, 
but that's just ridiculous.  People should accept the fact that R 
doesn't produce publication quality text, it just provides you with ways 
to produce that yourself.

Duncan Murdoch

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Gabor Grothendieck
On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
> > On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
> >> > Full_Name: Charles Dupont
> >> > Version: 2.4.1
> >> > OS: linux 2.6.18
> >> > Submission from: (NULL) (160.129.129.136)
> >> >
> >> >
> >> > 'format.pval' has a major limitation in its implementation. For example
> >> > suppose a person had a vector like 'a' and the error being ±0.001.
> >> >
> >> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> > > format.pval(a, eps=0.01)
> >> >
> >> > If that person wants to have the 'format.pval' output with 2 digits 
> >> > always
> >> > showing (like passing nsmall=2 to 'format'). That output would look like
> >> > this.
> >> >
> >> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
> >> >
> >> > That output is currently impossible because format.pval can only
> >> > produce output like this.
> >> >
> >> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
> >> >
> >> >
> >> > ---
> >> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> > format.pval(a, eps=0.01)
> >>
> >> But there's a very easy workaround:
> >>
> >> format.pval(c(0.12, a), eps=0.01)[-1]
> >>
> >> gives you what you want (because the 0.12 forces two decimal place
> >> display on all values, and then the [-1] removes it).
> >>
> >
> > Clever, but the problem would be that summary.lm, etc. call format.pval so 
> > the
> > user does not have a chance to do that.
>
> I don't see how this is relevant.  summary.lm doesn't let you pass a new
> eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't
> help with the display in summary.lm.
>
> I suppose we could track down every use of format.pval in every function
> in every package and add nsmall and eps as arguments to each of them,
> but that's just ridiculous.  People should accept the fact that R
> doesn't produce publication quality text, it just provides you with ways
> to produce that yourself.
>
> Duncan Murdoch
>

You are right in terms of my example which was not applicable but I
think in general that format.pval is used from within other routines rather than
directly by the user so the user may not have a chance to massage it
directly.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Duncan Murdoch
On 3/20/2007 1:40 PM, Gabor Grothendieck wrote:
> On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
>> On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
>> > On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
>> >> > Full_Name: Charles Dupont
>> >> > Version: 2.4.1
>> >> > OS: linux 2.6.18
>> >> > Submission from: (NULL) (160.129.129.136)
>> >> >
>> >> >
>> >> > 'format.pval' has a major limitation in its implementation. For example
>> >> > suppose a person had a vector like 'a' and the error being ±0.001.
>> >> >
>> >> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> >> > > format.pval(a, eps=0.01)
>> >> >
>> >> > If that person wants to have the 'format.pval' output with 2 digits 
>> >> > always
>> >> > showing (like passing nsmall=2 to 'format'). That output would look like
>> >> > this.
>> >> >
>> >> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
>> >> >
>> >> > That output is currently impossible because format.pval can only
>> >> > produce output like this.
>> >> >
>> >> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
>> >> >
>> >> >
>> >> > ---
>> >> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> >> > format.pval(a, eps=0.01)
>> >>
>> >> But there's a very easy workaround:
>> >>
>> >> format.pval(c(0.12, a), eps=0.01)[-1]
>> >>
>> >> gives you what you want (because the 0.12 forces two decimal place
>> >> display on all values, and then the [-1] removes it).
>> >>
>> >
>> > Clever, but the problem would be that summary.lm, etc. call format.pval so 
>> > the
>> > user does not have a chance to do that.
>>
>> I don't see how this is relevant.  summary.lm doesn't let you pass a new
>> eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't
>> help with the display in summary.lm.
>>
>> I suppose we could track down every use of format.pval in every function
>> in every package and add nsmall and eps as arguments to each of them,
>> but that's just ridiculous.  People should accept the fact that R
>> doesn't produce publication quality text, it just provides you with ways
>> to produce that yourself.
>>
>> Duncan Murdoch
>>
> 
> You are right in terms of my example which was not applicable but I
> think in general that format.pval is used from within other routines rather 
> than
> directly by the user so the user may not have a chance to massage it
> directly.

Right, but this means that it is more or less useless to change the
argument list for format.pval in the way Charles suggested, because all
of the existing uses of it would ignore the new parameters.

It would not be so difficult to change the behaviour of format.pval so
that for example "digits=2" implied the equivalent of "nsmall=2", but I
don't think that's a universally desirable change.

The difficulty here is that different people have different tastes for 
presentation-quality text.  Not everyone would agree that the version 
with trailing zeros is preferable to the one without.  R should be 
flexible enough to allow people to customize their displays, but not 
necessarily by having every print method flexible enough to satisfy 
every user:  sometimes users need to construct their own output formats.

Duncan Murdoch

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread hadley wickham
On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 3/20/2007 1:40 PM, Gabor Grothendieck wrote:
> > On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> >> On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
> >> > On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >> >> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
> >> >> > Full_Name: Charles Dupont
> >> >> > Version: 2.4.1
> >> >> > OS: linux 2.6.18
> >> >> > Submission from: (NULL) (160.129.129.136)
> >> >> >
> >> >> >
> >> >> > 'format.pval' has a major limitation in its implementation. For 
> >> >> > example
> >> >> > suppose a person had a vector like 'a' and the error being ±0.001.
> >> >> >
> >> >> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> >> > > format.pval(a, eps=0.01)
> >> >> >
> >> >> > If that person wants to have the 'format.pval' output with 2 digits 
> >> >> > always
> >> >> > showing (like passing nsmall=2 to 'format'). That output would look 
> >> >> > like
> >> >> > this.
> >> >> >
> >> >> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
> >> >> >
> >> >> > That output is currently impossible because format.pval can only
> >> >> > produce output like this.
> >> >> >
> >> >> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
> >> >> >
> >> >> >
> >> >> > ---
> >> >> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> >> > format.pval(a, eps=0.01)
> >> >>
> >> >> But there's a very easy workaround:
> >> >>
> >> >> format.pval(c(0.12, a), eps=0.01)[-1]
> >> >>
> >> >> gives you what you want (because the 0.12 forces two decimal place
> >> >> display on all values, and then the [-1] removes it).
> >> >>
> >> >
> >> > Clever, but the problem would be that summary.lm, etc. call format.pval 
> >> > so the
> >> > user does not have a chance to do that.
> >>
> >> I don't see how this is relevant.  summary.lm doesn't let you pass a new
> >> eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't
> >> help with the display in summary.lm.
> >>
> >> I suppose we could track down every use of format.pval in every function
> >> in every package and add nsmall and eps as arguments to each of them,
> >> but that's just ridiculous.  People should accept the fact that R
> >> doesn't produce publication quality text, it just provides you with ways
> >> to produce that yourself.
> >>
> >> Duncan Murdoch
> >>
> >
> > You are right in terms of my example which was not applicable but I
> > think in general that format.pval is used from within other routines rather 
> > than
> > directly by the user so the user may not have a chance to massage it
> > directly.
>
> Right, but this means that it is more or less useless to change the
> argument list for format.pvals in the way Charles suggested, because all
> of the existing uses of it would ignore the new parameters.
>
> It would not be so difficult to change the behaviour of format.pvals so
> that for example "digits=2" implied the equivalent of "nsmall=2", but I
> don't think that's a universally desirable change.
>
> The difficulty here is that different people have different tastes for
> presentation-quality text.  Not everyone would agree that the version
> with trailing zeros is preferable to the one without.  R should be
> flexible enough to allow people to customize their displays, but not
> necessarily by having every print method flexible enough to satisfy
> every user:  sometimes users need to construct their own output formats.

It would be interesting to take a similar approach to grid - return a
table object which could then be tweaked as necessary, rather than
having to build everything up from scratch.  A very big job though!

Hadley

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] PKG_CFLAGS/CFLAGS and PKG_CXXFLAGS/CXXFLAGS

2007-03-20 Thread Ernest Turro
Why is it that R places CFLAGS after PKG_CFLAGS and not before when  
compiling a package (e.g. through R CMD build pkg)? This can be  
problematic if, for instance, you want to use -O3, but -O2 is in  
R_HOME/etc/Makeconf. If -O2 (in CFLAGS) appears after -O3 (in  
PKG_CFLAGS), you are left with what you didn't want: -O2.

In R-exts, it says that "Flags which are set in file etc/Makeconf can  
be overridden by the environment variable MAKEFLAGS (at least for  
systems using GNU make), as in (Bourne shell syntax)" but this  
doesn't work if I set either MAKEFLAGS or CFLAGS/CXXFLAGS in my  
configure.ac script or package Makevars.

Does anyone have any ideas on how to reliably override the default  
CFLAGS/CXXFLAGS given in Makeconf?

Many thanks,

Ernest

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Carriage returns and Sweave output

2007-03-20 Thread Ernest Turro

On 20 Mar 2007, at 13:24, Douglas Bates wrote:

> [snip]

> In cases like this capture.output() is your friend.  Write one code
> chunk with results=hide,echo=FALSE that uses capture.output to trap
> the desired output as character strings then use string manipulation
> functions to do the desired replacement.  A second code chunk with
> eval=FALSE shows the code that apparently produces the output and a
> third code chunk with echo=FALSE uses cat on the manipulated character
> strings with quote=FALSE to show what apparently was produced.

That is extremely helpful. Thanks very much Douglas.

Ernest


>
>>  Ernest Turro wrote:
>> > Dear all,
>> > I have a code chunk in my Rnw file that, when executed, outputs
>> > carriage return characters ('\r') to inform on the progress  
>> (e.g.
>> > "sweep 4 of 1024\r"). But Sweave interprets this as a newline
>> > character, and therefore I get countless pages of output in my
>> > vignette where I only really want one line. Any ideas?
>> > Thanks
>> > E
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] PKG_CFLAGS/CFLAGS and PKG_CXXFLAGS/CXXFLAGS

2007-03-20 Thread Kasper Daniel Hansen

On Mar 20, 2007, at 7:58 PM, Ernest Turro wrote:

> Why is it that R places CFLAGS after PKG_CFLAGS and not before when
> compiling a package (e.g. through R CMD build pkg)? This can be
> problematic if, for instance, you want to use -O3, but -O2 is in
> R_HOME/etc/Makeconf. If -O2 (in CFLAGS) appears after -O3 (in
> PKG_CFLAGS), you are left with what you didn't want: -O2.
>
> In R-exts, it says that "Flags which are set in file etc/Makeconf can
> be overridden by the environment variable MAKEFLAGS (at least for
> systems using GNU make), as in (Bourne shell syntax)" but this
> doesn't work if I set either MAKEFLAGS or CFLAGS/CXXFLAGS in my
> configure.ac script or package Makevars.

In your example above you want to force the user to use a higher
optimization flag. But (s)he may have very valid reasons for not
doing so - and are you really sure that you are comfortable setting
-O3 on _all_ platforms? Also -O. is GCC specific so it does not work
for all compilers.

If a user really wants a super fast R (s)he will (should) compile it  
with -O3.

Having said that, I think it is problematic that one cannot   
_downgrade_ the optimization. I am maintaining a package including an  
external library (outside of my control) which does not work with -O2  
on some platforms, due to alignment problems.

> Does anyone have any ideas on how to reliably override the default
> CFLAGS/CXXFLAGS given in Makeconf?

I was given the following code some while ago by Simon Urbanek:

all: $(SHLIB)

MYCXXFLAGS=-O0

%.o: %.cpp
	$(CXX) $(ALL_CPPFLAGS) $(ALL_CXXFLAGS) $(MYCXXFLAGS) -c $< -o $@

(this is for C++, I imagine the syntax is straightforward for C). Put  
it in src/Makevars.

But as I said above, I think it is a bad idea to raise the  
optimization level for all users.

Kasper

> Many thanks,
>
> Ernest
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] PKG_CFLAGS/CFLAGS and PKG_CXXFLAGS/CXXFLAGS

2007-03-20 Thread Ernest Turro

On 20 Mar 2007, at 21:32, Kasper Daniel Hansen wrote:

>
> On Mar 20, 2007, at 7:58 PM, Ernest Turro wrote:
>
>> Why is it that R places CFLAGS after PKG_CFLAGS and not before when
>> compiling a package (e.g. through R CMD build pkg)? This can be
>> problematic if, for instance, you want to use -O3, but -O2 is in
>> R_HOME/etc/Makeconf. If -O2 (in CFLAGS) appears after -O3 (in
>> PKG_CFLAGS), you are left with what you didn't want: -O2.
>>
>> In R-exts, it says that "Flags which are set in file etc/Makeconf can
>> be overridden by the environment variable MAKEFLAGS (at least for
>> systems using GNU make), as in (Bourne shell syntax)" but this
>> doesn't work if I set either MAKEFLAGS or CFLAGS/CXXFLAGS in my
>> configure.ac script or package Makevars.
>
> In your example above you want to force the user to use a higher  
> optimization flag. But (s)he may have very valid reasons for not  
> doing so - and are you really sure that you are comfortable setting  
> -O3 on _all_ platforms? Also -O. is GCC specific so it does not  
> work for all compilers.

My configure script checks for GCC before setting -O3 (and -ffast-math).

>
> If a user really wants a super fast R (s)he will (should) compile  
> it with -O3.

I'm compiling an MCMC simulation package. It is very intensive and so
-O3 should definitely be the default level on systems with GCC.

>
> Having said that, I think it is problematic that one cannot   
> _downgrade_ the optimization. I am maintaining a package including  
> an external library (outside of my control) which does not work  
> with -O2 on some platforms, due to alignment problems.
>
>> Does anyone have any ideas on how to reliably override the default
>> CFLAGS/CXXFLAGS given in Makeconf?
>
> I was given the following code some while ago by Simon Urbanek:
>
> all: $(SHLIB)
>
> MYCXXFLAGS=-O0
>
> %.o: %.cpp
> 	$(CXX) $(ALL_CPPFLAGS) $(ALL_CXXFLAGS) $(MYCXXFLAGS) -c $< -o $@
>
> (this is for C++, I imagine the syntax is straightforward for C).  
> Put it in src/Makevars.

Thanks very much. That does indeed allow me to place my flags _after_  
the flags from R_HOME/etc/Makeconf. It would be nice, though, if the  
PKG_CXXFLAGS/PKG_CFLAGS were automatically placed _after_ CXXFLAGS/ 
CFLAGS by R... I vaguely recall the Windows version placing them in  
that order.

Cheers,

Ernest


>
> But as I said above, I think it is a bad idea to raise the  
> optimization level for all users.
>
> Kasper
>
>> Many thanks,
>>
>> Ernest
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] cbind() & rbind() for S4 objects -- 'Matrix' package changes

2007-03-20 Thread Martin Maechler
As some of you may have seen / heard in the past,
it is not possible to make cbind() and rbind() into proper S4
generic functions, since their first formal argument is '...'.
[ BTW: S3-methods for these of course only dispatch on the first
  argument which is also not really satisfactory in the context
  of many possible matrix classes.]

For this reason, after quite some discussion on R-core (and
maybe a bit on R-devel) about the options, since R-2.2.0 we have
had S4 generic functions cbind2() and rbind2() (and default methods)
in R's "methods" which are a version of cbind() and
rbind() respectively for two arguments (x,y)
 {and fixed 'deparse.level = 0' : the argument names are 'x' and 'y' and
  hence don't make sense to be used to construct column-names or
  row-names for rbind(), respectively.}

We have been defining methods for cbind2() and rbind2()
for the 'Matrix' classes in late summer 2005 as well.  So far so
good.

In addition, [see also  help(cbind2) ],
we have defined cbind() and rbind() functions which recursively call
cbind2() and rbind2(), more or less following John Chambers
proposal of dealing with such "(...)" argument functions.
These new recursively defined cbind() / rbind() functions
however have typically remained invisible in the methods package
[you can see them via  methods:::cbind  or  methods:::rbind ]
and have been ``activated'' --- replacing  base::cbind / rbind ---
only via an explicit or implicit call to
 methods:::bind_activation(TRUE)
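
In outline the idea is a wrapper around the two-argument generic; the
following is only a sketch (not the actual methods:::cbind code, written
iteratively rather than recursively, with cbindX as a placeholder name):

cbindX <- function(...) {
    args <- list(...)
    if (length(args) == 0L) return(NULL)
    Reduce(methods::cbind2, args)   # combine pairwise, left to right
}

library(Matrix)
m <- Matrix(1:6, 3, 2)
cbindX(m, m, m)   # each pairwise step dispatches a cbind2() method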

One reason I didn't dare to make them the default was that I
noticed they didn't behave identically to cbind() / rbind() in
all cases, though IIRC the rare difference was only in the dimnames
returned; further, being entirely written in R, and recursive,
they were slower than the mostly C-based fast  cbind() / rbind()
functions.

As some Bioconductor developers have recently found,
these versions of cbind() and rbind() that have been
automagically activated by loading the  Matrix package
can have a detrimental effect in some extreme cases,
e.g. when using
 do.call(cbind, list_of_length_1000)
because of the recursion and the many many calls to the S4
generic, each time searching for method dispatch ...
For the bioconductor applications and potentially for others using cbind() /
rbind() extensively, this can lead to unacceptable performance
loss just because loading 'Matrix' currently calls
 methods:::bind_activation(TRUE)

For this reason, we plan to refrain from doing this activation
on loading of Matrix, but propose to

1)  define and export
cBind <- methods:::cbind
rBind <- methods:::rbind

also do this for R-2.5.0 so that other useRs / packages
can start cBind() / rBind() in their code when they want to
have something that can become properly object-oriented

Possibly --- and this is the big  RFC (request for comments) ---

2) __ for 'Matrix' only __ also
   define and export
cbind <- methods:::cbind
rbind <- methods:::rbind

I currently see the possibilities of doing
 either '1)'
 or '1) and 2)'
 or less likely  '2) alone'

and like to get your feedback on this.

"1)" alone would have the considerable drawback for current
  Matrix useRs that their code / scripts which has been using
  cbind() and rbind() for "Matrix" (and "matrix" and "numeric")
  objects no longer works, but needs to be changed to use
rBind() and cBind()  *instead*

As soon as "2)" is done (in conjunction with "1)" or not),
those who need a very fast but non-OO version of cbind() / rbind()
need to call  base::cbind() or  base::rbind()  explicitly.
This however would not be necessary for packages with a NAMESPACE
since these import 'base' automagically and hence would use
base::cbind() automagically {unless they also import(Matrix)}.

We are quite interested in your feedback!

Martin Maechler and Doug Bates <[EMAIL PROTECTED]>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] [R-downunder] las with stripchart

2007-03-20 Thread John Maindonald
Hi Ross -
I believe I was wrong in thinking that passing via the ...
list to stripchart() was ever allowed.  Here are patches:

Add ... to the argument list
Add, at the beginning of the function:

pars <- list(...)

There are two calls to axis().  Modify these to:

   axis(1, at = at, labels = names(groups), las=pars$las)
   axis(2, at = at, labels = names(groups), las=pars$las)

Also col=pars$col and bg=pars$bg should probably get
passed in the call to points()

Are there also parameters that should be passed in the
call to title(), as Paul mooted in that July 2001 discussion?

For the earlier 2001 discussion, see

http://tolstoy.newcastle.edu.au/R/devel/01b/0089.html
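
With such a patch applied, an inline setting would then take effect,
e.g. (hypothetical usage; any grouped numeric data will do):

stripchart(split(OrchardSprays$decrease, OrchardSprays$treatment),
           vertical = TRUE, las = 2)   # 'las' passed through to axis()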

John Maindonald email: [EMAIL PROTECTED]
phone : +61 2 (6125) 3473    fax : +61 2 (6125) 5549
Centre for Mathematics & Its Applications, Room 1194,
John Dedman Mathematical Sciences Building (Building 27)
Australian National University, Canberra ACT 0200.


On 21 Mar 2007, at 6:50 AM, Ross Ihaka wrote:

> Paul Murrell wrote:
>> Hi
>> John Maindonald wrote:
>>> Hi Paul -
>>> Do you know why las can no longer be passed as a parameter
>>> to stripchart(), though of course it can be set using par.  I note
>>> that is is still available for dotchart().
>> When was 'las' allowed in stripchart()?  (it is missing back as  
>> far as
>> 2.2.1 at least)
>> stripchart() does not allow many par() settings inline at all.  My
>> suspicion is that this is oversight rather than design, but I  
>> don't know
>> who the original author was.
>
> 'twas I. This is almost certainly an oversight.  Patch anyone?
>
>
> -- 
> Ross Ihaka Email:  [EMAIL PROTECTED]
> Department of Statistics   Phone:  (64-9) 373-7599 x 85054
> University of Auckland Fax:(64-9) 373-7018
> Private Bag 92019, Auckland
> New Zealand

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] PKG_CFLAGS/CFLAGS and PKG_CXXFLAGS/CXXFLAGS

2007-03-20 Thread Dirk Eddelbuettel

On 20 March 2007 at 18:58, Ernest Turro wrote:
| Why is it that R places CFLAGS after PKG_CFLAGS and not before when  
| compiling a package (e.g. through R CMD build pkg)? This can be  
| problematic if, for instance, you want to use -O3, but -O2 is in  
| R_HOME/etc/Makeconf. If -O2 (in CFLAGS) appears after -O3 (in  
| PKG_CFLAGS), you are left with what you didn't want: -O2.
| 
| In R-exts, it says that "Flags which are set in file etc/Makeconf can  
| be overridden by the environment variable MAKEFLAGS (at least for  
| systems using GNU make), as in (Bourne shell syntax)" but this  
| doesn't work if I set either MAKEFLAGS or CFLAGS/CXXFLAGS in my  
| configure.ac script or package Makevars.
| 
| Does anyone have any ideas on how to reliably override the default  
| CFLAGS/CXXFLAGS given in Makeconf?

It's one of my token problems too for the Debian package builds. 

Often it is simply not possible to do this easily due to the automated
insertion 'at the wrong place' that you mention.

One way around is to set CFLAGS (or CXXFLAGS) inside a MAKEFLAGS variable. It
must be properly quoted -- which seems to lead to the restriction that you
get only one token at a time, see below.

I.e. the following was once useful when I needed to tone down the
optimization due to an architecture-specific optimisation bug:

MAKEFLAGS="FFLAGS=-O1" R CMD INSTALL -l $(debRlib) --clean .

An example of the 'one token only' problem is 

MAKEFLAGS="CXXFLAGS+=-I/some/path/some/header \
CXXFLAGS+=-I/some/other/header \
LDFLAGS+=-L/some/where \
LDFLAGS+=-llibfoo LDFLAGS+=-llibbar"  \
R CMD INSTALL foo

This issue has come up before, and e.g. Kurt has, IIRC, made some suggestions
about overrides below ~/.R.  In my particular case that wouldn't always help
as automated Debian builds are disconnected from individual user accounts.

Dirk

-- 
Hell, there are no rules here - we're trying to accomplish something. 
  -- Thomas A. Edison

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] cbind() & rbind() for S4 objects -- 'Matrix' package changes

2007-03-20 Thread Luke Tierney
On Tue, 20 Mar 2007, Martin Maechler wrote:

> As some of you may have seen / heard in the past,
> it is not possible to make cbind() and rbind() into proper S4
> generic functions, since their first formal argument is '...'.
> [ BTW: S3-methods for these of course only dispatch on the first
>  argument which is also not really satisfactory in the context
>  of many possible matrix classes.]
>
> For this reason, after quite some discussion on R-core (and
> maybe a bit on R-devel) about the options, since R-2.2.0 we have
> had S4 generic functions cbind2() and rbind2() (and default methods)
> in R's "methods" which are a version of cbind() and
> rbind() respectively for two arguments (x,y)
> {and fixed 'deparse.level = 0' : the argument names are 'x' and 'y' and
>  hence don't make sense to be used to construct column-names or
>  row-names for rbind(), respectively.}
>
> We have been defining methods for cbind2() and rbind2()
> for the 'Matrix' classes in late summer 2005 as well.  So far so
> good.
>
> In addition, [see also  help(cbind2) ],
> we have defined cbind() and rbind() functions which recursively call
> cbind2() and rbind2(), more or less following John Chambers
> proposal of dealing with such "(...)" argument functions.
> These new recursively defined cbind() / rbind() functions
> however have typically remained invisible in the methods package
> [you can see them via  methods:::cbind  or  methods:::rbind ]
> and have been ``activated'' --- replacing  base::cbind / rbind ---
> only via an explicit or implicit call to
> methods:::bind_activation(TRUE)
>
> One reason I didn't dare to make them the default was that I
> noticed they didn't behave identically to cbind() / rbind() in
> all cases, though IIRC the rare difference was only in the dimnames
> returned; further, being entirely written in R, and recursive,
> they were slower than the mostly C-based fast  cbind() / rbind()
> functions.
>
> As some Bioconductor developers have recently found,
> these versions of cbind() and rbind() that have been
> automagically activated by loading the  Matrix package
> can have a detrimental effect in some extreme cases,
> e.g. when using
> do.call(cbind, list_of_length_1000)
> because of the recursion and the many many calls to the S4
> generic, each time searching for method dispatch ...
> For the bioconductor applications and potentially for others using cbind() /
> rbind() extensively, this can lead to unacceptable performance
> loss just because loading 'Matrix' currently calls
> methods:::bind_activation(TRUE)

The recursion part is potentially problematic because stack space
limitations will cause this to fail for "relatively" short
list_of_length_1000, but that should be easily curable by rewriting
methods:::cbind and methods:::rbind to use iteration rather than
recursion. This might also help a little with efficiency by avoiding
call overhead.  It would be interesting to know how much of the
performance hit is dispatch overhead and how much closure call
overhead.  If it's dispatch overhead then it may be worth figuring out
some way of handling this with internal dispatch at the C level (at
the cost of maintaining the C level stuff).

My initial reaction to scanning the methods:::cbind code is that it is
doing too much, at least too much R-level work, but I haven't thought
it through carefully.
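
A rough way to compare the pairwise S4 path against the C-level default
(a sketch; sizes and counts are arbitrary):

mats <- rep(list(matrix(1, 10, 2)), 1000)
system.time(do.call(base::cbind, mats))     # fast C-level cbind
system.time(Reduce(methods::cbind2, mats))  # pairwise calls through S4 dispatch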

> For this reason, we plan to refrain from doing this activation
> on loading of Matrix, but propose to
>
> 1)  define and export
>   cBind <- methods:::cbind
>   rBind <- methods:::rbind
>
>also do this for R-2.5.0 so that other useRs / packages
>can start cBind() / rBind() in their code when they want to
>have something that can become properly object-oriented

In package methods?

> Possibly --- and this is the big  RFC (request for comments) ---
>
> 2) __ for 'Matrix' only __ also
>   define and export
>   cbind <- methods:::cbind
>   rbind <- methods:::rbind
>
> I currently see the possibilities of doing
> either '1)'
> or '1) and 2)'
> or less likely  '2) alone'
>
> and like to get your feedback on this.
>
> "1)" alone would have the considerable drawback for current
>  Matrix useRs that their code / scripts which has been using
>  cbind() and rbind() for "Matrix" (and "matrix" and "numeric")
>  objects no longer works, but needs to be changed to use
>   rBind() and cBind()  *instead*
>
> As soon as "2)" is done (in conjunction with "1)" or not),
> those who need a very fast but non-OO version of cbind() / rbind()
> need to call  base::cbind() or  base::rbind()  explicitly.
> This however would not be necessary for packages with a NAMESPACE
> since these import 'base' automagically and hence would use
> base::cbind() automagically {unless they also import(Matrix)}.
>
> We are quite interested in your feedback!

Either one seems cleaner to me than having loading of one package
result in mucking about in the internals of another.

If we are t

Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Gabor Grothendieck
On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> On 3/20/2007 1:40 PM, Gabor Grothendieck wrote:
> > On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
> >> On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
> >> > On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >> >> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
> >> >> > Full_Name: Charles Dupont
> >> >> > Version: 2.4.1
> >> >> > OS: linux 2.6.18
> >> >> > Submission from: (NULL) (160.129.129.136)
> >> >> >
> >> >> >
> >> >> > 'format.pval' has a major limitation in its implementation. For 
> >> >> > example
> >> >> > suppose a person had a vector like 'a' and the error being ±0.001.
> >> >> >
> >> >> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> >> > > format.pval(a, eps=0.01)
> >> >> >
> >> >> > If that person wants to have the 'format.pval' output with 2 digits 
> >> >> > always
> >> >> > showing (like passing nsmall=2 to 'format'). That output would look 
> >> >> > like
> >> >> > this.
> >> >> >
> >> >> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
> >> >> >
> >> >> > That output is currently impossible because format.pval can only
> >> >> > produce output like this.
> >> >> >
> >> >> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
> >> >> >
> >> >> >
> >> >> > ---
> >> >> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
> >> >> > format.pval(a, eps=0.01)
> >> >>
> >> >> But there's a very easy workaround:
> >> >>
> >> >> format.pval(c(0.12, a), eps=0.01)[-1]
> >> >>
> >> >> gives you what you want (because the 0.12 forces two decimal place
> >> >> display on all values, and then the [-1] removes it).
> >> >>
> >> >
> >> > Clever, but the problem would be that summary.lm, etc. call format.pval 
> >> > so the
> >> > user does not have a chance to do that.
> >>
> >> I don't see how this is relevant.  summary.lm doesn't let you pass a new
> >> eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't
> >> help with the display in summary.lm.
> >>
> >> I suppose we could track down every use of format.pval in every function
> >> in every package and add nsmall and eps as arguments to each of them,
> >> but that's just ridiculous.  People should accept the fact that R
> >> doesn't produce publication quality text, it just provides you with ways
> >> to produce that yourself.
> >>
> >> Duncan Murdoch
> >>
> >
> > You are right in terms of my example which was not applicable but I
> > think in general that format.pval is used from within other routines rather 
> > than
> > directly by the user so the user may not have a chance to massage it
> > directly.
>
> Right, but this means that it is more or less useless to change the
> argument list for format.pvals in the way Charles suggested, because all
> of the existing uses of it would ignore the new parameters.
>
> It would not be so difficult to change the behaviour of format.pvals so
> that for example "digits=2" implied the equivalent of "nsmall=2", but I
> don't think that's a universally desirable change.
>
> The difficulty here is that different people have different tastes for
> presentation-quality text.  Not everyone would agree that the version
> with trailing zeros is preferable to the one without.  R should be
> flexible enough to allow people to customize their displays, but not
> necessarily by having every print method flexible enough to satisfy
> every user:  sometimes users need to construct their own output formats.
>
> Duncan Murdoch
>

One possibility would be to add args to format.pval whose defaults
can be set through options.  Not beautiful but it would give the user
who really needed it a way to do it.
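
Roughly along these lines (purely an illustrative sketch, not the base R
implementation; the option name is made up):

format_pval_opt <- function(pv, digits = max(1L, getOption("digits") - 2L),
                            eps = .Machine$double.eps) {
    nsmall <- getOption("format.pval.nsmall")
    if (is.null(nsmall)) nsmall <- 0L
    out <- format(round(pv, digits), nsmall = nsmall)
    out[pv < eps] <- paste("<", format(eps, digits = digits), sep = "")
    out
}

options(format.pval.nsmall = 2)
format_pval_opt(c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001), digits = 2, eps = 0.01)
## "0.10" "0.30" "0.40" "0.50" "0.30" "<0.01"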

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] wishlist -- Fix for major format.pval limitation (PR#9574)

2007-03-20 Thread Martin Maechler
> "Gabor" == Gabor Grothendieck <[EMAIL PROTECTED]>
> on Tue, 20 Mar 2007 22:10:27 -0400 writes:

Gabor> On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
>> On 3/20/2007 1:40 PM, Gabor Grothendieck wrote:
>> > On 3/20/07, Duncan Murdoch <[EMAIL PROTECTED]> wrote:
>> >> On 3/20/2007 12:44 PM, Gabor Grothendieck wrote:
>> >> > On 3/20/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >> On 3/20/2007 11:19 AM, [EMAIL PROTECTED] wrote:
>> >> >> > Full_Name: Charles Dupont
>> >> >> > Version: 2.4.1
>> >> >> > OS: linux 2.6.18
>> >> >> > Submission from: (NULL) (160.129.129.136)
>> >> >> >
>> >> >> >
>> >> >> > 'format.pval' has a major limitation in its implementation. For example
>> >> >> > suppose a person had a vector like 'a' and the error being ±0.001.
>> >> >> >
>> >> >> > > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> >> >> > > format.pval(a, eps=0.01)
>> >> >> >
>> >> >> > If that person wants to have the 'format.pval' output with 2 digits always
>> >> >> > showing (like passing nsmall=2 to 'format'). That output would look like
>> >> >> > this.
>> >> >> >
>> >> >> > [1] "0.10"   "0.30"   "0.40"   "0.50"   "0.30"   "<0.01"
>> >> >> >
>> >> >> > That output is currently impossible because format.pval can only
>> >> >> > produce output like this.
>> >> >> >
>> >> >> > [1] "0.1""0.3""0.4""0.5""0.3""<0.01"
>> >> >> >
>> >> >> >
>> >> >> > ---
>> >> >> > a <- c(0.1, 0.3, 0.4, 0.5, 0.3, 0.0001)
>> >> >> > format.pval(a, eps=0.01)
>> >> >>
>> >> >> But there's a very easy workaround:
>> >> >>
>> >> >> format.pval(c(0.12, a), eps=0.01)[-1]
>> >> >>
>> >> >> gives you what you want (because the 0.12 forces two decimal place
>> >> >> display on all values, and then the [-1] removes it).
>> >> >>
>> >> >
>> >> > Clever, but the problem would be that summary.lm, etc. call format.pval so the
>> >> > user does not have a chance to do that.
>> >>
>> >> I don't see how this is relevant.  summary.lm doesn't let you pass a new
>> >> eps value either.  Adding an "nsmall=2" argument to format.pval wouldn't
>> >> help with the display in summary.lm.
>> >>
>> >> I suppose we could track down every use of format.pval in every function
>> >> in every package and add nsmall and eps as arguments to each of them,
>> >> but that's just ridiculous.  People should accept the fact that R
>> >> doesn't produce publication quality text, it just provides you with ways
>> >> to produce that yourself.
>> >>
>> >> Duncan Murdoch
>> >>
>> >
>> > You are right in terms of my example which was not applicable but I
>> > think in general that format.pval is used from within other routines rather than
>> > directly by the user so the user may not have a chance to massage it
>> > directly.
>> 
>> Right, but this means that it is more or less useless to change the
>> argument list for format.pvals in the way Charles suggested, because all
>> of the existing uses of it would ignore the new parameters.
>> 
>> It would not be so difficult to change the behaviour of format.pvals so
>> that for example "digits=2" implied the equivalent of "nsmall=2", but I
>> don't think that's a universally desirable change.
>> 
>> The difficulty here is that different people have different tastes for
>> presentation-quality text.  Not everyone would agree that the version
>> with trailing zeros is preferable to the one without.  R should be
>> flexible enough to allow people to customize their displays, but not
>> necessarily by having every print method flexible enough to satisfy
>> every user:  sometimes users need to construct their own output formats.
>> 
>> Duncan Murdoch

Gabor> One possibility would be to add args to format.pval whose defaults
Gabor> can be set through options.  Not beautiful but it would give the user
Gabor> who really needed it a way to do it.

Yes indeed, I had had the same thought (very early in this
thread).  This doesn't mean that I wouldn't agree with Duncan's
statement above anyway.

Whereas I have a strong opinion on *not* allowing options() to
influence too many things [it's entirely contrary to the
principle of functional programming],
options() have always been used to tweak print()ing; so they
could be used here as well.
As the original author of format.pval(), I'm happy to accept patches
--- if they are done well and also patch
src/library/base/man/format.pval.Rd and /man/options.Rd

Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel