[Rd] C code in packages with complex arguments/returned value

2005-07-01 Thread Robin Hankin
Hi

one of my packages has a severe bottleneck at a particular function and I
suspect that replacing the R code with C code would speed it up.
The function takes complex arguments and returns a complex value.

I would like to follow Best Practice here.  None of the C code in my
packages includes the ability to handle complex numbers (this is done by R).

Does anyone know of a package that includes C code which manipulates
and returns  complex arguments that I could take a look at?

Yes, there is src/main/complex.c, but I would like to see how complex
arithmetic is done in a package (or how it should be done).

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] C code in packages with complex arguments/returned value

2005-07-01 Thread Robin Hankin
Professor Ripley

On Jul 1, 2005, at 02:58 pm, Prof Brian Ripley wrote:

> Search for uses of Rcomplex in package source code.  The only  
> packages on CRAN which use it are ifs and rimage.
>>


Thank you for this: either is perfect for my purposes.  How did you search
the packages' source code?  (my best attempt was downloading a random
sample of packages and searching locally).

RSiteSearch("Rcomplex") gave me nothing useful.

best wishes


rksh


>> Does anyone know of a package that includes C code which manipulates
>> and returns  complex arguments that I could take a look at?
>>
>>
> Brian D. Ripley,  [EMAIL PROTECTED]
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UKFax:  +44 1865 272595
>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] prototypes for z_sin() and z_cos()

2005-07-04 Thread Robin Hankin
Hi

I have been looking at complex.c and want to access z_cos() and z_sin()
from C in one of my packages.

There doesn't seem to be a corresponding header file: there is no
complex.h file.

Where are the prototypes for z_sin() and z_cos()?

grepping didn't help me:

  find ~/downloads/R-2.1.1/ -name "*.h " | xargs egrep "z_cos"

returned empty.

How do I access z_sin() from my C code?


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Computer algebra in R - would that be an idea??

2005-07-13 Thread Robin Hankin
Hi guys

another option would be David Gillespie's "calc", which is written in
Emacs Lisp.

This is a stable system with (AFAICS) a large user base.

Unfortunately, it doesn't seem to be actively developed: the last stable
version (2.02f) appears to date from 1996.  I don't know if this would be a
contraindication.


Robin



On 13 Jul 2005, at 02:36, Gabor Grothendieck wrote:

> I don't know which free system is best.  I have mainly used Yacas
> but my needs to date have been pretty minimal so I suspect
> any of them would have worked.
>
> Eric's COM solution, once I have it figured out, will likely get me
> to the next step on Windows.  I did some googling around and
> found this:
>
> http://www.koders.com/python/ 
> fidDCC1B0FBFABC770277A28835D5FFADC9D25FF54E.aspx
>
> which is a python interface to Yacas which may give some ideas
> on how to interface it to R.
>
>
> On 7/12/05, Søren Højsgaard <[EMAIL PROTECTED]> wrote:
>
>> Personally, I like Maxima better than Yacas, but in both cases the  
>> solution (at least a minimal one) should be doable: A small  
>> program which pipes R commands into a terminal running Maxima/ 
>> Yacas and taking the output back into R. I am not much into the  
>> technical details, but isn't that what can be done with the COM  
>> automatation server on Windows?? (I don't know what the equivalent  
>> would be on unix?).
>> Best regards
>> Søren
>>
>> 
>>
>> Fra: Simon Blomberg [mailto:[EMAIL PROTECTED]
>> Sendt: on 13-07-2005 01:52
>> Til: Duncan Murdoch; Gabor Grothendieck
>> Cc: Søren Højsgaard; r-devel@stat.math.ethz.ch
>> Emne: Re: [Rd] Computer algebra in R - would that be an idea??
>>
>>
>>
>> I would use such a symbolic math package for R. I have dreamt of an
>> open-source solution with functionality similar to mathStatica.
>> http://www.mathstatica.com/ Is yacas the best system to consider?  
>> What
>> about  Maxima http://maxima.sourceforge.net/, which is also GPL,  
>> or maybe
>> Axiom http://savannah.nongnu.org/projects/axiom, which has a  
>> modified BSD
>> license?
>>
>> Cheers,
>>
>> Simon.
>>
>> At 01:25 AM 13/07/2005, Duncan Murdoch wrote:
>>
>>> On 7/12/2005 10:57 AM, Gabor Grothendieck wrote:
>>>
>>>> On 7/12/05, Søren Højsgaard <[EMAIL PROTECTED]> wrote:
>>>>
>>>>>> From time to time people request symbolic computations beyond  
>>>>>> what
>>>>>>
>>> D() and deriv() etc can provide. A brief look at the internet  
>>> shows that
>>> there are many more or less developed computer algebra packages  
>>> freely
>>> available. Therefore, I wondered if it would be an idea to try to
>>> 'integrate' one of these packages in R, which I guess can be done  
>>> in more
>>> or less elegant ways... I do not know any of the computer algebra  
>>> people
>>> around the World, but perhaps some other people from the R- 
>>> community do
>>> and would be able to/interested in establishing such a connection...
>>>
>>>>
>>>>
>>>> Coincidentally I asked the yacas developer about this just  
>>>> yesterday:
>>>>
>>>>
>>> http://sourceforge.net/mailarchive/forum.php? 
>>> thread_id=7711431&forum_id=2216
>>>
>>> It sounds like developing an R package to act as a wrapper would  
>>> be the
>>> best approach.  I didn't see documentation for their API (the  
>>> exports of
>>> their DLL), but I didn't spend long looking.
>>>
>>> Duncan Murdoch
>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>
>>
>> Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
>> Centre for Resource and Environmental Studies
>> The Australian National University
>> Canberra ACT 0200
>> Australia
>> T: +61 2 6125 7800 email: Simon.Blomberg_at_anu.edu.au
>> F: +61 2 6125 0757
>> CRICOS Provider # 00120C
>>
>>
>>
>>
>>
>>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Computer algebra in R - would that be an idea??

2005-07-15 Thread Robin Hankin
Just a note to point out that Martin's R implementation of APL's "take"
function was pretty neat.  Could we preserve it for posterity somewhere in
the R distribution?  Failing that, it would complement subsums() of the
magic package very nicely.





best wishes

Robin

On 15 Jul 2005, at 14:14, Martin Maechler wrote:

>>>>>> "bry" == bry  <[EMAIL PROTECTED]>
>>>>>> on Fri, 15 Jul 2005 14:16:46 +0200 writes:
>>>>>>
>
> bry> About a year ago there was a discussion about interfacing  
> R with J on the J
> bry> forum, the best method seemed to be that outlined in this  
> vector article
> bry> http://www.vector.org.uk/archive/v194/finn194.htm
>
> (which is interesting to see for me,
>  if I had known that my posted functions would make it to an APL
>  workshop...
>  BTW: Does one need special plugins / fonts to properly view
>  the APL symbols ? )
>
>
> bry> and use J instead of APL
>
> bry> http://www.jsoftware.com
>
> well, I've learned about J as the ASCII-variant of APL, and APL
> used to be my first `beloved' computer language (in high school!)
> -- but does J really provide computer algebra in the sense of
> Maxima , Maple or yacas... ??
>
> (and no, please refrain from flame wars about APL vs .. vs ..,
>  it's hard to refrain for me, too...)
>
> Martin Maechler, ETH Zurich
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Computer algebra in R - would that be an idea??

2005-07-18 Thread Robin Hankin
Hi

while everyone is discussing abstract algebra in R,
perhaps it would be good to let the list know about
pari.   From the FAQ

PARI/GP is a widely used computer algebra
system designed for fast computations in
number theory (factorizations, algebraic
number theory, elliptic curves...), but also
contains a large number of other useful
functions to compute with mathematical
entities such as matrices, polynomials,
power series, algebraic numbers, etc., and
a lot of transcendental functions.


My elliptic package has some basic functionality to evaluate
pari/gp statements via system() but I daresay there are better
ways to write a wrapper.
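
For anyone curious, the system() route is roughly this (a sketch only: it
assumes the gp binary is on the PATH and uses its -q quiet flag):

gp_eval <- function(expr) {
  ## send one PARI/GP expression to gp and read its printed output back
  system(paste("echo", shQuote(expr), "| gp -q"), intern = TRUE)
}
gp_eval("factor(2^64 + 1)")   # character vector of gp's printed result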

Would anyone on the List be interested in PARI wrapping?


best wishes

Robin




On 16 Jul 2005, at 04:12, simon blomberg wrote:

>>>>>>> "bry" == bry  <[EMAIL PROTECTED]>
>>>>>>>  on Fri, 15 Jul 2005 14:16:46 +0200 writes:
>>>>>>>
>>
>> bry> About a year ago there was a discussion about interfacing R
>> with J on the J
>> bry> forum, the best method seemed to be that outlined in this
>> vector article
>> bry> http://www.vector.org.uk/archive/v194/finn194.htm
>>
>> (which is interesting to see for me,
>>  if I had known that my posted functions would make it to an APL
>>  workshop...
>>  BTW: Does one need special plugins / fonts to properly view
>>  the APL symbols ? )
>>
>>
>> bry> and use J instead of APL
>>
>> bry> http://www.jsoftware.com
>>
>> well, I've learned about J as the ASCII-variant of APL, and APL
>> used to be my first `beloved' computer language (in high school!)
>> -- but does J really provide computer algebra in the sense of
>> Maxima , Maple or yacas... ??
>>
>
> I wonder if at this point it would be useful to think about how a
> symbolic algebra system might be used by R users, and whether that
> would affect the choice of system. For example, Maxima and yacas seem
> to be mostly concerned with "getting the job done", which might be
> all that the data analyst or occasional user needs. However,
> mathematical statisticians might be more concerned with developing
> new mathematics. For example, commutative algebra has been found to
> be very useful in the theory of experimental design (e.g. Pistone,
> Riccomagno, Wynn (2000) Algebraic Statistics: Computational
> Commutative Algebra in Statistics. Chapman & Hall). Now, Maxima can
> already do the necessary calculations (ie Groebner bases of
> polynomials), but as far as I know, yacas cannot. But who knows where
> the next breakthrough will come from? In that case Axiom might be
> more useful and appropriate, as it is largely used by research
> mathematicians. We would then need a mechanism for the development of
> new data structures in R that could potentially match Axiom's rich
> and extensible type system. I guess some mechanism that relies on S4
> classes would be necessary. Of course, there is nothing to stop us
> developing packages for more than one system ("We are R. We will
> assimilate you!"). I have no idea how to do any of this: I'm just
> floating ideas here. :-)
>
> Cheers,
>
> Simon.
>
>
>>
>> (and no, please refrain from flame wars about APL vs .. vs ..,
>>  it's hard to refrain for me, too...)
>>
>> Martin Maechler, ETH Zurich
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>
> -- 
> Simon Blomberg, B.Sc.(Hons.), Ph.D, M.App.Stat.
> Centre for Resource and Environmental Studies
> The Australian National University
> Canberra ACT 0200
> Australia
>
> T: +61 2 6125 7800
> F: +61 2 6125 0757
>
> CRICOS Provider # 00120C
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] octonions

2005-07-29 Thread Robin Hankin
Hi

I  thought it would be fun to develop R functionality for
the octonions (there is  already some work on quaternions).

The octonions are an 8-dimensional algebra over the reals, so an octonion
may nicely be represented as a real vector of length 8.  Applications are
many and varied, mostly in quantum mechanics.

I would like to develop some R functionality in this area.
My first problem is how to get (eg) a matrix whose entries are octonions?
It would  be nice for a %*% b to behave sensibly when a and b are
matrices with octonion elements.

Octonions  are not associative [that is,
x*(y*z) != (x*y)*z ] or commutative [x*y != y*x].
One usually sees an 8-by-8 table that gives the products.

At this stage, I'm polling for ideas on overall structure.  Would a
package be the best way forward?  Or writing an equivalent of complex.c?
Or is there another approach that would be better?  The basic
multiplication table is easily implemented in C, and I now have an
(untested) R function OctonionProduct(. , .) that multiplies two octonions;
but I am struggling to see how to make this play nicely with R: it's not
obvious how to make octonion matrices multiply as they should, in the same
way that (eg) complex matrices do.

Perhaps we could define "*" and "/" appropriately for vectors of class
"octonion" (if such a thing makes sense): or is there a better way?  Also,
Mod(), Re(), and perhaps Conj() would have to be generalized to work with
octonions.
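
For concreteness, a rough sketch with entirely provisional names (nothing
here is an existing API): store an octonion vector as an 8-row matrix of
class "octonion", and rely on Mod(), Re(), Im() and Conj() being members of
the internal "Complex" group generic, so plain S3 methods get dispatched:

as.octonion <- function(x) structure(as.matrix(x), class = "octonion")

Conj.octonion <- function(z) {
  x <- unclass(z)
  x[-1, ] <- -x[-1, ]        # negate the seven imaginary components
  as.octonion(x)
}

Mod.octonion <- function(z) sqrt(colSums(unclass(z)^2))

o <- as.octonion(matrix(rnorm(16), nrow = 8))   # a vector of two octonions
Mod(Conj(o)) - Mod(o)                           # essentially zero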


comments please!


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] generic function argument list problem

2005-08-31 Thread Robin Hankin
Hi

it says in R-exts that


    A method must have all the arguments of the generic, including ... if the generic does.
    A method must have arguments in exactly the same order as the generic.
    A method should use the same defaults as the generic.


So, how come the arguments for rep() are (x, times, ...) and the arguments
for rep.default() are (x, times, length.out, each, ...)?  Shouldn't these
be the same?


I am writing a rep() method for objects with class "octonion", and my
function rep.octonion() has argument list (x, times, length.out, each, ...)
just like rep.default(), but R CMD check complains about it, pointing out
that rep() and rep.octonion() have different arguments.

What do I have to do to my rep.octonion() function to make my package pass
R CMD check without a warning?
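
For what it's worth, one shape that sidesteps the mismatch (a sketch only,
assuming the 8-row-matrix representation and an as.octonion() constructor;
the method's formals still have to start with whatever the generic's are):

rep.octonion <- function(x, ...) {
  m  <- as.matrix(x)                # 8-row matrix, one octonion per column
  jj <- rep(seq_len(ncol(m)), ...)  # let rep() itself interpret times/length.out/each
  as.octonion(m[, jj, drop = FALSE])
}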


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] generic function S3 consistency warning advice

2005-09-01 Thread Robin Hankin
Hi

section 6.1 of R-exts suggests that a package can take over a function in
the base package and make it generic.

I want to do this with Re() and have the following lines in my R code:



"Re" <- function(x){UseMethod("Re" )}
"Re.default" <- get("Re" ,pos=NULL,mode="function")
"Re.octonion" <- function(x){give.comp(x,1)}

This, however, generates the following warning from R CMD check:

* checking S3 generic/method consistency ... WARNING
Re:
   function(x)
Re.default:
   function()

See section 'Generic functions and methods' of the 'Writing R  
Extensions'
manual.



I can suppress the warning by commenting out the first line.  Is this a
sensible thing to do?
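
A possible alternative (sketch): Re() is already internally generic, being
a member of the "Complex" group generic (see ?InternalMethods), so the
class method alone ought to be dispatched without redefining the generic at
all; give.comp() is the helper used above:

Re.octonion <- function(x){give.comp(x,1)}   # no redefinition of Re() itself needed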




--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] two almost identical packages: best practice

2005-09-09 Thread Robin Hankin
Hi

I have written a whole bunch of methods for objects of class "octonion".

[
an octonion is a single column of an eight-row matrix.  Octonions have
their own multiplication rules and are a generalization of quaternions,
which are columns of a four-row matrix.
]

So far I've done about a dozen generic functions such as seq.octonion(),
rep.octonion(), [<-.octonion(),  and so on and so on.

Very nearly all of these functions are applicable to objects of class
"quaternion".  So, for example, I have a generic function Im.octonion():

R> Im.octonion
function (x)
{
 Re(x) <- 0
 return(x)
}

The definition of Im.quaternion() is exactly the same.
Sometimes the return value is an octonion:

  Conj.octonion
function (x)
{
 x <- as.matrix(x)
 x[-1, ] <- -x[-1, ]
 return(as.octonion(x))
}

So the last line of Conj.quaternion() would be "return(as.quaternion(x))"
but it would be otherwise identical.
A similar story holds for each of maybe twenty generic functions.
Nearly all the Rd files are similarly identical: the word "quaternion"
replaces the word "octonion".  I suppose "An" changes to "A" as well.
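
A hedged thought on the duplication itself: one way to avoid editing two
copies by hand is to generate both methods from a single template, with
as.octonion()/as.quaternion() being the packages' own coercion functions,
e.g. for the Conj() case above:

make_conj_method <- function(as_thing) {
  function(x) {
    x <- as.matrix(x)
    x[-1, ] <- -x[-1, ]
    as_thing(x)
  }
}
Conj.octonion   <- make_conj_method(as.octonion)
Conj.quaternion <- make_conj_method(as.quaternion)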

There is a small number of functions and datasets that are specific to
octonions.

What is Best Practice in this situation?  I don't want to edit two
separate packages in tandem.  Is there a mechanism for doing what I want in
the context of a bundle?




--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] x[1,], x[1,,], x[1,,,], ...

2005-11-29 Thread Robin Hankin
Hi everyone


apltake(x,1)

[where apltake() is part of library(magic)]

does this.
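
For comparison, a base-R sketch of the same idea (it assumes x has a dim
attribute, and needs no package):

take_first <- function(x, i) {
  args <- c(list(x), list(i), rep(list(TRUE), length(dim(x)) - 1L),
            list(drop = FALSE))
  do.call("[", args)
}
x <- array(1:24, dim = c(4, 3, 2))
take_first(x, 2:3)    # same as x[2:3, , , drop = FALSE]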

best wishes

Robin




On 23 Nov 2005, at 10:50, Henrik Bengtsson wrote:

> Hi,
>
> is there a function in R already doing what I try to do below:
>
> # Let 'x' be an array with *any* number of dimensions (>=1).
> x <- array(1:24, dim=c(2,2,3,2))
> ...
> x <- array(1:24, dim=c(4,3,2))
>
> i <- 2:3
>
> ndim <- length(dim(x))
> if (ndim == 1)
>y <- x[i]
> else if (ndim == 2)
>y <- x[i,]
> else if (ndim == 3)
>y <- x[i,,]
> else ...
>
> and so on.  My current solution is
>
> ndim <- length(dim(x))
> args <- rep(",", ndim)
> args[1] <- "i"
> args <- paste(args, collapse="")
> code <- paste("x[", args, "]", sep="")
> expr <- parse(text=code)
> y <- eval(expr)
>
> ndim <- length(dim(x))
> args <- rep(",", ndim)
> args[1] <- "i"
> args <- paste(args, collapse="")
> code <- paste("x[", args, "]", sep="")
> expr <- parse(text=code)
> y <- eval(expr)
>
> Is there another way I can do this in R that I have overlooked?
>
> /Henrik
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] x[1,], x[1,,], x[1,,,], ...

2005-11-29 Thread Robin Hankin
Hi everyone

apltake() is part of magic_1.3-20, which I only uploaded to CRAN
this morning.  Perhaps I should have mentioned this!

[
this version of the magic package also includes a whole slew
of functions that operate on arbitrary dimensioned arrays including
adiag(), apad(), arow(), arot(), arev()


enjoy!
]



On 29 Nov 2005, at 15:17, Gabor Grothendieck wrote:

> I couldn't find it:
>
>> library(magic)
>> apltake
> Error: object "apltake" not found
>
> On 11/29/05, Robin Hankin <[EMAIL PROTECTED]> wrote:
>> Hi everyone
>>
>>
>> apltake(x,1)
>>
>> [where apltake() is part of library(magic)]
>>
>> does this.
>>
>> best wishes
>>
>> Robin
>>
>>
>>
>>
>> On 23 Nov 2005, at 10:50, Henrik Bengtsson wrote:
>>
>>> Hi,
>>>
>>> is there a function in R already doing what I try to do below:
>>>
>>> # Let 'x' be an array with *any* number of dimensions (>=1).
>>> x <- array(1:24, dim=c(2,2,3,2))
>>> ...
>>> x <- array(1:24, dim=c(4,3,2))
>>>
>>> i <- 2:3
>>>
>>> ndim <- length(dim(x))
>>> if (ndim == 1)
>>>y <- x[i]
>>> else if (ndim == 2)
>>>y <- x[i,]
>>> else if (ndim == 3)
>>>y <- x[i,,]
>>> else ...
>>>
>>> and so on.  My current solution is
>>>
>>> ndim <- length(dim(x))
>>> args <- rep(",", ndim)
>>> args[1] <- "i"
>>> args <- paste(args, collapse="")
>>> code <- paste("x[", args, "]", sep="")
>>> expr <- parse(text=code)
>>> y <- eval(expr)
>>>
>>> ndim <- length(dim(x))
>>> args <- rep(",", ndim)
>>> args[1] <- "i"
>>> args <- paste(args, collapse="")
>>> code <- paste("x[", args, "]", sep="")
>>> expr <- parse(text=code)
>>> y <- eval(expr)
>>>
>>> Is there another way I can do this in R that I have overlooked?
>>>
>>> /Henrik
>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>> --
>> Robin Hankin
>> Uncertainty Analyst
>> National Oceanography Centre, Southampton
>> European Way, Southampton SO14 3ZH, UK
>>  tel  023-8059-7743
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] eigen()

2006-01-10 Thread Robin Hankin
Hi

I am having difficulty with eigen() on   R-devel_2006-01-05.tar.gz

Specifically,  in R-2.2.0 I get expected behaviour:


 > eigen(matrix(1:100,10,10),FALSE,TRUE)$values
[1]  5.208398e+02+0.00e+00i -1.583980e+01+0.00e+00i
[3] -4.805412e-15+0.00e+00i  1.347691e-15+4.487511e-15i
[5]  1.347691e-15-4.487511e-15i -4.269863e-16+0.00e+00i
[7]  1.364748e-16+0.00e+00i -1.269735e-16+0.00e+00i
[9] -1.878758e-18+5.031259e-17i -1.878758e-18-5.031259e-17i
 >


The same command gives different results in the development version:


 > eigen(matrix(1:100,10,10),FALSE,TRUE)$values
[1]  3.903094e-118 -3.903094e-118 -2.610848e-312 -2.995687e-313 -2.748516e-313
[6] -1.073138e-314 -1.061000e-314 -1.060998e-314  4.940656e-324  0.00e+00
 > R.version()
Error: attempt to apply non-function
 > R.version
_
platform   powerpc-apple-darwin8.3.0
arch   powerpc
os darwin8.3.0
system powerpc, darwin8.3.0
status Under development (unstable)
major  2
minor  3.0
year   2006
month  01
day04
svn rev36984
language   R
version.string Version 2.3.0 Under development (unstable) (2006-01-04 r36984)
 >


Note the strange magnitude of the output.

[
I need this to work because one of my packages fails under R-devel
]


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] eigen()

2006-01-10 Thread Robin Hankin


On 10 Jan 2006, at 14:14, Peter Dalgaard wrote:
>


>>> Strange and semi-random results on SuSE 9.3 as well:
>>>
>>>
>>>> eigen(matrix(1:100,10,10))$values
>>>  [1]  5.4e-311+ 0.0e+00i -2.5e-311+3.7e-311i -2.5e-311-3.7e-311i
>>>  [4]  2.5e-312+ 0.0e+00i -2.4e-312+ 0.0e+00i  3.2e-317+ 0.0e+00i
>>>  [7]   0.0e+00+ 0.0e+00i   0.0e+00+ 0.0e+00i   0.0e+00+ 0.0e+00i
>>> [10]   0.0e+00+ 0.0e+00i
>>>
>>
>> Mine is closer to Robin's, but not the same (EL4 x86).
>>
>>> eigen(matrix(1:100,10,10))$values
>>   [1]  5.208398e+02+0.00e+00i -1.583980e+01+0.00e+00i
>>   [3]  6.292457e-16+2.785369e-15i  6.292457e-16-2.785369e-15i
>>   [5] -1.055022e-15+0.00e+00i  3.629676e-16+0.00e+00i
>>   [7]  1.356222e-16+2.682405e-16i  1.356222e-16-2.682405e-16i
>>   [9]  1.029077e-16+0.00e+00i -1.269181e-17+0.00e+00i
>>>
>>
>> But surely, my matrix algebra is a bit rusty, I think this matrix is
>> solvable analytically? Most of the eigenvalues shown are almost
>> exactly zero, except the first two, which are about 521
>> and -16 to the closest integer.
>>
>> I think the difference between mine and Robin's is rounding error
>> (the matrix is simple enough that I expect the solution to be simple
>> integers or easily expressible analytical expressions, so 8 e-values
>> being zero is fine). Peter's numbers seem to be all 10 e-values zero,
>> or one being a huge number! So Peter's is odd... and Peter's machine
>> also seems to be of a different architecture (64-bit machine)?
>>
>> HTL
>
> Notice that Robin got something completely different in _R-devel_
> which is where I did my check too.  In R 2.2.1 I get the expected two
> non-zero eigenvalues.
>
> I'm not sure whether (and how) you can work out the eigenvalues
> analytically, but since all columns are linear progressions, it is
> at least obvious that the matrix must have column rank two.
>




For everyone's entertainment, here's an example where the analytic
solution is known.

fact 1:  the first eigenvalue of a magic square is equal to its constant
fact 2: the sum of the other eigenvalues of a magic square is zero
fact 3: the constant of a magic square of order 10 is 505.
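
A quick way to see facts 1 and 2 without any package at all is the
classical 3x3 square:

m <- matrix(c(8, 1, 6,
              3, 5, 7,
              4, 9, 2), 3, 3, byrow = TRUE)   # magic constant 15
eigen(m, only.values = TRUE)$values
## largest eigenvalue is 15; the other two are +/- 2*sqrt(6) and sum to zero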

R-2.2.0:

 > library(magic)
 > round(Re(eigen(magic(10),F,T)$values))
[1]  505  170 -170 -105  105   -3    3    0    0    0
 >

answers as expected.


R-devel:



 > a <- structure(c(68, 66, 92, 90, 16, 14, 37, 38, 41, 43, 65, 67, 89,
91, 13, 15, 40, 39, 44, 42, 96, 94, 20, 18, 24, 22, 45, 46, 69,
71, 93, 95, 17, 19, 21, 23, 48, 47, 72, 70, 4, 2, 28, 26, 49,
50, 76, 74, 97, 99, 1, 3, 25, 27, 52, 51, 73, 75, 100, 98, 32,
30, 56, 54, 80, 78, 81, 82, 5, 7, 29, 31, 53, 55, 77, 79, 84,
83, 8, 6, 60, 58, 64, 62, 88, 86, 9, 10, 33, 35, 57, 59, 61,
63, 85, 87, 12, 11, 36, 34), .Dim = c(10, 10))

[no magic package!  it fails R CMD check !]

 > round(Re(eigen(a,F,T)$values))
[1] 7.544456e+165  0.00e+00  0.00e+00  0.00e+00  0.00e+00
[6]  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
 >


not as expected.



--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] pcre problems

2019-02-24 Thread robin hankin
Hi there, ubuntu 18.04.2, trying to compile R-devel  3.6.0,  svn 76155.

I am having difficulty compiling R. I think I have pcre installed correctly:

OK~/Downloads/R-devel pcretest -C
PCRE version 8.41 2017-07-05
Compiled with
  8-bit support
  UTF-8 support
  No Unicode properties support
  No just-in-time compiler support
  Newline sequence is LF
  \R matches all Unicode newlines
  Internal link size = 2
  POSIX malloc threshold = 10
  Parentheses nest limit = 250
  Default match limit = 1000
  Default recursion depth limit = 1000
  Match recursion uses stack
OK~/Downloads/R-devel


But ./configure gives me this:

[snip]
checking for pcre.h... yes
checking pcre/pcre.h usability... no
checking pcre/pcre.h presence... no
checking for pcre/pcre.h... no
checking if PCRE version >= 8.20, < 10.0 and has UTF-8 support... no
checking whether PCRE support suffices... configure: error: pcre >=
8.20 library and headers are required
OK~/Downloads/R-devel

can anyone advise?






hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] pcre problems

2019-02-28 Thread robin hankin
thanks for this guys.

I only compiled pcre myself as a last resort,  because of the
./configure failure.  But AFAICS  apt-get reports correct
installation:

OK~/Downloads/R-devel sudo apt-get install r-base-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
r-base-dev is already the newest version (3.5.2-1cosmic).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
OK~/Downloads/R-devel

config.log gives me:

configure:42208: $? = 0
configure:42208: result: yes
configure:42208: checking for pcre.h
configure:42208: result: yes
configure:42208: checking pcre/pcre.h usability
configure:42208: gcc -c  -g -O2 -I/usr/local/include  conftest.c >&5
conftest.c:289:10: fatal error: pcre/pcre.h: No such file or directory
 #include <pcre/pcre.h>
  ^
compilation terminated.
configure:42208: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "R"
| #define PACKAGE_TARNAME "R"
| #define PACKAGE_VERSION "3

and

HAVE_UNISTD_H
| # include <unistd.h>
| #endif
| #include <pcre/pcre.h>
configure:42208: result: no
configure:42208: checking pcre/pcre.h presence
configure:42208: gcc -E -I/usr/local/include  conftest.c
conftest.c:256:10: fatal error: pcre/pcre.h: No such file or directory
 #include <pcre/pcre.h>
  ^
compilation terminated.
configure:42208: $? = 1
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME "R"
| #define PACKAGE_TARNAME "R"
| #define PACKAGE_VERSION "3.6.0"
| #define PACKAGE_STRING "R 3.6.0"
| #define PACKAGE_BUGREPOR



hankin.ro...@gmail.com




On Mon, Feb 25, 2019 at 9:39 PM Tomas Kalibera  wrote:
>
> On 2/25/19 6:25 AM, robin hankin wrote:
> > Hi there, ubuntu 18.04.2, trying to compile R-devel  3.6.0,  svn 76155.
> >
> > I am having difficulty compiling R. I think I have pcre installed correctly:
>
> You can use
>
> apt-get build-dep r-base
>
> to install binary Ubuntu packages needed to build R from source,
> including PCRE, so there should be no need to compile PCRE from source.
> If you need for some special reason to compile PCRE from source, please
> see R Admin Manual, section A.1 on how to configure PCRE. The manual
> also says how to set compilation flags for R to look for headers in
> other directories. Sometimes it helps to search the config.log when
> configure fails. If still in trouble, please report how you built PCRE
> and how you told R where to find it, and the relevant part of
> config.log, to maximize chances people could offer useful advice.
>
> Best,
> Tomas
>
> >
> > OK~/Downloads/R-devel pcretest -C
> > PCRE version 8.41 2017-07-05
> > Compiled with
> >8-bit support
> >UTF-8 support
> >No Unicode properties support
> >No just-in-time compiler support
> >Newline sequence is LF
> >\R matches all Unicode newlines
> >Internal link size = 2
> >POSIX malloc threshold = 10
> >Parentheses nest limit = 250
> >Default match limit = 1000
> >Default recursion depth limit = 1000
> >Match recursion uses stack
> > OK~/Downloads/R-devel
> >
> >
> > But ./configure gives me this:
> >
> > [snip]
> > checking for pcre.h... yes
> > checking pcre/pcre.h usability... no
> > checking pcre/pcre.h presence... no
> > checking for pcre/pcre.h... no
> > checking if PCRE version >= 8.20, < 10.0 and has UTF-8 support... no
> > checking whether PCRE support suffices... configure: error: pcre >=
> > 8.20 library and headers are required
> > OK~/Downloads/R-devel
> >
> > can anyone advise?
> >
> >
> >
> >
> >
> >
> > hankin.ro...@gmail.com
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
>
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] pcre problems

2019-03-01 Thread robin hankin
OK thanks Tomas, but I get


OK~ sudo apt-get build-dep r-base
Reading package lists... Done
E: Unable to find a source package for r-base
OK~


hankin.ro...@gmail.com


On Fri, Mar 1, 2019 at 8:47 PM Tomas Kalibera  wrote:
>
> On 3/1/19 7:10 AM, robin hankin wrote:
> > thanks for this guys.
> >
> > I only compiled pcre myself as a last resort,  because of the
> > ./configure failure.  But AFAICS  apt-get reports correct
> > installation:
> >
> > OK~/Downloads/R-devel sudo apt-get install r-base-dev
> > Reading package lists... Done
> > Building dependency tree
> > Reading state information... Done
> > r-base-dev is already the newest version (3.5.2-1cosmic).
> > 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
> > OK~/Downloads/R-devel
>
> I would just run this
>
> apt-get build-dep r-base
>
> that will install all packages needed to _build_ r-base, so including PCRE.
>
> Best
> Tomas
>
> >
> > config.log gives me:
> >
> > configure:42208: $? = 0
> > configure:42208: result: yes
> > configure:42208: checking for pcre.h
> > configure:42208: result: yes
> > configure:42208: checking pcre/pcre.h usability
> > configure:42208: gcc -c  -g -O2 -I/usr/local/include  conftest.c >&5
> > conftest.c:289:10: fatal error: pcre/pcre.h: No such file or directory
> >   #include <pcre/pcre.h>
> >^
> > compilation terminated.
> > configure:42208: $? = 1
> > configure: failed program was:
> > | /* confdefs.h */
> > | #define PACKAGE_NAME "R"
> > | #define PACKAGE_TARNAME "R"
> > | #define PACKAGE_VERSION "3
> >
> > and
> >
> > HAVE_UNISTD_H
> > | # include <unistd.h>
> > | #endif
> > | #include <pcre/pcre.h>
> > configure:42208: result: no
> > configure:42208: checking pcre/pcre.h presence
> > configure:42208: gcc -E -I/usr/local/include  conftest.c
> > conftest.c:256:10: fatal error: pcre/pcre.h: No such file or directory
> >   #include <pcre/pcre.h>
> >^
> > compilation terminated.
> > configure:42208: $? = 1
> > configure: failed program was:
> > | /* confdefs.h */
> > | #define PACKAGE_NAME "R"
> > | #define PACKAGE_TARNAME "R"
> > | #define PACKAGE_VERSION "3.6.0"
> > | #define PACKAGE_STRING "R 3.6.0"
> > | #define PACKAGE_BUGREPOR
> >
> >
> >
> > hankin.ro...@gmail.com
> >
> >
> >
> >
> > On Mon, Feb 25, 2019 at 9:39 PM Tomas Kalibera  
> > wrote:
> >> On 2/25/19 6:25 AM, robin hankin wrote:
> >>> Hi there, ubuntu 18.04.2, trying to compile R-devel  3.6.0,  svn 76155.
> >>>
> >>> I am having difficulty compiling R. I think I have pcre installed 
> >>> correctly:
> >> You can use
> >>
> >> apt-get build-dep r-base
> >>
> >> to install binary Ubuntu packages needed to build R from source,
> >> including PCRE, so there should be no need to compile PCRE from source.
> >> If you need for some special reason to compile PCRE from source, please
> >> see R Admin Manual, section A.1 on how to configure PCRE. The manual
> >> also says how to set compilation flags for R to look for headers in
> >> other directories. Sometimes it helps to search the config.log when
> >> configure fails. If still in trouble, please report how you built PCRE
> >> and how you told R where to find it, and the relevant part of
> >> config.log, to maximize chances people could offer useful advice.
> >>
> >> Best,
> >> Tomas
> >>
> >>> OK~/Downloads/R-devel pcretest -C
> >>> PCRE version 8.41 2017-07-05
> >>> Compiled with
> >>> 8-bit support
> >>> UTF-8 support
> >>> No Unicode properties support
> >>> No just-in-time compiler support
> >>> Newline sequence is LF
> >>> \R matches all Unicode newlines
> >>> Internal link size = 2
> >>> POSIX malloc threshold = 10
> >>> Parentheses nest limit = 250
> >>> Default match limit = 1000
> >>> Default recursion depth limit = 1000
> >>> Match recursion uses stack
> >>> OK~/Downloads/R-devel
> >>>
> >>>
> >>> But ./configure gives me this:
> >>>
> >>> [snip]
> >>> checking for pcre.h... yes
> >>> checking pcre/pcre.h usability... no
> >>> checking pcre/pcre.h presence... no
> >>> checking for pcre/pcre.h... no
> >>> checking if PCRE version >= 8.20, < 10.0 and has UTF-8 support... no
> >>> checking whether PCRE support suffices... configure: error: pcre >=
> >>> 8.20 library and headers are required
> >>> OK~/Downloads/R-devel
> >>>
> >>> can anyone advise?
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> hankin.ro...@gmail.com
> >>>
> >>> __
> >>> R-devel@r-project.org mailing list
> >>> https://stat.ethz.ch/mailman/listinfo/r-devel
> >>
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] pcre problems

2019-03-01 Thread robin hankin
Still something wrong.  I've uncommented the deb-src lines in
sources.list as you suggested (and I thought it couldn't hurt to try
--allow-unauthenticated as well) and:

root@limpet:/etc/apt# apt-get update --allow-unauthenticated
Hit:1 http://nz.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://repo.steampowered.com/steam precise InRelease
Hit:3 http://nz.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:4 http://nz.archive.ubuntu.com/ubuntu bionic-backports InRelease
Ign:5 http://cran.rstudio.com/bin/linux/ubuntu bionic/ InRelease
Err:6 http://cran.rstudio.com/bin/linux/ubuntu bionic/ Release
  404  Not Found [IP: 13.35.146.80 80]
Hit:7 https://cloud.r-project.org/bin/linux/ubuntu cosmic-cran35/
InRelease
Hit:8 http://ppa.launchpad.net/edd/misc/ubuntu bionic InRelease
Hit:9 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:10 http://ppa.launchpad.net/marutter/c2d4u/ubuntu bionic InRelease
Ign:11 http://ppa.launchpad.net/marutter/rdev/ubuntu bionic InRelease
Hit:12 http://ppa.launchpad.net/marutter/rrutter3.5/ubuntu bionic InRelease
Hit:13 http://ppa.launchpad.net/teejee2008/ppa/ubuntu bionic InRelease
Err:14 http://ppa.launchpad.net/marutter/rdev/ubuntu bionic Release
  404  Not Found [IP: 91.189.95.83 80]
Ign:15 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:16 http://dl.google.com/linux/chrome/deb stable Release
Reading package lists... Done
E: The repository 'http://cran.rstudio.com/bin/linux/ubuntu bionic/
Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is
therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user
configuration details.
E: The repository 'http://ppa.launchpad.net/marutter/rdev/ubuntu
bionic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is
therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user
configuration details.
root@limpet:/etc/apt#



hankin.ro...@gmail.com



hankin.ro...@gmail.com




On Fri, Mar 1, 2019 at 9:19 PM Tomas Kalibera  wrote:
>
> On 3/1/19 9:03 AM, robin hankin wrote:
> > OK thanks Tomas, but I get
> >
> >
> > OK~ sudo apt-get build-dep r-base
> > Reading package lists... Done
> > E: Unable to find a source package for r-base
> > OK~
>
> It seems you need to enable source code  repositories on your system
> (and then run apt-get update).
> You can enable them in /etc/apt/sources.list, uncomment all lines
> starting with deb-src.
>
> Best
> Tomas
>
> >
> >
> > hankin.ro...@gmail.com
> >
> >
> > On Fri, Mar 1, 2019 at 8:47 PM Tomas Kalibera  
> > wrote:
> >> On 3/1/19 7:10 AM, robin hankin wrote:
> >>> thanks for this guys.
> >>>
> >>> I only compiled pcre myself as a last resort,  because of the
> >>> ./configure failure.  But AFAICS  apt-get reports correct
> >>> installation:
> >>>
> >>> OK~/Downloads/R-devel sudo apt-get install r-base-dev
> >>> Reading package lists... Done
> >>> Building dependency tree
> >>> Reading state information... Done
> >>> r-base-dev is already the newest version (3.5.2-1cosmic).
> >>> 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
> >>> OK~/Downloads/R-devel
> >> I would just run this
> >>
> >> apt-get build-dep r-base
> >>
> >> that will install all packages needed to _build_ r-base, so including PCRE.
> >>
> >> Best
> >> Tomas
> >>
> >>> config.log gives me:
> >>>
> >>> configure:42208: $? = 0
> >>> configure:42208: result: yes
> >>> configure:42208: checking for pcre.h
> >>> configure:42208: result: yes
> >>> configure:42208: checking pcre/pcre.h usability
> >>> configure:42208: gcc -c  -g -O2 -I/usr/local/include  conftest.c >&5
> >>> conftest.c:289:10: fatal error: pcre/pcre.h: No such file or directory
> >>>    #include <pcre/pcre.h>
> >>> ^
> >>> compilation terminated.
> >>> configure:42208: $? = 1
> >>> configure: failed program was:
> >>> | /* confdefs.h */
> >>> | #define PACKAGE_NAME "R"
> >>> | #define PACKAGE_TARNAME "R"
> >>> | #define PACKAGE_VERSION "3
> >>>
> >>> and
> >>>
> >>> HAVE_UNISTD_H
> >>> | # include <unistd.h>
> >>> | #endif
> >>> | #include <pcre/pcre.h>
> >>> configure:42208: result: no
> >>> configure:42208: checking pcre/pcre.h presence
&

[Rd] openblas

2019-05-07 Thread robin hankin
Hello, macosx 10.13.6, Rdevel  r76458

I'm trying to compile against openblas to reproduce an error on the
CRAN check page (my package is clean under winbuilder and all but one
of the checks).   I've downloaded and installed openblas 0.3.7 but I
am not 100% sure that it is being used by R.

Using

./configure --with-blas="-lopenblas"

Then running R to discover the PID I get:


Rd % lsof -p 17960|egrep -i blas

R   17960 rhankin  txtREG1,8 189224 33471762
/Users/rhankin/Rd/lib/R/lib/libRblas.dylib


But it is not clear to me how to interpret this.  Am I using openblas
as intended?  I suspect not, for I cannot reproduce the error.   Can
anyone advise?
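
For what it's worth, one check from inside R itself (assuming a recent
enough R that sessionInfo() reports the linear-algebra libraries):

si <- sessionInfo()
si$BLAS      # full path of the BLAS shared library R is actually using
si$LAPACK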


hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] nrow(rbind(character(), character())) returns 2 (as documented but very unintuitive, IMHO)

2019-05-16 Thread robin hankin
Gabriel, you ask an insightful and instructive question. One of R's
great strengths is that we have a forum where this kind of edge-case
can be fruitfully discussed.
My interest in this would be the names of the arguments; in the magic
package I make heavy use of the dimnames of zero-extent arrays.

> rbind(a='x',b='y')
  [,1]
a "x"
b "y"

> rbind(a='x',b=character())
  [,1]
a "x"

> rbind(a=character(),b=character())

a
b

The first and third idiom are fine.  The result of the second one, in
which we rbind() a length-one to  a length-zero vector, is desirable
IMO on the grounds that the content of a two-row matrix cannot be
defined sensibly, so R takes the perfectly reasonable stance of
deciding to ignore the second argument...which carries with it the
implication that the name ('b')  be ignored too.  If the second
argument *could* be recycled, I would want the name, otherwise I
wouldn't.  And this is what R does.

best wishes,


hankin.ro...@gmail.com



hankin.ro...@gmail.com




On Fri, May 17, 2019 at 9:06 AM Hadley Wickham  wrote:
>
> The existing behaviour seems intuitive to me.  I would consider these
> invariants for n vector x_i's each with size m:
>
> * nrow(rbind(x_1, x_2, ..., x_n)) equals n
> * ncol(rbind(x_1, x_2, ..., x_n)) equals m
>
> Additionally, wouldn't you expect rbind(x_1[i], x_2[i]) to equal
> rbind(x_1, x_2)[, i, drop = FALSE] ?
>
> Hadley
>
> On Thu, May 16, 2019 at 3:26 PM Gabriel Becker  wrote:
> >
> > Hi all,
> >
> > Apologies if this has been asked before (a quick google didn't  find it for
> > me),and I know this is a case of behaving as documented but its so
> > unintuitive (to me at least) that I figured I'd bring it up here anyway. I
> > figure its probably going to not be changed,  but I'm happy to submit a
> > patch if this is something R-core feels can/should change.
> >
> > So I recently got bitten by the fact that
> >
> > > nrow(rbind(character(), character()))
> >
> > [1] 2
> >
> >
> > I was checking whether the result of an rbind call had more than one row,
> > and that unexpected returned true, causing all sorts of shenanigans
> > downstream as I'm sure you can imagine.
> >
> > Now I know that from ?rbind
> >
> > For ‘cbind’ (‘rbind’), vectors of zero length (including ‘NULL’)
> > >
> > >  are ignored unless the result would have zero rows (columns), for
> > >
> > >  S compatibility.  (Zero-extent matrices do not occur in S3 and are
> > >
> > >  not ignored in R.)
> > >
> >
> > But there's a couple of things here. First, for the rowbind  case this
> > reads as "if there would be zero columns,  the vectors will not be
> > ignored". This wording implies to me that not ignoring the vectors is a
> > remedy to the "problem" of the potential for a zero-column return, but
> > thats not the case.  The result still has 0 columns, it just does not also
> > have zero rows. So even if the behavior is not changed, perhaps this
> > wording can be massaged for clarity?
> >
> > The other issue, which I admit is likely a problem with my intuition, but
> > which I don't think I'm alone in having, is that even if I can't have a 0x0
> > matrix (which is what I'd prefer) I would have expected/preferred a 1x0
> > matrix, the reasoning being that if we must avoid a 0x0 return value, we
> > would do the  minimum required to avoid, which is to not ignore the first
> > length 0 vector, to ensure a non-zero-extent matrix, but then ignore the
> > remaining ones as they contain information for 0 new rows.
> >
> > Of course I can program around this now that I know the behavior, but
> > again, its so unintuitive (even for someone with a fairly well developed
> > intuition for R's sometimes "quirky" behavior) that I figured I'd bring it
> > up.
> >
> > Thoughts?
> >
> > Best,
> > ~G
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
>
>
>
> --
> http://hadley.nz
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Underscores in package names

2019-08-09 Thread robin hankin
Having written the 'lorentz', 'Davies' and 'schwarzschild' packages,
I'm interested in packages that are named for a particular person.
There are (by my count) 34 packages on CRAN like this, with names that
are the surname of a particular (real) person.  Of these 34,  only 7
are capitalized.

hankin.ro...@gmail.com



hankin.ro...@gmail.com




On Sat, Aug 10, 2019 at 6:50 AM Gabriel Becker  wrote:
>
> On Fri, Aug 9, 2019 at 11:05 AM neonira Arinoem  wrote:
>
> > Won't it be better to have a convention that allows lowercase, dash,
> > underscore and dot as only valid characters for new package names and keep
> > the ancient format validation scheme for older package names?
> >
>
> Validation isn't the only thing we need to do wrt package names. we also
> need to detect them, and particularly,  in at least one case, extract them
> from package tarball filenames (which we also need to be able to
> detect/find).
>
> If we were writing a new language and people wanted to allow snake case in
> package names, sure, but we're talking about about changing how a small but
> package names and package tarballs have always (or at least a very long
> time, I didn't check) had the same form, and it seems expressive enough to
> me? I mean periods are allowed if you feel a strong need for something
> other than a letter.
>
> Note that this proposal would make mypackage_2.3.1 a valid *package name*,
> whose corresponding tarball name might be mypackage_2.3.1_2.3.2 after a
> patch. Yes its a silly example, but why allow that kind of ambiguity?
>
>
>
> For the record @Ben Bolker 
>
> Packages that mix case anywhere in their package name:
>
> > table(grepl("((^[a-z].*[A-Z])|(^[A-Z].*[a-z]))", row.names(a1)))
>
>
> FALSE  TRUE
>
>  8818  5932
>
>
> Packages which start with lower case and have at least one upper
>
> > table(grepl("((^[a-z].*[A-Z]))", row.names(a1)))
>
>
> FALSE  TRUE
>
> 12315  2435
>
>
> Packages which start with uppercase and have at least one lower
>
> > table(grepl("((^[A-Z].*[a-z]))", row.names(a1)))
>
>
> FALSE  TRUE
>
> 11253  3497
>
> Packages which take advantage of the above-mentioned legality of periods
>
> > table(grepl(".", row.names(a1), fixed=TRUE))
>
>
> FALSE  TRUE
>
> 14259   491
>
> Packages with pure lower-case alphabetic names
>
> > table(grepl("^[a-z]+$", row.names(a1)))
>
>
> FALSE  TRUE
>
>  7712  7038
>
>
> Packages with pure upper-case alphabetic names
>
> > table(grepl("^[A-Z]+$", row.names(a1)))
>
>
> FALSE  TRUE
>
> 13636  1114
>
>
> Package with at least one numeric digit in their name
>
> > table(grepl("[0-9]", row.names(a1)))
>
>
> FALSE  TRUE
>
> 14208   542
>
>
> It would be interesting to do an actual analysis of the changes in these
> trends over time, but I Really should be working, so that will have to
> either wait or be done by someone else.
> Best,
> ~G
>
>
>
> > This could be implemented by a single function, taking a strictNaming_b_1
> > parameter which defaults to true. Easy to use, and compliance results will
> > vary according to the parameter value, allowing strict compliance for new
> > package names and lazy compliance for older ones.
> >
> > Doing so allows to enforce a new package name convention while also
> > insuring continuity of compliance for already existing package names.
> >
> > Fabien GELINEAU alias Neonira
> >
> > Le ven. 9 août 2019 à 18:40, Kevin Wright  a écrit :
> >
> > > Please, no.  I'd also like to disallow uppercase letters in package
> > names.
> > > For instance, the cuteness of using a capital "R" in package names is
> > > outweighed by the annoyance of trying to remember which packages use an
> > > upper-case letter.
> > >
> > > On Thu, Aug 8, 2019 at 9:32 AM Jim Hester 
> > > wrote:
> > >
> > > > Are there technical reasons that package names cannot be snake case?
> > > > This seems to be enforced by `.standard_regexps()$valid_package_name`
> > > > which currently returns
> > > >
> > > >"[[:alpha:]][[:alnum:].]*[[:alnum:]]"
> > > >
> > > > Is there any technical reason this couldn't be altered to accept `_`
> > > > as well, e.g.
> > > >
> > > >   "[[:alpha:]][[:alnum:]._]*[[:alnum:]]"
> > > >
> > > > I realize that historically `_` has not always been valid in variable
> > > > names, but this has now been acceptable for 15+ years (since R 1.9.0 I
> > > > believe). Might we also allow underscores for package names?
> > > >
> > > > Jim
> > > >
> > > > __
> > > > R-devel@r-project.org mailing list
> > > > https://stat.ethz.ch/mailman/listinfo/r-devel
> > > >
> > >
> > >
> > > --
> > > Kevin Wright
> > >
> > > [[alternative HTML version deleted]]
> > >
> > > __
> > > R-devel@r-project.org mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-devel
> > >
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinf

Re: [Rd] specials issue, a heads up

2020-02-24 Thread robin hankin
Terry, speaking as a package author I would say that the package is the
primary unit of organisation of R functionality, and package considerations
should trump R style considerations.  Packages should be self-contained as
far as possible.

Having said that, many of my own packages use---shall we say---distinct
idiom which is easy to misunderstand.  My suggestion would be to document
the misunderstanding. Add the survival::coxph() expression you quote above
to coxph.Rd,  maybe under a \warning{} section, explaining both a
reasonable but wrong, and the correct way, to parse such constructions.

Best wishes

Robin






On Tue, Feb 25, 2020 at 2:56 AM Therneau, Terry M., Ph.D. via R-devel <
r-devel@r-project.org> wrote:

> I recently had a long argument wrt the survival package, namely that the
> following code
> didn't do what they expected, and so they reported it as a bug
>
>survival::coxph( survival::Surv(time, status) ~ age + sex +
> survival::strata(inst),
> data=lung)
>
> a. The Google R style guide  recommends that one put :: everywhere
> b. This breaks the recognition of cluster as a "special" in the terms
> function.
>
> I've been stubborn and said that their misunderstanding of how formulas
> work is not my
> problem.   But I'm sure that the issue will come up again, and multiple
> other packages
> will break.
>
> A big problem is that the code runs, it just gives the wrong answer.
>
> Suggestions?
>
> Terry T.
>
>
> [[alternative HTML version deleted]]
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] dput()

2020-02-28 Thread robin hankin
My interpretation of dput.Rd is that dput() gives an exact ASCII form
of the internal representation of an R object.  But:

 rhankin@cuttlefish:~ $ R --version
R version 3.6.2 (2019-12-12) -- "Dark and Stormy Night"
Copyright (C) 2019 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

[snip]

rhankin@cuttlefish:~ $ R --vanilla --quiet
> x <- sum(dbinom(0:20,20,0.35))
> dput(x)
1
> x-1
[1] -4.440892e-16
>
> x==1
[1] FALSE
>

So, dput(x) gives 1, but x is not equal to 1.  Can anyone advise?

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] dput()

2020-02-29 Thread robin hankin
 Thanks guys, I guess I should have referred to FAQ 7.31 (which I am
indeed very familiar with) to avoid misunderstanding.  I have always
used dput() to clarify 7.31-type issues.

The description in ?dput implies [to me at any rate] that there will
be no floating-point roundoff in its output.  I hadn't realised that
'deparsing' as discussed in dput.Rd includes precision roundoff
issues.

I guess the question I should have asked is close to Ben's: "How to
force dput() to return an exact representation of a floating point
number?".  Duncan's reply is the insight I was missing: exact decimal
representation of a double might not be possible (this had not
occurred to me).  Also, Duncan's suggestion of control = c("all",
"hexNumeric") looks good and I will experiment with this.

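For the record, a small illustration of that suggestion:

x <- sum(dbinom(0:20, 20, 0.35))
dput(x, control = c("all", "hexNumeric"))   # exact hexadecimal form of the double
y <- eval(parse(text = deparse(x, control = c("all", "hexNumeric"))))
identical(x, y)                             # TRUE: the value round-trips exactly
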
Best wishes

Robin




On Sun, Mar 1, 2020 at 6:22 AM Duncan Murdoch  wrote:
>
> On 29/02/2020 4:19 a.m., Ben Bolker wrote:
> >
> >   I think Robin knows about FAQ 7.31/floating point (author of
> > 'Brobdingnag', among other numerical packages).  I agree that this is
> > surprising (to me).
> >
> >To reframe this question: is there way to get an *exact* ASCII
> > representation of a numeric value (i.e., guaranteeing the restored value
> > is identical() to the original) ?
> >
> >   .deparseOpts has
> >
> > ‘"digits17"’: Real and finite complex numbers are output using
> >format ‘"%.17g"’ which may give more precision than the
> >default (but the output will depend on the platform and there
> >may be loss of precision when read back).
> >
> >... but this still doesn't guarantee that all precision is kept.
>
> "Using control = c("all", "hexNumeric") comes closest to making
> deparse() an inverse of parse(), as representing double and complex
> numbers as decimals may well not be exact. However, not all objects are
> deparse-able even with this option. A warning will be issued if the
> function recognizes that it is being asked to do the impossible."
>
> >
> >Maybe
> >
> >   saveRDS(x,textConnection("out","w"),ascii=TRUE)
> > identical(x,as.numeric(out[length(out)]))   ## TRUE
> >
> > ?
> >
> >
> >
> >
> > On 2020-02-29 2:42 a.m., Rui Barradas wrote:
> >> Hello,
> >>
> >> FAQ 7.31
> >>
> >> See also this StackOverflow post:
> >>
> >> https://stackoverflow.com/questions/9508518/why-are-these-numbers-not-equal
> >>
> >> Hope this helps,
> >>
> >> Rui Barradas
> >>
> >> Às 00:08 de 29/02/20, robin hankin escreveu:
> >>> My interpretation of dput.Rd is that dput() gives an exact ASCII form
> >>> of the internal representation of an R object.  But:
> >>>
> >>>rhankin@cuttlefish:~ $ R --version
> >>> R version 3.6.2 (2019-12-12) -- "Dark and Stormy Night"
> >>> Copyright (C) 2019 The R Foundation for Statistical Computing
> >>> Platform: x86_64-pc-linux-gnu (64-bit)
> >>>
> >>> [snip]
> >>>
> >>> rhankin@cuttlefish:~ $ R --vanilla --quiet
> >>>> x <- sum(dbinom(0:20,20,0.35))
> >>>> dput(x)
> >>> 1
> >>>> x-1
> >>> [1] -4.440892e-16
> >>>>
> >>>> x==1
> >>> [1] FALSE
> >>>>
> >>>
> >>> So, dput(x) gives 1, but x is not equal to 1.  Can anyone advise?
> >>>
> >>> __
> >>> R-devel@r-project.org mailing list
> >>> https://stat.ethz.ch/mailman/listinfo/r-devel
> >>>
> >>
> >> __
> >> R-devel@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-devel
> >
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> >
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] transpose of complex matrices in R

2010-07-30 Thread Robin Hankin

Hello everybody

When one is working with complex matrices, "transpose"  very nearly 
always means

*Hermitian* transpose, that is, A[i,j] <- Conj(A[j,i]).
One often writes A^* for the Hermitian transpose.

I have only once seen a  "real-life" case
where transposition does not occur simultaneously with complex conjugation.
And I'm not 100% sure that that wasn't a mistake.

Matlab and Octave sort of recognize this, as "A'" means the  Hermitian 
transpose of "A".


In R, this issue makes t(), crossprod(), and tcrossprod() pretty much 
useless to me.


OK, so what to do?  I have several options:

1.  define functions myt(), and mycrossprod() to get round the problem:
myt <- function(x){t(Conj(x))}

2.  Try to redefine t.default():

 t.default <- function(x){if(is.complex(x)){return(base::t(Conj(x)))} 
else {return(base::t(x))}}
(This fails because of infinite recursion, but I don't quite understand 
why).


3.  Try to define a t.complex() function:
t.complex <- function(x){t(Conj(x))}
(also fails because of recursion)

4. Try a kludgy workaround:
  t.complex <- function(x){t(Re(x)) - 1i*t(Im(x))}


Solution 1 is not good because it's easy to forget to use myt() rather
than t(), and it does not seem to be good OO practice.

As Martin Maechler points out, solution 2 (even if it worked as desired)
would break the code of everyone who writes a myt() function.

Solution 3 fails and solution 4 is kludgy and inefficient.

Does anyone have any better ideas?




--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] transpose of complex matrices in R

2010-07-30 Thread Robin Hankin

Hello Peter

thanks for this.


On 07/30/2010 11:01 AM, peter dalgaard wrote:


What's wrong with

   

t.complex <- function(x) t.default(Conj(x))
M <- matrix(rnorm(4)+1i*rnorm(4),2)
M
 


It's not going to help with the cross products though.

As a general matter, in my book, transpose is transpose and the other thing is called 
"adjoint". So another option is to use adj(A) for what you call myt(A), and 
then just remember to transcribe A^* to adj(A).

   


That's a good way to think about it.   Perhaps this is one case where
thinking too literally in terms of OO-style programming [ie wanting to
overload t()] is harmful.

I didn't realize until an email just now that octave has a transpose()
function which does *not* take the complex conjugate.   You live and learn!


I forget whether the cross products A^*A and AA^* have any special names in 
abstract linear algebra/functional analysis.

   


Well they sort of do.  I'd call A^*  %*% A an inner product, or possibly 
an Hermitian inner product.


Would it hurt to redefine crossprod(A,B) to mean t(Conj(A)) %*% B   and 
tcrossprod(A,B) to A %*% t(Conj(B))?


(we could include an optional  'Hermitian' argument to crossprod() in 
the complex case, defaulting to TRUE?)
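
(In the meantime, a user-level stopgap -- function names invented here, and
just a sketch -- might be

hcrossprod  <- function(A, B=A){ crossprod(Conj(A), B) }    # A^* %*% B
htcrossprod <- function(A, B=A){ tcrossprod(A, Conj(B)) }   # A %*% B^*

so that at least the conjugation cannot be forgotten.)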



rksh













--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 methods for rbind()

2010-10-26 Thread Robin Hankin
Hello.

I am trying to write an S4 method for rbind(). I have a class of objects
called 'mdm', and I want to be able to rbind() them to one another.

I do not want the method for rbind() to coerce anything to an mdm object.
I want rbind(x1,x2,x1,x2) to work as expected [ie rbind() should take any
number of arguments].

This is what I have so far:


setGeneric(".rbind_pair", function(x,y){standardGeneric(".rbind_pair")})
setMethod(".rbind_pair", c("mdm", "mdm"), function(x,y){.mdm.cPair(x,y)})
setMethod(".rbind_pair", c("mdm", "ANY"),
function(x,y){.mdm_rbind_error(x,y)})
setMethod(".rbind_pair", c("ANY", "mdm"),
function(x,y){.mdm_rbind_error(x,y)})
setMethod(".rbind_pair", c("ANY", "ANY"),
function(x,y){.mdm_rbind_error(x,y)})

".mdm_rbind_error" <- function(x,y){
stop("an mdm object may only be rbinded to another mdm object")
}

".mdm.rbind_pair" <- function(x,y){
stopifnot(compatible(x,y))
mdm(rbind(xold(x),xold(y)),c(types(x),types(y))) # this is the "meat" of
the rbind functionality
}

setGeneric("rbind")
setMethod("rbind", signature="mdm", function(x, ...) {
if(nargs()<3)
.mdm_rbind_pair(x,...)
else
.mdm_rbind_pair(x, Recall(...))
})


But


LE223:~/packages% sudo R CMD INSTALL ./multivator
[snip]
Creating a new generic function for "tail" in "multivator"
Error in conformMethod(signature, mnames, fnames, f, fdef, definition) :
in method for ‘rbind’ with signature ‘deparse.level="mdm"’: formal
arguments (... = "mdm", deparse.level = "mdm") omitted in the method
definition cannot be in the signature
Error : unable to load R code in package 'multivator'
ERROR: lazy loading failed for package ‘multivator’
* removing ‘/usr/local/lib64/R/library/multivator’
* restoring previous ‘/usr/local/lib64/R/library/multivator’
LE223:~/packages%


I can't understand what the error message is trying to say.

Can anyone advise on a fix for this?


-- 
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] S4 methods for rbind()

2010-10-26 Thread Robin Hankin
Thank you very much for this Martin.

It works!

There were two gotchas:

1.  One has to add 'deparse.level=1' to the setMethod() function
argument list

2.  Adding deparse.level=1 increments nargs() ...so the 3 should be 4.

But now it works!

Best wishes

Robin




On 26/10/10 13:49, Martin Morgan wrote:
> On 10/26/2010 03:53 AM, Robin Hankin wrote:
>   
>> Hello.
>>
>> I am trying to write an S4 method for rbind(). I have a class of objects
>> called 'mdm', and I want to be able to rbind() them to one another.
>>
>> I do not want the method for rbind() to coerce anything to an mdm object.
>> I want rbind(x1,x2,x1,x2) to work as expected [ie rbind() should take any
>> number of arguments].
>>
>> This is what I have so far:
>>
>>
>> setGeneric(".rbind_pair", function(x,y){standardGeneric(".rbind_pair")})
>> setMethod(".rbind_pair", c("mdm", "mdm"), function(x,y){.mdm.cPair(x,y)})
>> setMethod(".rbind_pair", c("mdm", "ANY"),
>> function(x,y){.mdm_rbind_error(x,y)})
>> setMethod(".rbind_pair", c("ANY", "mdm"),
>> function(x,y){.mdm_rbind_error(x,y)})
>> setMethod(".rbind_pair", c("ANY", "ANY"),
>> function(x,y){.mdm_rbind_error(x,y)})
>>
>> ".mdm_rbind_error" <- function(x,y){
>> stop("an mdm object may only be rbinded to another mdm object")
>> }
>>
>> ".mdm.rbind_pair" <- function(x,y){
>> stopifnot(compatible(x,y))
>> mdm(rbind(xold(x),xold(y)),c(types(x),types(y))) # this is the "meat" of
>> the rbind functionality
>> }
>>
>> setGeneric("rbind")
>> setMethod("rbind", signature="mdm", function(x, ...) {
>> if(nargs()<3)
>> .mdm_rbind_pair(x,...)
>> else
>> .mdm_rbind_pair(x, Recall(...))
>> })
>>
>>
>> But
>>
>>
>> LE223:~/packages% sudo R CMD INSTALL ./multivator
>> [snip]
>> Creating a new generic function for "tail" in "multivator"
>> Error in conformMethod(signature, mnames, fnames, f, fdef, definition) :
>> in method for ‘rbind’ with signature ‘deparse.level="mdm"’: formal
>> arguments (... = "mdm", deparse.level = "mdm") omitted in the method
>> definition cannot be in the signature
>> 
> Hi Robin
>
> try getGeneric("rbind") and showMethods("rbind") after your setGeneric;.
> The generic is dispatching on 'deparse.level'. 'deparse.level' is
> missing from your method definition, and so can't be used as the
> signature for your method. Try to set the ... explicitly as the
> signature to be used for dispatch.
>
> setGeneric("rbind",
> function(..., deparse.level=1) standardGeneric("rbind"),
> signature = "...")
>
> Martin
>
>   
>> Error : unable to load R code in package 'multivator'
>> ERROR: lazy loading failed for package ‘multivator’
>> * removing ‘/usr/local/lib64/R/library/multivator’
>> * restoring previous ‘/usr/local/lib64/R/library/multivator’
>> LE223:~/packages%
>>
>>
>> I can't understand what the error message is trying to say.
>>
>> Can anyone advise on a fix for this?
>>
>>
>> 
>
>   


-- 
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 package warning

2010-11-10 Thread Robin Hankin
Hello everyone.  R-2.12.0, suse linux 11.3.

I am debugging a package that uses S4 methods
and R CMD check gives the following warning:

> Warning in methods::findMethods(g, env) :
>   non-generic function 'mdm' given to findMethods()
> See the information on DESCRIPTION files in the chapter 'Creating R
> packages' of the 'Writing R Extensions' manual.

I don't see anything obvious in that part of the R-exts but
FWIW, here is my DESCRIPTION file:

> Package: multivator
> Type: Package
> Title: A multivariate emulator
> Version: 1.0-1
> Depends: R(>= 2.10.0), emulator, methods, utils
> Date: 2009-10-27
> Author: Robin K. S. Hankin
> Maintainer:  
> Description: A multivariate generalization of the emulator package
> License: GPL-2
> LazyLoad: yes


I think that the lines in question in my package are:

> setClass("mdm", # "mdm" == "multivariate design matrix"
>  representation = representation(
>xold  = "matrix",
>types = "factor"
>)
>  )
>
>
> setGeneric("mdm",function(xold,types){standardGeneric("mdm")})
> setMethod("mdm",signature("matrix","factor"),function(xold, types){
>   new("mdm", xold=xold, types=types)
> } )

which appear to execute without warning on a virgin console.  In the
package, there are three
or four other S4 classes which are on the same footing as the mdm class,
but do not appear to generate a warning from R CMD check.
The same happens AFAICS on R-2.13, 53543

Can anyone advise on how to deal with the warning?

thank you

Robin




-- 
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] NA printing

2011-01-25 Thread Robin Hankin
Hi.

I'm writing a print method for an object that includes a numeric matrix
for which
the lower diagonal elements are not meaningful.  So I make the lower
diagonal of my matrix NA and print it.

But my co-author does not like NA there and wants a dash.

I have tried coercing the matrix to character, essentially by
M[is.na(M)] <- "-" but this interferes with the pleasing
column alignment for numerical matrices.
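
A little sketch of the problem (made-up numbers):

M <- matrix(c(1.5, NA, 10.25, 2), 2, 2)
M2 <- M
M2[is.na(M2)] <- "-"   # M2 is now a character matrix
print(M)               # numeric formatting; decimal points line up
print(M2)              # quoted strings; decimal alignment is lost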

How do I make R print "-" instead of "NA"  for NA entries in a numeric
matrix, without altering the vertical alignment?  Is there such a command
as

options(NA_string = "-")

available?

best wishes

Robin Hankin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] large vignette problem

2011-02-12 Thread robin hankin
Hello

I am trying to get one of my packages to be less than 5Mb in size, and
it is currently
72Mb installed.  It is big because the single vignette includes half a
dozen very large PDF
images.  The PDF files are created as part of the Sweave process.

Using jpg images instead of PDFs is acceptable  in terms of picture
quality (although
not perfect), and results in a very much smaller vignette.

OK, so here’s my first plan and I’m not sure if it’s optimal:

1.  Produce the .jpg files by hand from my own PDF files.
2.  Change the .Rnw file so that it doesn’t produce the PDF files and
the vignette uses the .jpg files instead.
3. ship the package with the .jpg files and the modified .Rnw file.

This is not ideal because it’s not reproducible: only *I* can create
the jpg files from the original .Rnw file, as the new .Rnw file does
not produce the PDF files.

Or can I somehow coerce Sweave into producing jpg files instead of PDF?

Can anyone advise?

thanks

Robin


-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] large vignette problem

2011-02-12 Thread robin hankin
Thanks guys.

It works perfectly

best wishes

rksh

On Sun, Feb 13, 2011 at 2:00 PM, Dario Strbenac
 wrote:
> I usually do :
>
> <<>>=
> png("xyPlot.png", width = 800, height = 800)
> @
>
> <<>>=
> ...      ...       ... # Code goes here.
> @
>
> <<>>=
> null <- dev.off()
> @
>
> \begin{figure}
>    \begin{center}
>        \includegraphics{xyPlot.png}
>    \end{center}
> \end{figure}
>
>  Original message ----
>>Date: Sun, 13 Feb 2011 10:54:37 +1300
>>From: r-devel-boun...@r-project.org (on behalf of robin hankin 
>>)
>>Subject: [Rd] large vignette problem
>>To: r-devel@r-project.org
>>
>>Hello
>>
>>I am trying to get one of my packages to be less than 5Mb in size, and
>>it is currently
>>72Mb installed.  It is big because the single vignette includes half a
>>dozen very large PDF
>>images.  The PDF files are created as part of the Sweave process.
>>
>>Using jpg images instead of PDFs is acceptable  in terms of picture
>>quality (although
>>not perfect), and results in a very much smaller vignette.
>>
>>OK, so here’s my first plan and I’m not sure if it’s optimal::
>>
>>1.  Produce the .jpg files by hand from my own PDF files.
>>2.  Change the .Rnw file so that it doesn’t produce the PDF files and
>>the vignette uses the .jpg files instead.
>>3. ship the package with the .jpg files and the modified .Rnw file.
>>
>>This is not ideal because it’s not reproducible: only *I* can create
>>the jpg files from the original .Rnw file, as the new .Rnw file does
>>not produce the PDF files.
>>
>>Or can I somehow coerce Sweave into producing jpg files instead of PDF?
>>
>>Can anyone advise?
>>
>>thanks
>>
>>Robin
>>
>>
>>--
>>Robin Hankin
>>Uncertainty Analyst
>>hankin.ro...@gmail.com
>>
>>__
>>R-devel@r-project.org mailing list
>>https://stat.ethz.ch/mailman/listinfo/r-devel
>
>
> --
> Dario Strbenac
> Research Assistant
> Cancer Epigenetics
> Garvan Institute of Medical Research
> Darlinghurst NSW 2010
> Australia
>



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 problems

2011-02-15 Thread robin hankin
Hello everybody

[R-2.12.1]

I am having difficulty dealing with Oarray objects.
I have a generic function, call it foo(), and I wish
to define  a method for Oarray objects.

I do not have or want a method for regular arrays [actually,
I want to coerce to an Oarray, and give a warning].

But setMethod() does not behave as desired, giving
an error message when I try to define a method for
Oarray objects.

Also, if I define a method for array objects, this does not
give an error message, but neither does it behave as desired,
as the method is not found when  passing an Oarray object
to foo().


LE110:~/packages% R --vanilla --quiet
> library(Oarray)
> setGeneric("foo",function(x){standardGeneric("foo")})
[1] "foo"
>  setMethod("foo","Oarray",function(x){x})
in method for ‘foo’ with signature ‘"Oarray"’: no definition for class "Oarray"
[1] "foo"
> setMethod("foo","array",function(x){x})
[1] "foo"
> a <- Oarray(0,2:3)
> is.array(a)
[1] TRUE
> foo(a)
Error in function (classes, fdef, mtable)  :
  unable to find an inherited method for function "foo", for signature "Oarray"

Three questions:

Why does the first call to setMethod() give an error message?
Why does (a) not find the method defined for arrays, even though 'a'
is an array?
How can I make "foo(a)" behave as desired when 'a' is an object of
class 'Oarray'?
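
(One guess I have not yet tested: perhaps the S4 machinery needs to be told
about the S3 class "Oarray" first, something along the lines of

setOldClass("Oarray")
setMethod("foo", "Oarray", function(x){x})

but I do not know whether that is the sanctioned approach.)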






-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Dependencies problem

2011-02-17 Thread robin hankin
Dear List

I am developing a package which needs another package purely
for one of the datasets it uses.

What is Best Practice for doing this?  Simply including it in
the Dependencies: list in the DESCRIPTION file is giving
me curious errors from R CMD check or R CMD INSTALL:

Error in code2LazyLoadDB(package, lib.loc = lib.loc, keep.source =
keep.source,  :
  name space must not be loaded.
ERROR: lazy loading failed for package ‘MM’

This error seems to be system-dependent.  All of the depencies are
packages which
are on CRAN and AFAICS  pass R CMD check.

Can anyone advise?


-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Sexpr problem

2011-02-25 Thread robin hankin
Hi.

I am having difficulty  making \Sexpr work as desired.
 Specifically, the pdf and the text  versions of the help system
differ, and I can't reproduce the example on page 63 of
the R Journal article by  Murdoch and Urbanek (Vol 1/2,
December 2009).

Is there an example package that I could examine for Best Practice?

thanks



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] vignette typesetting issue

2011-03-01 Thread robin hankin
Hi

I am preparing a vignette, and I am finding that LaTeX ties (that is,
the tilde symbol, "~", used to tell LaTeX not to make a newline
in the output), are appearing as actual tildes.  This is not desired behaviour
for me.  Thus the PDF includes things like this:


". . . taken directly from~Oakley (1999)".

[this typeset as "taken directly from~\cite{oakley1999}..."].

I do not want the tilde to appear in the PDF file.
I do not want to remove the tilde symbols because then latex would
be free to make a line break between "from" and "Oakley", which
is poor form.

The issue does not arise on my Mac, and I expect that it is down
to some latex setting or style file.

Does anyone recognize this problem?

Can anyone advise?



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Using GSL Routines

2011-04-13 Thread robin hankin
Hi

the gsl R package includes many wrappers which you may find useful,
but if you write new ones please let me know and I'll include them
in a future release.

best wishes

Robin

On Thu, Apr 14, 2011 at 5:49 PM, Mohit Dayal  wrote:
> Dear R-programmers,
>
> I am trying out certain methods in R, and the statistics require me to
> calculate n-(sample size) dimensional equations. They are not really very
> hard to solve - my home-brew implentation of Newton-Raphson in R succeeds
> most of time with simulated data. (Note that I am assured of a unique
> solution by theory). Problem comes in with real data, for which I should
> really implement a good line search (convergence issues). Being lazy, i
> would like to link to the GSL routines which are of course faster and more
> reliable.
>
> My question is should i use the C - GSL routines or the Python ones in
> NumPy? My major concern is the portability of the code i write: really dont
> want users to have to install a bunch of software just to use my package.
> (Im looking at Windows here)
>
> Alternatively, should i just hack out the code (fsolve) and put it in my
> package?
>
> Thanks for the advice,
> Mohit Dayal
> Applied Statistics & Computing Lab
> ISB
>
>        [[alternative HTML version deleted]]
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] NAMESPACE problems

2011-08-02 Thread robin hankin
Hi.

I am having difficulty following section 1.6.6 of the R-extensions manual.

I am trying to update the Brobdingnag package to include a NAMESPACE file (the
untb package requires the Brobdingnag package).

Without the NAMESPACE file, the package passes R CMD check cleanly.

However, if I include a NAMESPACE file, even an empty one, R CMD check
gives the following error in 00install.out:



wt118:~/packages% cat Brobdingnag.Rcheck/00install.out
* installing *source* package ‘Brobdingnag’ ...
** R
** inst
** preparing package for lazy loading
Creating a generic for ‘max’ in package ‘Brobdingnag’
(the supplied definition differs from and overrides the implicit generic
in package ‘base’: Classes: "nonstandardGenericFunction", "standardGeneric")
Creating a generic for ‘min’ in package ‘Brobdingnag’
(the supplied definition differs from and overrides the implicit generic
in package ‘base’: Classes: "nonstandardGenericFunction", "standardGeneric")
Creating a generic for ‘range’ in package ‘Brobdingnag’
(the supplied definition differs from and overrides the implicit generic
in package ‘base’: Classes: "nonstandardGenericFunction", "standardGeneric")
Creating a generic for ‘prod’ in package ‘Brobdingnag’
(the supplied definition differs from and overrides the implicit generic
in package ‘base’: Classes: "nonstandardGenericFunction", "standardGeneric")
Creating a generic for ‘sum’ in package ‘Brobdingnag’
(the supplied definition differs from and overrides the implicit generic
in package ‘base’: Classes: "nonstandardGenericFunction", "standardGeneric")
Error in setGeneric(f, where = where) :
  must supply a function skeleton, explicitly or via an existing function
Error : unable to load R code in package 'Brobdingnag'
ERROR: lazy loading failed for package ‘Brobdingnag’
* removing ‘/Users/rksh/packages/Brobdingnag.Rcheck/Brobdingnag’
wt118:~/packages%


AFAICS, all the setGeneric() calls are pretty much like this:

setGeneric("getX",function(x){standardGeneric("getX")})



Can anyone advise?


thank you

Robin


-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] repeatable segfault

2011-09-05 Thread robin hankin
Hi.  macosx 10.6.8

With R-2.13.1 and also revision 56948 I get the following repeatable segfault:



wt118:~% R --vanilla --quiet
> R.Version()
$platform
[1] "x86_64-apple-darwin9.8.0"

$arch
[1] "x86_64"

$os
[1] "darwin9.8.0"

$system
[1] "x86_64, darwin9.8.0"

$status
[1] ""

$major
[1] "2"

$minor
[1] "13.1"

$year
[1] "2011"

$month
[1] "07"

$day
[1] "08"

$`svn rev`
[1] "56322"

$language
[1] "R"

$version.string
[1] "R version 2.13.1 (2011-07-08)"

> eigen(crossprod(matrix(1:2000, 50)) + (0+0i), T, T)

 *** caught segfault ***
address 0x1038000a8, cause 'memory not mapped'

Traceback:
 1: .Call("La_rs_cmplx", x, only.values, PACKAGE = "base")
 2: eigen(crossprod(matrix(1:2000, 50)) + (0 + (0+0i)), T, T)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection: 2
wt118:~%





-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] array extraction

2011-09-27 Thread robin hankin
hello everyone.

Look at the following R idiom:

 a <- array(1:30,c(3,5,2))
 M <- (matrix(1:15,c(3,5)) %% 4) < 2
 a[M,] <- 0

Now, I think that "a[M,]" has an unambiguous meaning (to a human).
However, the last line doesn't work as desired, but I expected it
to...and it recently took me an indecent amount of time to debug an
analogous case.  Just to be explicit, I would expect a[M,] to extract
a[i,j,] where M[i,j] is TRUE.  (Extract.Rd is perfectly clear here, and R is
behaving as documented).

The best I could cobble together was the following:

 ind <- which(M,arr.ind=TRUE)
 n <- 3
 ind <- cbind(kronecker(ind,rep(1,dim(a)[n])),rep(seq_len(dim(a)[n]),nrow(ind)))
 a[ind] <- 0


but the intent is hardly clear, certainly compared to "a[M,]"

I've been pondering how to implement such indexing, and its
generalization.

Suppose 'a' is a seven-dimensional array, and M1 a matrix and M2 a
three-dimensional array (both Boolean).  Then "a[,M1,,M2]" is a
natural generalization of the above.  I would want a[,M1,,M2] to
extract a[i1,i2,i3,i4,i5,i6,i7] where M1[i2,i3] and M[i5,i6,i7] are
TRUE.

One would need all(dim(a)[2:3] == dim(M1)) and all(dim(a)[5:7] ==
dim(M2)) for consistency.

Can any R-devel subscribers advise?




-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] array extraction

2011-09-27 Thread robin hankin
thank you Simon.

I find a[M] working to be unexpected, but consistent with (a close
reading of) Extract.Rd

Can we reproduce a[,M]?

[I would expect this to extract a[,j,k] where M[j,k] is TRUE]

try this:


> a <- array(1:30,c(3,5,2))
> M <- matrix(1:10,5,2) %% 3==1
> a[M]
 [1]  1  4  7 10 11 14 17 20 21 24 27 30

This is not doing what I would want a[,M] to do.




I'll checkout afill() right now

best wishes


Robin


On Wed, Sep 28, 2011 at 10:39 AM, Simon Knapp  wrote:
> a[M] gives the same as your `cobbled together' code.
>
> On Wed, Sep 28, 2011 at 6:35 AM, robin hankin 
> wrote:
>>
>> hello everyone.
>>
>> Look at the following R idiom:
>>
>>  a <- array(1:30,c(3,5,2))
>>  M <- (matrix(1:15,c(3,5)) %% 4) < 2
>>  a[M,] <- 0
>>
>> Now, I think that "a[M,]" has an unambiguous meaning (to a human).
>> However, the last line doesn't work as desired, but I expected it
>> to...and it recently took me an indecent amount of time to debug an
>> analogous case.  Just to be explicit, I would expect a[M,] to extract
>> a[i,j,] where M[i,j] is TRUE.  (Extract.Rd is perfectly clear here, and R
>> is
>> behaving as documented).
>>
>> The best I could cobble together was the following:
>>
>>  ind <- which(M,arr.ind=TRUE)
>>  n <- 3
>>  ind <-
>> cbind(kronecker(ind,rep(1,dim(a)[n])),rep(seq_len(dim(a)[n]),nrow(ind)))
>>  a[ind] <- 0
>>
>>
>> but the intent is hardly clear, certainly compared to "a[M,]"
>>
>> I've been pondering how to implement such indexing, and its
>> generalization.
>>
>> Suppose 'a' is a seven-dimensional array, and M1 a matrix and M2 a
>> three-dimensional array (both Boolean).  Then "a[,M1,,M2]" is a
>> natural generalization of the above.  I would want a[,M1,,M2] to
>> extract a[i1,i2,i3,i4,i5,i6,i7] where M1[i2,i3] and M[i5,i6,i7] are
>> TRUE.
>>
>> One would need all(dim(a)[2:3] == dim(M1)) and all(dim(a)[5:7] ==
>> dim(M2)) for consistency.
>>
>> Can any R-devel subscribers advise?
>>
>>
>>
>>
>> --
>> Robin Hankin
>> Uncertainty Analyst
>> hankin.ro...@gmail.com
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>
>



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R (development) changes in arith, logic, relop with (0-extent) arrays

2016-09-07 Thread robin hankin
>
> > m1 + 1:2  # ->  2:3  but now with warning to  "become ERROR"
> > tools::assertError(m1 & 1:2)# ERR: dims [product 1] do not match the
> length of object [2]
> > tools::assertError(m1 < 1:2)# ERR:  (ditto)
> > ##
> > ## non-0-length arrays combined with {NULL or double() or ...} *fail*
>
> > ### Length-1 arrays:  Arithmetic with |vectors| > 1  treated array
> as scalar
> > m1 + NULL # gave  numeric(0) in R <= 3.3.x --- still, *but* w/
> warning to "be ERROR"
> > try(m1 > NULL)# gave  logical(0) in R <= 3.3.x --- an *error*
> now in R >= 3.4.0
> > tools::assertError(m1 & NULL)# gave and gives error
> > tools::assertError(m1 | double())# ditto
> > ## m2 was slightly different:
> > tools::assertError(m2 + NULL)
> > tools::assertError(m2 & NULL)
> > try(m2 == NULL) ## was logical(0) in R <= 3.3.x; now error as above!
>
> > 
> 
>
>
> > Note that in R's own  'nls'  sources, there was one case of
> > situation '2)' above, i.e. a  1x1-matrix was used as a "scalar".
>
> > In such cases, you should explicitly coerce it to a vector,
> > either ("self-explainingly") by  as.vector(.), or as I did in
> > the nls case  by  c(.) :  The latter is much less
> > self-explaining, but nicer to read in mathematical formulae, and
> > currently also more efficient because it is a .Primitive.
>
> > Please use R-devel with your code, and let us know if you see
> > effects that seem adverse.
>
> I've been slightly surprised (or even "frustrated") by the empty
> reaction on our R-devel list to this post.
>
> I would have expected some critique, may be even some praise,
> ... in any case some sign people are "thinking along" (as we say
> in German).
>
> In the mean time, I've actually thought along the one case which
> is last above:  The   (binary operation) between a
> non-0-length array and a 0-length vector (and NULL which should
> be treated like a 0-length vector):
>
> R <= 3.3.1  *is* quite inconsistent with these:
>
>
> and my proposal above (implemented in R-devel, since Sep.5) would give an
> error for all these, but instead, R really could be more lenient here:
> A 0-length result is ok, and it should *not* inherit the array
> (dim, dimnames), since the array is not of length 0. So instead
> of the above [for the very last part only!!], we would aim for
> the following. These *all* give an error in current R-devel,
> with the exception of 'm1 + NULL' which "only" gives a "bad
> warning" :
>
> 
>
> m1 <- matrix(1,1)
> m2 <- matrix(1,2)
>
> m1 + NULL #numeric(0) in R <= 3.3.x ---> OK ?!
> m1 > NULL #logical(0) in R <= 3.3.x ---> OK ?!
> try(m1 & NULL)# ERROR in R <= 3.3.x ---> change to logical(0)  ?!
> try(m1 | double())# ERROR in R <= 3.3.x ---> change to logical(0)  ?!
> ## m2 slightly different:
> try(m2 + NULL)  # ERROR in R <= 3.3.x ---> change to double(0)  ?!
> try(m2 & NULL)  # ERROR in R <= 3.3.x ---> change to logical(0)  ?!
> m2 == NULL # logical(0) in R <= 3.3.x ---> OK ?!
>
> 
>
> This would be slightly more back-compatible than the currently
> implemented proposal. Everything else I said remains true, and
> I'm pretty sure most changes needed in packages would remain to be done.
>
> Opinions ?
>
>
>
> > In some case where R-devel now gives an error but did not
> > previously, we could contemplate giving another  "warning
> >  'to become ERROR'" if there was too much breakage,  though
> > I don't expect that.
>
>
> > For the R Core Team,
>
> > Martin Maechler,
> > ETH Zurich
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>



-- 
Robin Hankin
Neutral theorist
hankin.ro...@gmail.com

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

Re: [Rd] R (development) changes in arith, logic, relop with (0-extent) arrays

2016-09-08 Thread robin hankin
Could we take a cue from min() and max()?

> x <- 1:10
> min(x[x>7])
[1] 8
> min(x[x>11])
[1] Inf
Warning message:
In min(x[x > 11]) : no non-missing arguments to min; returning Inf
>

As ?min says, this is implemented to preserve transitivity, and this
makes a lot of sense.
I think the issuing of a warning here is a good compromise; I can
always turn off warnings if I want.

I find this behaviour of min() and max() to be annoying in the *right*
way: it annoys me precisely when I need to be
annoyed, that is, when I haven't thought through the consequences of
sending zero-length arguments.


On Fri, Sep 9, 2016 at 6:00 AM, Paul Gilbert  wrote:
>
>
> On 09/08/2016 01:22 PM, Gabriel Becker wrote:
>>
>> On Thu, Sep 8, 2016 at 10:05 AM, William Dunlap  wrote:
>>
>>> Shouldn't binary operators (arithmetic and logical) should throw an error
>>> when one operand is NULL (or other type that doesn't make sense)?  This
>>> is
>>> a different case than a zero-length operand of a legitimate type.  E.g.,
>>>  any(x < 0)
>>> should return FALSE if x is number-like and length(x)==0 but give an
>>> error
>>> if x is NULL.
>>>
>> Bill,
>>
>> That is a good point. I can see the argument for this in the case that the
>> non-zero length is 1. I'm not sure which is better though. If we switch
>> any() to all(), things get murky.
>>
>> Mathematically, all(x<0) is TRUE if x is length 0 (as are all(x==0), and
>> all(x>0)), but the likelihood of this being a thought-bug on the author's
>> part is exceedingly high, imho.
>
>
> I suspect there may be more R users than you think that understand and use
> vacuously true in code. I don't really like the idea of turning a perfectly
> good and properly documented mathematical test into an error in order to
> protect against a possible "thought-bug".
>
> Paul
>
>
> So the desirable behavior seems to depend
>>
>> on the angle we look at it from.
>>
>> My personal opinion is that x < y with length(x)==0 should fail if
>> length(y) > 1, at least, and I'd be for it being an error even if y is length 1,
>>
>> though I do acknowledge this is more likely (though still quite unlikely
>> imho) to be the intended behavior.
>>
>> ~G
>>
>>>
>>> I.e., I think the type check should be done before the length check.
>>>
>>>
>>> Bill Dunlap
>>> TIBCO Software
>>> wdunlap tibco.com
>>>
>>> On Thu, Sep 8, 2016 at 8:43 AM, Gabriel Becker 
>>> wrote:
>>>
>>>> Martin,
>>>>
>>>> Like Robin and Oliver I think this type of edge-case consistency is
>>>> important and that it's fantastic that R-core - and you personally - are
>>>> willing to tackle some of these "gotcha" behaviors. "Little" stuff like
>>>> this really does combine to go a long way to making R better and better.
>>>>
>>>> I do wonder a  bit about the
>>>>
>>>> x = 1:2
>>>>
>>>> y = NULL
>>>>
>>>> x < y
>>>>
>>>> case.
>>>>
>>>> Returning a logical of length 0 is more backwards compatible, but is it
>>>> ever what the author actually intended? I have trouble thinking of a
>>>> case
>>>> where that less-than didn't carry an implicit assumption that y was
>>>> non-NULL.  I can say that in my own code, I've never hit that behavior
>>>> in
>>>> a
>>>> case that wasn't an error.
>>>>
>>>> My vote (unless someone else points out a compelling use for the
>>>> behavior)
>>>> is for the to throw an error. As a developer, I'd rather things like
>>>> this
>>>> break so the bug in my logic is visible, rather than  propagating as the
>>>> 0-length logical is &'ed or |'ed with other logical vectors, or used to
>>>> subset, or (in the case it should be length 1) passed to if() (if throws
>>>> an
>>>> error now, but the rest would silently "work").
>>>>
>>>> Best,
>>>> ~G
>>>>
>>>> On Thu, Sep 8, 2016 at 3:49 AM, Martin Maechler <
>>>> maech...@stat.math.ethz.ch>
>>>> wrote:
>>>>
>>>>>>>>>> robin hankin 
>>>>>>>>>> on Thu, 8 Sep 2016 10:05:21 +12

Re: [Rd] Unexpected behavior of '[' in an apply instruction

2021-02-12 Thread robin hankin
Rui

> x <- array(runif(60), dim = c(10, 2, 3))
> array(x[slice.index(x,1) %in% 1:5],c(5,dim(x)[-1]))

(I don't see this on stackoverflow; should I post this there too?)  Most of
the magic package is devoted to handling arrays of arbitrary dimensions and
this functionality might be good to include if anyone would find it useful.

HTH

Robin





On Sat, Feb 13, 2021 at 12:26 AM Rui Barradas  wrote:

> Hello,
>
> This came up in this StackOverflow post [1].
>
> If x is an array with n dimensions, how to subset by just one dimension?
> If n is known, it's simple, add the required number of commas in their
> proper places.
> But what if the user doesn't know the value of n?
>
> The example below has n = 3, and subsets by the 1st dim. The apply loop
> solves the problem as expected but note that the index i has length(i) > 1.
>
>
> x <- array(1:60, dim = c(10, 2, 3))
>
> d <- 1L
> i <- 1:5
> apply(x, MARGIN = -d, '[', i)
> x[i, , ]
>
>
> If length(i) == 1, argument drop = FALSE doesn't work as I expected it
> to work, only the other way does:
>
>
> i <- 1L
> apply(x, MARGIN = -d, '[', i, drop = FALSE)
> x[i, , drop = FALSE]
>
>
> What am I missing?
>
> [1]
>
> https://stackoverflow.com/questions/66168564/is-there-a-native-r-syntax-to-extract-rows-of-an-array
>
> Thanks in advance,
>
> Rui Barradas
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] eigen(symmetric=TRUE) for complex matrices

2013-06-17 Thread robin hankin
R-3.0.1 rev 62743, binary downloaded from CRAN just now; macosx 10.8.3

Hello,

eigen(symmetric=TRUE) behaves strangely when given complex matrices.


The following two lines define 'A', a 100x100 (real) symmetric matrix
which theoretical considerations [Bochner's theorem] show to be positive
definite:

jj <- matrix(0,100,100)
A <- exp(-0.1*(row(jj)-col(jj))^2)


A's being positive-definite is important to me:


  > min(eigen(A,T,T)$values)
[1] 2.521153e-10
>

Coercing A to a complex matrix should make no difference, but makes
eigen() return the wrong answer:

> min(eigen(A+0i,T,T)$values)
[1] -0.359347
>

This is very, very wrong.

I would expect these two commands to return identical values, up to
numerical precision.   Compare svd():


> dput(min(eigen(A,T,T)$values))
2.52115250343783e-10
> dput(min(eigen(A+0i,T,T)$values))
-0.359346984206908
> dput(min(svd(A)$d))
2.52115166468044e-10
> dput(min(svd(A+0i)$d))
2.52115166468044e-10
>

So svd() doesn't care about the coercion to complex.  The 'A' matrix
isn't particularly badly conditioned:


> eigen(A,T)$vectors -> e
> crossprod(e)[1:4,1:4]

also:

> crossprod(A,solve(A))


[and the associated commands with A+0i in place of A], give errors of
order 1e-14 or less.


I think the eigenvectors are misbehaving too:

> eigen(A,T)$vectors -> ev1
> eigen(A+0i,T)$vectors -> ev2
> range(Re((A %*% ev1[,100])/ev1[,100]))
[1] 2.497662e-10 2.566555e-10   # min=max mathematically;
differences due to numerics
> range(Re((A %*% ev2[,100])/ev2[,100]))
[1] -19.407290   4.412938   # off the scale errors
[note the difference in sign]
>


FWIW, these problems do not appear to occur if symmetric=FALSE:

> min(Re(eigen(A+0i,F,T)$values))
[1] 2.521153e-10
> min(Re(eigen(A,F,T)$values))
[1] 2.521153e-10
>

and the eigenvectors appear to behave themselves too.


Also, can I raise a doco?  The documentation for eigen() is not
entirely transparent with regard to the 'symmetric' argument.  For
complex matrices, 'symmetric' should read 'Hermitian':


> B <- matrix(c(2,1i,-1i,2),2,2)   # 'B' is Hermitian
> eigen(B,F,T)$values
[1] 3+0i 1+0i
> eigen(B,T,T)$values# answers agree as expected if 'symmetric' means
'Hermitian'
[1] 3 1



> C <- matrix(c(2,1i,1i,2),2,2)# 'C' is symmetric
> eigen(C,F,T)$values
[1] 2-1i 2+1i
> eigen(C,T,T)$values  # answers disagree because 'C' is not Hermitian
[1] 3 1
>





-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] gsl package on mavericks

2014-10-01 Thread robin hankin
hello

I maintain the gsl R package, and many users have recently reported that
the package
does not install from source under macosx 10.9 ("mavericks").

Users typically install the gnu GSL library and are able to compile and run
a small "hello world" program which executes some of the Bessel functionality
of the library; but under mavericks the configure script (which uses
gsl-config as a probe) does not seem to detect which version of the
installed library is, giving a "Need GSL version >= 1.12" error.  The most
recent version of the gnu GSL library is 1.16.

The CRAN package check page shows that the gsl R package is clean under
every other architecture.

There is a thread on stackoverflow about this very issue:

http://stackoverflow.com/questions/24781125/installing-r-gsl-package-on-mac

where several people post either workarounds or suggestions.  However, it
is not clear whether there is some defect in the configure.ac script, or
the problem is due to mavericks, or it might even lie in newer versions of
the gnu GSL library.

The package gsl_1.9-10.tar.gz  installs correctly from source for me on my
system, macosx 10.9.4, R-3.1.1, GSL-1.16, so it is difficult for me to
investigate users' reports.


Can anyone advise?


-- 
Robin Hankin
Neutral theorist
hankin.ro...@gmail.com

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] seq_along and rep_along

2012-01-08 Thread robin hankin
hello folks

[snip]

> but it is frustrating when base
> functionality only works with vectors, not matrices, or arrays. It
> would be more compelling if (e.g.) t and rev also had dimension
> arguments.
>
> Hadley
>
> --

well put!  I would add, though, that t() generalizes to aperm(),
and the magic package contains  arev()  which is a generalization
of rev().
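
A two-line illustration (from memory, so treat it as a sketch):

a <- array(1:24, 2:4)
dim(aperm(a))        # 4 3 2: aperm() is the array analogue of t()
dim(magic::arev(a))  # 2 3 4: entries reversed along every dimension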

I'm always on the lookout for other array functionality of this type
that might sit well with magic.  Anyone?

best wishes

Robin




> Assistant Professor / Dobelman Family Junior Chair
> Department of Statistics / Rice University
> http://had.co.nz/
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] seq_along and rep_along

2012-01-08 Thread robin hankin
hello Hadley

thanks for this...


> There are the flip operators of matlab, and rotating matrices/array by
> multiples of 90 degrees.
>

arot() in the magic package does this (which is an operation
frequently encountered in magic hypercubes)


>> I'm always on the lookout for other array functionality of this type
>> that might sit well with magic.  Anyone?
>
> Have you considered pulling out the matric manipulation functions from
> magic?  I think they'd do well in their own package, and would be more
> findable.
>


That is a very good idea.  I have fought shy of this because the array
functionality of the magic package didn't seem to be enough to
justify a package of its own, but maybe that isn't true any more.
And I must say that the majority of user comments on the magic package
are in relation to functions such as arot() and adiag()  and apad()
and aplus() etc etc that are not specific to magic hypercubes.

Does the List have any comments?

rksh





> Hadley
>
> --
> Assistant Professor / Dobelman Family Junior Chair
> Department of Statistics / Rice University
> http://had.co.nz/



-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] columnames changes behaviour of formula

2012-05-24 Thread robin hankin
Hello. precompiled R-2.15.0, svn58871, macosx 10.7.4.


I have discovered that defining column names of a dataframe can alter the
behaviour of lm():


d <- c(4,7,6,4)
x <- data.frame(cbind(0:3,5:2))
coef(lm(d~ -1 + (.)^2,data=x))
   X1    X2 X1:X2
-1.77  0.83  1.25
R>
R>


OK, so far so good.  But change the column names of 'x' and the behaviour
changes:


colnames(x) <- c("d","nd")   # 'd' == 'death' and 'nd' == 'no death'
coef(lm(d~ -1 + (.)^2,data=x))
   nd
0.2962963



I am not sure if this is consistent with the special meaning of '.'
described under ?formula.

Is this the intended behaviour?


-- 
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] file.system() in packages

2013-01-21 Thread robin hankin
Hello.  R-devel, r61697.

I am having difficulty interpreting section 1.4 "Writing package
vignettes" of the R-exts manual.  Specifically, I want to use
system.file() in some of my packages to locate a bib file,
uncertainty.bib, which is part of the emulator package.  I only want
to maintain a single .bib file.

R-exts says: "All other files needed to re-make the vignette PDFs
(such as ... BiBTeX input files) must in the vignette source
directory".  So I've put my Rnw file and also the bib file in
emulator/vignettes/ directory.

And indeed, following R CMD build and then R CMD INSTALL on the
tarball, I can see the Rnw file:

> system.file("doc","emulex.Rnw",package="emulator")
[1] "/home/rksh/Rd/lib64/R/library/emulator/doc/emulex.Rnw"
>

so I know system.file() works as desired.  But I can't find
uncertainty.bib as R CMD INSTALL does not copy it to the system:

> system.file("doc","uncertainty.bib",package="emulator")
[1] ""
>

So I can't retrieve uncertainty.bib and this means that several other
packages can't bibtex correctly.  Can anyone advise?
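
(The workaround I am currently contemplating -- untested -- is to keep a copy
of the file somewhere under inst/, say inst/doc/uncertainty.bib: everything
under inst/ is copied across at install time, so

system.file("doc", "uncertainty.bib", package = "emulator")

ought then to find it.  But that means maintaining two copies of the bib file,
which is what I was trying to avoid.)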


--
Robin Hankin
Uncertainty Analyst
hankin.ro...@gmail.com

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] bundle deprecation

2009-06-12 Thread Robin Hankin

Hi

I read that bundles are to be deprecated in 2.10.

The BACCO bundle contains three packages
(emulator, calibrator, approximator) which I
am happy to unbundle.

But the 'BACCO' moniker has some considerable
cachet for me in terms of recognizability (eg
with grant-giving bodies), as it has become an umbrella
term for a whole bunch of related statistical
functionality of which the three packages are examples.

I make heavy use of the word "BACCO" in my publications.

If bundles were to be supported indefinitely, I
would add further packages to the BACCO
bundle from time to time and their relationship
with the other packages would be clear.

What is the best way to preserve the 'BACCO'
name in a non-bundled world?

Perhaps adding a 'FormerBundle' line in the DESCRIPTION file?


best wishes

Robin




--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 objects in the data directory

2009-11-12 Thread Robin Hankin

Hi

[R-2.10.0; suse linux]

I am having difficulty creating S4 objects in the data directory of a 
package. I want
to create a bunch of simple S4 objects for use in the examples section 
of the Rd files.


It says in R-exts that:

" R code should be “self-sufficient” and not make use of extra 
functionality provided by the package, so that the data file can also be 
used without having to load the package"


My current minimal self-contained example follows.

le112:~% cat ./anRpackage/R/f.R
setClass("foo", representation=representation(x="numeric"))

le112:~% cat ./anRpackage/data/toy_foo.R
"toy_foo" <- new("foo",x=rep(2,7))

fails R CMD check (transcript below)

It fails because the class isn't defined. I can
add 'require(anRpackage)' to the toy_foo.R
but I'm not sure if this is consistent with the above
advice in R-exts (and it
fails R CMD check anyway).

What is best practice for creating a package with
"toy" S4 objects for use in the examples section
of the Rd files?


cheers


Robin








le112:~% R CMD check ./anRpackage
* checking for working pdflatex ... OK

[snip]

* checking data for non-ASCII characters ... NOTE
Error: "foo" is not a defined class
Call sequence:
8: stop(gettextf("\"%s\" is not a defined class", Class), domain = NA)
7: getClass(Class, where = topenv(parent.frame()))
6: new("foo", x = rep(2, 7))
5: eval(expr, envir, enclos)
4: eval(i, envir)
3: sys.source(zfile, chdir = TRUE, envir = envir)
2: switch(ext, R = , r = {
library("utils")
sys.source(zfile, chdir = TRUE, envir = envir)
}, RData = , rdata = , rda = load(zfile, envir = envir), TXT = ,
txt = , tab = , tab.gz = , tab.bz2 = , tab.xz = , txt.gz = ,
txt.bz2 = , txt.xz = assign(name, read.table(zfile, header = TRUE,
as.is = FALSE), envir = envir), CSV = , csv = , csv.gz = ,
csv.bz2 = , csv.xz = assign(name, read.table(zfile, header = TRUE,
sep = ";", as.is = FALSE), envir = envir), found <- FALSE)
1: utils::data(list = f, package = character(0L), envir = dataEnv)
Execution halted
Portable packages use only ASCII characters in their datasets.
* checking examples ...no parsed files found
NONE

WARNING: There were 2 warnings, see
/home/rksh/anRpackage.Rcheck/00check.log
for details

le112:~%



--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 and head() problems

2009-12-03 Thread Robin Hankin

Hi

I am having difficulty defining an S4 method for head() and tail().

I can't quite provide minimal self-contained code
because the problem appears to require the whole corpus
of my package; and there also appears to be a difference
between sourcing the lines directly, and having them
installed in a package.

The lines in question (I think) are:

setClass("mdm",
representation = representation(
  xold  = "matrix",
  types = "factor"
  )
)

"mdm" <- function(xold, types){
 new("mdm", xold=xold, types=types)
}

setGeneric("head",function(x,...){standardGeneric("head")})
setMethod("head",signature="mdm",function(x,n=6,...){
 mdm(head(x@xold,n=n,...) , types=factor(head(x@types,n=n,...)))
} )


If the above lines are part of the package source, and I install the package
then sometimes  I get errors like

> head(toy_mm())
Error in function (classes, fdef, mtable)  :
 unable to find an inherited method for function "head", for signature 
"matrix"

>

and sometimes it works as desired.

Why should head() not be able to take the first few lines of a matrix?
It seems to be "forgetting" that head.matrix() exists.

Can anyone give me some pointers for debugging this problem?
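
(One thing on my list to try -- just a sketch, not yet tested -- is to let
setGeneric() adopt the existing function as the default method rather than
supplying a fresh definition:

setGeneric("head")    # implicit generic; utils::head() becomes the default
setMethod("head", signature="mdm", function(x, n=6, ...){
  mdm(head(x@xold, n=n, ...), types=factor(head(x@types, n=n, ...)))
})

the idea being that head() on a plain matrix then still reaches head.matrix().)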

rksh







--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RFC: lchoose() vs lfactorial() etc

2009-12-15 Thread Robin Hankin

Hi Martin

I think you're absolutely right about this;
One thing I need again and again is
a multinomial function, and usually define:

> lmultinomial
function (x)
{
   lfactorial(sum(x)) - sum(lfactorial(x))
}

> multinomial
function (x)
{
   exp(lmultinomial(x))
}


It would be nice to have this in base R.

Is this the place to discuss having complex
arguments for gamma()?


best wishes

rksh






Martin Maechler wrote:

lgamma(x) and lfactorial(x) are defined to return

 ln|Gamma(x)| {= log(abs(gamma(x)))} or  ln|Gamma(x+1)| respectively.

Unfortunately,  we haven't chosen the analogous definition for 
lchoose().


So, currently 


  > lchoose(1/2, 1:10)
   [1] -0.6931472 -2.0794415NaN -3.2425924NaN -3.8869494
   [7]NaN -4.3357508NaN -4.6805913
  Warning message:
  In lchoose(n, k) : NaNs produced
  > 


which (the NaN's) is not particularly useful.
(I have use case where I really have to workaround those NaNs.)

I herebey propose to *amend* the definition of lchoose() such
that it behaves analogously to  lgamma() and lfactorial(),
i.e., to return

   log(abs(choose(.,.))

Your comments are welcome.
Martin Maechler, ETH Zurich

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
  



--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RFC: lchoose() vs lfactorial() etc

2009-12-15 Thread Robin Hankin


Martin Becker wrote:

Robin Hankin wrote:

...
Is this the place to discuss having complex
arguments for gamma()?
...
If this discussion starts I would second the wish for the 
functionality of gsl's lngamma_complex in base R.





Do you mean gsl or GSL? ;-)

[the GNU scientific library is 'GSL'; the R package is 'gsl']

There are a few functions in gsl that don't take  complex  arguments,
such as airy_Ai() and airy_Bi(),  but for me the top priority would
be the incomplete gamma function.

best wishes

rksh





--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S3 best practice

2007-03-02 Thread Robin Hankin
Hello everyone

Suppose I have an S3 class "dog" and a function plot.dog() which
looks like this:

plot.dog <- function(x,show.uncertainty, ...){
 
   if (show.uncertainty){
   
   }
}


I think that it would be better to somehow precalculate the
uncertainty stuff and plot it separately.

How best to do this
in the context of an S3 method for plot()?

What is Best Practice here?
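
The sort of split I have in mind is something like this (a sketch only; all
the names are made up):

uncertainty <- function(x, ...){ UseMethod("uncertainty") }

uncertainty.dog <- function(x, ...){
  ## the expensive pre-calculation, done once, returned as its own object
  structure(list(dog = x), class = "dog_uncertainty")
}

plot.dog <- function(x, ...){
  ## plot the dog itself; no uncertainty calculation in here
}

plot.dog_uncertainty <- function(x, ...){
  ## plot (or add to an existing plot) the pre-calculated uncertainty
}

But I am not sure whether a second plot() method, or a points()/lines()-style
adder function, is the more idiomatic arrangement.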



--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R CMD Rd2dvi

2007-03-19 Thread Robin Hankin
Hello

perhaps this is a bit late for R-2.5.0,
but does anyone find R CMD Rd2dvi refusing
to overwrite an existing dvi file useful?

I am forever leaving the .dvi file in
my scratch directory and have to
remove it manually.

I know that one can specify --output=f.dvi,
but this just defers the problem
because f.dvi won't be overwritten either.

Does anyone value this behaviour?

If not, could we change it?


For me, the typical session goes:

octopus:~/scratch% R CMD Rd2dvi ./BACCO
Hmm ... looks like a package bundle
file 'BACCO.dvi' exists; please remove first

octopus:~/scratch% rm BACCO.dvi
octopus:~/scratch% R CMD Rd2dvi ./BACCO
Hmm ... looks like a package bundle
[snip].






--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] no visible binding for global variable

2007-04-17 Thread Robin Hankin
Hello everyone

I am trying to get one of my packages through R's QC.

The package is clean for me under  R-2.4.1, R-2.5.0, and
R-devel,  but Kurt gets


>
> * checking R code for possible problems ... WARNING
>  hypercube: no visible binding for global variable ‘f’



Function hypercube() [cut-&-pasted below] is intended to
return an adjacency matrix for an n-dimensional
hypercube with 2^n nodes.  hypercube(n) returns a
2^n -by- 2^n matrix, and works as intended for me.

Can someone explain what the error message means?






"hypercube" <- function(n){

   jj <- as.matrix(expand.grid(rep(list(0:1),n)))

   wrapper <- function(x, y, my.fun) {
     f <- function(x,y,tol=1e-4){
       abs(sum(abs(jj[x,]-jj[y,]))-1) < tol
     }
[snip]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] system() in packages

2007-04-27 Thread Robin Hankin
Hello

Quite often I need higher precision, or larger numbers,
  than IEEE double precision allows.  One strategy I sometimes
use is to use system() to call pari/gp, which is not constrained by  
IEEE.

[pari/gp is a GPL high-level mathematical programming language
geared towards pure mathematics]

Two of my packages contain lines like the following:

 > system(" echo '1.12^66' | gp -q f",intern=TRUE)
[1]  
"1771.697189476241729649767636564084681203806302318041262248838950177194 
116346432205160921568393661760"

Note the high precision of the answer.

My question is, how to deal with the possibility that pari/gp is not
installed?

If the system cannot find gp for some reason, I get:

 > system(" echo '1.12^66' | gp -q f",intern=TRUE)
sh: line 1: gp: command not found
character(0)
 >

What's the recommended way to handle this eventuality gracefully?  The
functions that do use pari/gp have "pure" R equivalents (but much
slower and less accurate)  so I want users to be able to install the
package without pari/gp.
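
My current thought (a sketch only; function names invented) is to probe for
the binary up front and fall back to the pure-R code:

gp_available <- function(){ nzchar(Sys.which("gp")) }

f <- function(x, n){
  if(gp_available()){
    system(paste(" echo '", x, "^", n, "' | gp -q f", sep=""), intern=TRUE)
  } else {
    as.character(x^n)    # pure-R fallback: IEEE double precision only
  }
}

but perhaps there is a more idiomatic test than Sys.which().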


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] two bessel function bugs for nu<0

2007-06-19 Thread Robin Hankin
I can reproduce both these bugs and confirm that the suggested fix
agrees with Mathematica and Maple for a few trial values.

I can confirm that Hiroyuki's algebra is indeed
consistent with AMS-55 equation 9.1.2
and the old source isn't.   I'd need more
time to look at equation 9.6.2.

I'm not sure why, in bessel_i.c, we are using a float ("expo")
and a long ("ize") as a Boolean [flag to indicate whether or not
to return scaled function values].

PS1:
My first thought was to check against the GSL library
but  this doesn't allow non-integer orders for besselI()

PS2: The source code apologizes for the method used,
suggesting that it may be numerically and computationally
"sub-optimal".

Best wishes

rksh







On 18 Jun 2007, at 23:33, Hiroyuki Kawakatsu wrote:

> #bug 1: besselI() for nu<0 and expon.scaled=TRUE
> #tested with R-devel (2007-06-17 r41981)
> x <- 2.3
> nu <- -0.4
> print(paste(besselI(x, nu, TRUE), "=", exp(-x)*besselI(x, nu, FALSE)))
> #fix:
> #$ diff bessel_i_old.c bessel_i_new.c
> #57c57
> #<   bessel_k(x, -alpha, expo) * ((ize == 1)? 2. : 2.*exp(-x))/M_PI
> #---
> #>   bessel_k(x, -alpha, expo) * ((ize == 1)? 2. : 2.*exp(-2.0*x))/ 
> M_PI
>
> #bug 2: besselY() for nu<0
> #don't know how to check in R; a few random checks against  
> mathematica 5.2
> #fix:
> #$ diff bessel_y_old.c bessel_y_new.c
> #55c55
> #<  return(bessel_y(x, -alpha) + bessel_j(x, -alpha) * sin(-M_PI *  
> alpha));
> #---
> #>  return(bessel_y(x, -alpha) * cos(M_PI * alpha) - bessel_j(x,
> -alpha) * sin(M_PI * alpha));
>
> h.
> -- 
> --
> Hiroyuki Kawakatsu
> Business School
> Dublin City University
> Dublin 9, Ireland
> Tel +353 (0)1 700 7496
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] inherits() and virtual classes

2007-06-27 Thread Robin Hankin
Hi

How do I test for an object having a particular virtual class?

In the following, "onion" is a virtual class, and "octonion" is
a non-virtual class contained in onion.  The last call
to inherits() gives FALSE [R-2.5.0], when inherits.Rd led
me to expect TRUE.




setClass("onion",
  representation = "VIRTUAL"
  )

setClass("octonion",
  representation = representation(x="matrix"),
  prototype  = list(x=matrix(numeric(),0,8)),
  contains   = "onion"
  )


jj <- new("octonion",x=as.matrix(1:8))

inherits(jj,"onion")


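For the archive: with S4 classes the test that consults the class
hierarchy, including virtual classes, is is() from the methods package.
A minimal sketch, reusing the definitions above:

is(jj, "onion")       # TRUE
is(jj, "octonion")    # TRUE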


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] signature() and setMethod() problems

2007-06-28 Thread Robin Hankin
I am having difficulty using signature().

I have one virtual class  (onion) and
two nonvirtual classes (quaternion and octonion).
containing onion.

I want to define three distinct sets of
arithmetic operations: one for onion-onion,
one for onion-real, and one for real-onion [this
is more computationally efficient than
coercing reals to onions and then using onion-onion
operations].

Executing the following code gives an error [R-2.5.0]
at the first call to setMethod():

Error in match.call(fun, fcall) : unused argument(s) (o1 = "onion",  
o2 = "onion")


Why is this, and what would the List suggest
is Best Practice here?





setClass("onion",
  representation = "VIRTUAL"
  )

setClass("quaternion",
  representation = representation(x="matrix"),
  prototype  = list(x=matrix(numeric(),0,4)),
  contains   = "onion"
  )

setClass("octonion",
  representation = representation(x="matrix"),
  prototype  = list(x=matrix(numeric(),0,8)),
  contains   = "onion"
  )

".onion.onion.arith" <- function(o1,o2){stop("OO not implemented")}
".onion.real.arith" <- function(o,r){stop("OR not implemented")}
".real.onion.arith" <- function(r,o){stop("RO not implemented")}

setMethod("Arith", signature 
(o1="onion",o2="onion" ), .onion.onion.arith)
setMethod("Arith", signature(o="onion",r="ANY" ), .onion.real.arith)
setMethod("Arith", signature(r="ANY",o="onion" ), .real.onion.arith)


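The Arith group generic dispatches on arguments named e1 and e2, so the
names given to signature() (and the formals of the methods themselves)
must be e1 and e2, or be left unnamed.  A sketch, assuming the setClass()
calls above have been evaluated:

setMethod("Arith", signature(e1 = "onion", e2 = "onion"),
          function(e1, e2) stop("OO not implemented"))
setMethod("Arith", signature(e1 = "onion", e2 = "ANY"),
          function(e1, e2) stop("OR not implemented"))
setMethod("Arith", signature(e1 = "ANY", e2 = "onion"),
          function(e1, e2) stop("RO not implemented"))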

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] sweep sanity checking?

2007-07-12 Thread Robin Hankin
Hi

Brian Ripley, Heather Turner, and myself discussed this
issue at some length in a thread starting 20 June 2005 (how
do folk give a URL that points to a thread?).

The consensus was that adding a warning level option to sweep
was a good idea; Heather posted a version of sweep that implemented
this, which she posted on 21 June 2005.


rksh




On 12 Jul 2007, at 09:16, Petr Savicky wrote:

> The suggestion sounds reasonable to me. Let me add that sweep is  
> written
> to work even if MARGIN includes more than one dimension. To handle  
> these
> cases correctly, the test may be replaced e.g. by
>  if (check.margin && prod(dims[MARGIN])!=length(STATS)) {
>warning("length(STATS) != prod(dim(x)[MARGIN])")
>  } else if (prod(dims[MARGIN]) %% length(STATS)!=0)
>warning("prod(dim(x)[MARGIN]) is not a multiple of length 
> (STATS)")
> or even by
>  dimstat <- if (is.null(dim(STATS))) length(STATS) else dim(STATS)
>  if (check.margin && any(dims[MARGIN]!=dimstat)) {
>warning("length(STATS) or dim(STAT) do not match dim(x) 
> [MARGIN]")
>  } else if (prod(dims[MARGIN]) %% length(STATS)!=0)
>warning("prod(dim(x)[MARGIN]) is not a multiple of length 
> (STATS)")
>
> Petr.
>
>> Just an opinion from an R user: I think it's a sound idea.  I use  
>> my own
>> version of sweep with a stricter check: it stops if the vector is not
>> exactly the right length.
>>
>> -- Tony Plate
>>
>> Ben Bolker wrote:
>>> Ben Bolker  zoo.ufl.edu> writes:
>>>
>>>
>>>>   What would R-core think of the following 'enhanced'
>>>> sweep?
>>>>
>>>
>>>  (now posted at
>>> http://wiki.r-project.org/rwiki/doku.php?id=rdoc:base:sweep
>>> )
>>>
>>> It always warns if dim(x)[MARGIN] is
>>>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] (PR#9811) sequence(c(2, 0, 3)) produces surprising results,

2007-07-27 Thread Robin Hankin

On 27 Jul 2007, at 08:07, [EMAIL PROTECTED] wrote:

> This is as documented, and I think you could say the same thing of
> seq().
> BTW, sequence() allows negative inputs, and I don't think you want
> sum(input) in that case.
>
> I've never seen the point of sequence(), but it has been around in  
> R for a
> long time.  It is used in packages eRm, extRemes, hydrosanity,  
> klaR, seas.
> Who knows what people have in private code, so I don't see any  
> compelling
> case to change it.  If people want a different version, it would  
> only take
> a minute to write (see below).
>
> We could make seq_len take a vector argument, but as you point out  
> in a
> followup that makes it slower in the common case.  It also changes its
> meaning if a length > 1 vector is supplied, and would speed matter  
> in the
> long-vector case?  What does
>
> sequence0 <- function (nvec)
> {
>  s <- integer(0)
>  for (i in nvec) s <- c(s, seq_len(i))
>  s
> }
>
> not do that is more than a very rare need?
>


My 2 cents:

  Defining

    mySequence <- function(x){unlist(sapply(x,function(i){seq_len(i)}))}

is much faster.

Neither sequence0()  nor  mySequence() accepts vectors with any  
element <0
although as Brian Ripley points out, sequence() itself does (which I  
think is
undesirable).






Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R-2.6.0 package check problems

2007-10-05 Thread Robin Hankin
Hello


One of my packages, untb_1.3-2, passes R CMD check under
MacOSX (and apparently the systems used in the package check
summary page on CRAN) but fails with the following message on
R-2.6.0.tgz compiled last night on my (home) linux box.  I hasten
to add that I have never seen this error before on home-compiled
pre-releases of R-2.6.0.

Can anyone help me understand what is going on?


localhost:~/scratch%R CMD check untb_1.3-2.tgz

[snip]

creating untb-Ex.R ... OK
* checking examples ... ERROR
Running examples in 'untb-Ex.R' failed.
The error most likely occurred in:

 > ### * butterflies
 >
 > flush(stderr()); flush(stdout())
 >
 > ### Name: butterflies
 > ### Title: abundance data for butterflies
 > ### Aliases: butterflies butterfly
 > ### Keywords: datasets
 >
 > ### ** Examples
 >
 > data(butterflies)
 > plot(butterflies, uncertainty=TRUE)
Error in log(theta) :
   could not find symbol "base" in environment of the generic function
Calls: plot ... optimal.theta -> optimize ->  -> f -> log
Execution halted







localhost:~/scratch%R
 > sessionInfo()
R version 2.6.0 (2007-10-03)
i686-pc-linux-gnu

locale:
LC_CTYPE=en_US;LC_NUMERIC=C;LC_TIME=en_US;LC_COLLATE=en_US;LC_MONETARY=en_US;LC_MESSAGES=en_US;LC_PAPER=en_US;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] rcompgen_0.1-15
 > R.version
_
platform   i686-pc-linux-gnu
arch   i686
os linux-gnu
system i686, linux-gnu
status
major  2
minor  6.0
year   2007
month  10
day03
svn rev43063
language   R
version.string R version 2.6.0 (2007-10-03)
 >
 >







--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R-2.6.0 package check problems

2007-10-09 Thread Robin Hankin

On 5 Oct 2007, at 15:47, Robin Hankin wrote:

> Hello
>
>
> One of my packages, untb_1.3-2, passes R CMD check under
> MacOSX (and apparently the systems used in the package check
> summary page on CRAN) but fails with the following message on
> R-2.6.0.tgz compiled last night on my (home) linux box.  I hasten
> to add that I have never seen this error before on home-compiled
> pre-releases of R-2.6.0.
>
> Can anyone help me understand what is going on?
>
>


thanks everyone.   My problems were solved by following Peter D's
(offline) suggestion to update all the dependencies: he noted that
log() became generic in R-2.6.0; untb depends on Brobdingnag,
the newest version of which tests for log() being generic [using
isGeneric("log")] and executes different code
depending on the answer.

crisis over!







--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] vignettes and papers

2007-11-02 Thread Robin Hankin
Hello everyone

Lots of my packages have been the subject of
journal articles either in JSS or Rnews or (in one
case) elsewhere.

I would like to add these articles
to my packages as vignettes.

Reproducing the papers exactly requires a number
of files [such as style files or PDFs] to be included in
  the inst/doc directory to pass R CMD check.

A vanilla .Rnw file seems to be a good idea,
but loses some of the nice JSS typesetting.

What is Best Practice here?

And are there ethical or other issues that I should
be aware of  before including copies of Rnews
or JSS papers verbatim in an R package?



--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Friday question: negative zero

2007-12-07 Thread Robin Hankin
Hello everyone



On 1 Sep 2007, at 01:39, Duncan Murdoch wrote:

> The IEEE floating point standard allows for negative zero, but it's  
> hard
> to know that you have one in R.  One reliable test is to take the
> reciprocal.  For example,
>
>> y <- 0
>> 1/y
> [1] Inf
>> y <- -y
>> 1/y
> [1] -Inf
>
> The other day I came across one in complex numbers, and it took me a
> while to figure out that negative zero was what was happening:
>
>> x <- complex(real = -1)
>> x
> [1] -1+0i
>> 1/x
> [1] -1+0i
>> x^(1/3)
> [1] 0.5+0.8660254i
>> (1/x)^(1/3)
> [1] 0.5-0.8660254i
>
> (The imaginary part of 1/x is negative zero.)
>
> As a Friday question:  are there other ways to create and detect
> negative zero in R?
>
> And another somewhat more serious question:  is the behaviour of
> negative zero consistent across platforms?  (The calculations above  
> were
> done in Windows in R-devel.)
>



I have been pondering branch cuts and branch points
for some functions which I am implementing.

In this area, it is very important to know whether one has
+0 or -0.

Take the log() function, where it is sometimes
very important to know whether one is just above the
imaginary axis or just below it:



(i).  Small y

 > y <- 1e-100
 > log(-1 +  1i*y)
[1] 0+3.141593i
 > y <- -y
 > log(-1 +  1i*y)
[1] 0-3.141593i


(ii)  Zero y.


 > y <- 0
 > log(-1 +  1i*y)
[1] 0+3.141593i
 > y <- -y
 > log(-1 +  1i*y)
[1] 0+3.141593i
 >


[ie small imaginary jumps have  a discontinuity, infinitesimal jumps  
don't].

This behaviour is undesirable (IMO): one would like log (-1+0i) to be  
different from log(-1-0i).

Tony Plate's example shows that
even though  y <- 0 ;  identical(y, -y) is TRUE, one has
identical(1/y, 1/(-y)) is FALSE, so the sign is not discarded.
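
A small helper built on the reciprocal test quoted above (a sketch for
real doubles only; it is not base-R functionality):

is.negative.zero <- function(x) (x == 0) & (1/x == -Inf)

y <- 0
is.negative.zero(y)     # FALSE
is.negative.zero(-y)    # TRUE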

My complex function does have a branch cut that follows a portion of  
the negative real axis
but the other cuts follow absurdly complicated implicit equations.

At this point
one needs the IEEE requirement that x-x == +0  [ie not -0] for any
real x; one then finds that
(s-t) and -(t-s) are numerically equal but not necessarily
indistinguishable.

One of my earlier questions involved branch cuts for the inverse trig  
functions but
(IIRC) the patch I supplied only tested for the imaginary part being  
 >0; would it be
possible to include information about signed zero in these or other  
functions?






> Duncan Murdoch
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] branch cuts of log() and sqrt()

2007-12-18 Thread Robin Hankin
Dear developers

Neither Math.Rd nor Log.Rd mention the branch cuts
that appear for complex arguments.  I think it's important
to include such information.

Please find following two context diffs for Log.Rd and Math.Rd.

[The pedants amongst us will observe that
both sqrt() and log() have a branch point at complex
infinity, which is not mentioned in the patch.  Comments
anyone?]



rksh




245-10:~/scratch/R-devel/src/library/base/man% diff -c  Log.Rd  
new_Log.Rd
*** Log.Rd  Fri Jul 27 16:51:42 2007
--- new_Log.Rd  Tue Dec 18 08:57:03 2007
***
*** 66,71 
--- 66,75 
 \code{logb} is a wrapper for \code{log} for compatibility with  
S.  If
 (S3 or S4) methods are set for \code{log} they will be dispatched.
 Do not set S4 methods on \code{logb} itself.
+
+   For complex arguments, the branch cut is standard: there is a branch
+   point at zero and a cut along the negative real axis; continuity
+   is from above.
   }
   \section{S4 methods}{
 \code{exp}, \code{expm1}, \code{log}, \code{log10}, \code{log2} and
245-10:~/scratch/R-devel/src/library/base/man%




245-10:~/scratch/R-devel/src/library/base/man% diff -c  Math.Rd  
new_Math.Rd
*** Math.Rd Fri Jul 27 16:51:44 2007
--- new_Math.Rd Tue Dec 18 09:01:35 2007
***
*** 22,32 
   \details{
 These are generic functions: methods can be defined for them
 individually or via the \code{\link[base:groupGeneric]{Math}}
!   group generic.  For complex arguments (and the default method),  
\code{z},
!   \code{abs(z) == \link{Mod}(z)} and \code{sqrt(z) == z^0.5}.

 \code{abs(x)} returns an \code{\link{integer}} vector when \code 
{x} is
 \code{integer} or \code{\link{logical}}.
   }
   \section{S4 methods}{
 Both are S4 generic and members of the
--- 22,39 
   \details{
 These are generic functions: methods can be defined for them
 individually or via the \code{\link[base:groupGeneric]{Math}}
!   group generic.

 \code{abs(x)} returns an \code{\link{integer}} vector when \code 
{x} is
 \code{integer} or \code{\link{logical}}.
+
+   For complex arguments (and the default method), \code{z},
+   \code{abs(z) == \link{Mod}(z)} and \code{sqrt(z) == z^0.5}.
+
+   The branch cut of \code{sqrt()} is standard: there is a branch point
+   at zero and a cut along the negative real axis; continuity is from
+   above.
+
   }
   \section{S4 methods}{
 Both are S4 generic and members of the
245-10:~/scratch/R-devel/src/library/base/man%



--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S3 vs S4 for a simple package

2008-01-07 Thread Robin Hankin
I am writing a package and need to decide whether to use S3 or S4.

I have a single class, "multipol"; this needs methods for "[" and "[<-"
and I also need a print (or show) method and methods for arithmetic +- 
*/^.

In S4, an object of class "multipol" has one slot that holds an array.

Objects of class "multipol" require specific arithmetic operations;  
a,b being
multipols means that a+b and a*b are defined in peculiar ways
that make sense in the context of the package. I can also add and  
multiply
by scalars (vectors of length one).

My impression is that S3 is perfectly adequate for this task, although
I've not yet finalized the coding.

S4 seems to be "overkill" for such a simple system.

Can anyone give me some motivation for persisting with S4?

Or indeed reassure me that S3 is a good design decision?



--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] as.function()

2008-01-14 Thread Robin Hankin
Hi

[this after some considerable thought as to R-help vs R-devel]



I want to write a (S3) method for as.function();
toy example follows.

Given a matrix "a", I need to evaluate trace(ax) as a function of
(matrix) "x".

Here's a trace function:

tr <-  function (a)  {
 i <- seq_len(nrow(a))
 return(sum(a[cbind(i, i)]))
}


How do I accomplish the following:


a <- crossprod(matrix(rnorm(12),ncol=3))
class(a) <- "foo"

f <- as.function(a)   # need help to write as.function.foo()
x <- diag(3)

f(x) #should give tr(ax)

a <- 4
f(x)   # should still give tr(ax) even though "a" has been  
reassigned.





[my real example is very much more complicated than this but
I need this toy one too and I can't see how to modify  
as.function.polynomial()
to do what I want]




--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] as.function()

2008-01-14 Thread Robin Hankin
Antonio


thanks for your help here, but it doesn't answer my question.

Perhaps if I outline my motivation it would help.


I want to recreate the ability of
the "polynom" package to do the following:


 > library(polynom)
 > p <- polynomial(1:4)
 > p
1 + 2*x + 3*x^2 + 4*x^3
 > MySpecialFunction <- as.function(p)
 > MySpecialFunction(1:10)
  [1]   10   49  142  313  586  985 1534 2257 3178 4321
 > p <- 4
 > MySpecialFunction(1:10)
  [1]   10   49  142  313  586  985 1534 2257 3178 4321
 >


See how the user can define object "MySpecialFunction",
  which outlives short-lived polynomial "p".

Unfortunately, I don't see a way to modify as.function.polynomial()
to do what I want.


best wishes


rksh









On 14 Jan 2008, at 08:45, Antonio, Fabio Di Narzo wrote:

> 2008/1/14, Robin Hankin <[EMAIL PROTECTED]>:
>> Hi
>>
>> [this after some considerable thought as to R-help vs R-devel]
>>
>>
>>
>> I want to write a (S3) method for as.function();
>> toy example follows.
>>
>> Given a matrix "a", I need to evaluate trace(ax) as a function of
>> (matrix) "x".
>>
>> Here's a trace function:
>>
>> tr <-  function (a)  {
>> i <- seq_len(nrow(a))
>> return(sum(a[cbind(i, i)]))
>> }
>>
>>
>> How do I accomplish the following:
>>
>>
>> a <- crossprod(matrix(rnorm(12),ncol=3))
>> class(a) <- "foo"
>>
>> f <- as.function(a)   # need help to write as.function.foo()
>> x <- diag(3)
>>
>> f(x) #should give tr(ax)
>
> What about the following?
>
> as.function.foo <- function(a, ...)
>  function(x)
>sum(diag(a*x))
>
> However, I don't see the need for an S3 method. Why don't simply use  
> (?):
> mulTraceFun <- function(a)
>  function(x)
>   sum(diag(a*x))
>
> So you also have a more meaningful name than an anonymous  
> 'as.function'.
>
> HTH,
> Antonio.
>
>>
>> a <- 4
>> f(x)   # should still give tr(ax) even though "a" has been
>> reassigned.
>
> This would'nt work with my proposal, because of lexical scoping.
>
>>
>>
>>
>>
>>
>> [my real example is very much more complicated than this but
>> I need this toy one too and I can't see how to modify
>> as.function.polynomial()
>> to do what I want]
>>
>>
>>
>>
>> --
>> Robin Hankin
>> Uncertainty Analyst and Neutral Theorist,
>> National Oceanography Centre, Southampton
>> European Way, Southampton SO14 3ZH, UK
>>  tel  023-8059-7743
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>
> -- 
> Antonio, Fabio Di Narzo
> Ph.D. student at
> Department of Statistical Sciences
> University of Bologna, Italy

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] as.function()

2008-01-14 Thread Robin Hankin

On 14 Jan 2008, at 10:57, Prof Brian Ripley wrote:

> On Mon, 14 Jan 2008, Henrique Dallazuanna wrote:
>
>> Try this:
>>
>> as.function.foo <- function(obj, ...)
>> {
>> newobj <- function(x, ...){}
>> body(newobj) <- obj
>> return(newobj)
>> }
>>
>> x <- expression(2*x + 3*x^2)
>>
>> foo <- as.function.foo(x)
>> foo(2)
>
> Well, that copies what as.function.polynomial did but that was  
> written for S3 well before R was started.  Here you can use  
> environments:
>
> as.function.foo <- function(obj, ...) function(x, ...) eval(obj)
>


Yes, "did" is the operative word here.  The new
as.function.polynomial() is considerably slicker
and more general.

But both old and new versions 'unpick' the polynomial "x" into its  
elements
and create a function, line by line, that depends on the elements of  
"x".

The new version uses:

as.function.polynomial <- function (x, ...)
{

<< clever and efficient creation of list "ex"  as a function of vector  
"x" snipped>>

 f <- function(x) NULL
 body(f) <- ex
 f
}


The old version uses:


as.function.polynomial <- function (x, ...)
{

<< clever and efficient creation of character string "jj"  as a  
function of vector "x" snipped>>

f <- function(x) NULL
 body(f) <- parse(text = jj )[[1]]
f
}



If f <- as.function.foo(x),  somehow the "f" object has to include  
within itself
the entirety of "x".   In my case, "x" is [of course] an
arbitrary-dimensional array of possibly complex elements.

So I can't use Bill/Kurt's method (at least not easily)  because my
object is considerably more complicated than a vector.
And I don't have an example  that works on a complicated object
to copy.


>
>>
>>
>> Hope this help
>>
>> On 14/01/2008, Robin Hankin <[EMAIL PROTECTED]> wrote:
>>> Antonio
>>>
>>>
>>> thanks for your help here, but it doesn't answer my question.
>>>
>>> Perhaps if I outline my motivation it would help.
>>>
>>>
>>> I want to recreate the ability of
>>> the "polynom" package to do the following:
>>>
>>>
>>> > library(polynom)
>>> > p <- polynomial(1:4)
>>> > p
>>> 1 + 2*x + 3*x^2 + 4*x^3
>>> > MySpecialFunction <- as.function(p)
>>> > MySpecialFunction(1:10)
>>>  [1]   10   49  142  313  586  985 1534 2257 3178 4321
>>> > p <- 4
>>> > MySpecialFunction(1:10)
>>>  [1]   10   49  142  313  586  985 1534 2257 3178 4321
>>> >
>>>
>>>
>>> See how the user can define object "MySpecialFunction",
>>>  which outlives short-lived polynomial "p".
>>>
>>> Unfortunately, I don't see a way to modify as.function.polynomial()
>>> to do what I want.
>>>
>>>
>>> best wishes
>>>
>>>
>>> rksh
>>>

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] as.function()

2008-01-14 Thread Robin Hankin

On 14 Jan 2008, at 11:50, Duncan Murdoch wrote:

> Robin Hankin wrote:
>> Hi
>>

[snip]

>> a <- crossprod(matrix(rnorm(12),ncol=3))
>> class(a) <- "foo"
>>
>> f <- as.function(a)   # need help to write as.function.foo()
>> x <- diag(3)
>>
>> f(x) #should give tr(ax)
>>
>> a <- 4
>> f(x)   # should still give tr(ax) even though "a" has been   
>> reassigned.
>>
>>
> Brian's answer was what you want.  A less general version is this:
>
> > as.function.foo <- function(x, ...) {
> +function(b) tr(x %*% b)
> + }
>


Wow.  Got it!  Looks like I'll have to read the R Language Definition  
again.

Thanks everyone.




> (I switched the names of the args, because the first arg to  
> as.function.foo should match the name of the first arg to  
> as.function).
>
> I was a little surprised that this worked even if a was changed  
> without ever evaluating f, because I thought lazy evaluation would  
> mess up that case.  But of course the value of x is forced when R  
> evaluates it to find out the class for dispatch to the method.
>
> Duncan Murdoch

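Putting the pieces of the thread together, the complete toy looks
something like this (a sketch; tr() is as defined in the first message):

tr <- function(a) {
  i <- seq_len(nrow(a))
  sum(a[cbind(i, i)])
}

as.function.foo <- function(x, ...) function(b) tr(x %*% b)

a <- crossprod(matrix(rnorm(12), ncol = 3))
class(a) <- "foo"
f <- as.function(a)

f(diag(3))    # tr(a %*% diag(3)), i.e. tr(a)
a <- 4
f(diag(3))    # unchanged: 'x' was forced at dispatch, so f keeps the old matrix
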
--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] a != a*1 != a+0 != +a

2008-02-04 Thread Robin Hankin

Hi

I am writing a package for multivariate polynomials ('multipols')
using S3 methods.

The package includes a Ops.multipol()  function for the
arithmetic methods;  I would like
to define some sort of user-specified Boolean option which, if
set,  would force results to be simplified as they are produced.

Call this option "trim".  Trimming a multipol results in
a smaller array that is more manageable.

Mostly one wants to trim, sometimes not.


Would options() be a good way to manage this?

One issue is the behaviour of unary operators "+" and "-".

If trim is TRUE, then  "a"   is one thing,  but "+a"  returns
"trim(a)", which might be different.

Also "1*a" would be different from "a" and "a+0"



Does the List consider this to be Good Practice?

Has anyone got comments?


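For concreteness, one way of wiring such an option up (a sketch only:
the option name is invented, and "trimming" here just means dropping
trailing zeros from a numeric vector):

trim <- function(a) {                  # toy stand-in for the real trim()
  keep <- max(c(0L, which(a != 0)))
  a[seq_len(keep)]
}

maybe.trim <- function(a) {
  if (isTRUE(getOption("multipol.trim", FALSE))) trim(a) else a
}

a <- c(1, 2, 0, 0)
maybe.trim(a)                  # default: left alone
options(multipol.trim = TRUE)
maybe.trim(a)                  # now trimmed to c(1, 2)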

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Re Bessel functions of complex argument

2008-03-10 Thread Robin Hankin

On 10 Mar 2008, at 11:03, Prof Brian Ripley wrote:

> On Mon, 10 Mar 2008, Martin Maechler wrote:
>
>> {Diverted from an R-help thread}
>>
>>>>>>> "Robin" == Robin Hankin <[EMAIL PROTECTED]>
>>>>>>>on Mon, 10 Mar 2008 08:49:06 + writes:
>>
>>   Robin> Hello Baptiste Bessel functions with complex
>>   Robin> arguments are not supported in R.
>>
>> Hi Robin,
>> have you looked at *how* pari/gp does this?
>> If they have a C routine, it might be worth considering to add
>> ``the'' complex Bessel functionality to R.
>
> To an R package, surely?  They hardly seem important enough for base  
> R.
>
> There are C implementations in several places, e.g. ACM algorithm  
> 644 (also netlib.org/amos).
>


Thanks for this.  I'll have a look at netlib.  Perhaps a wrapper for  
netlib,
  along the lines of  the gsl package, would be possible.  I also need
the gamma function with complex  arguments and I suspect
that this is included.

PARI/GP  is a large multi-developer effort, and provides an
environment not unlike R but geared towards pure mathematics rather
than statistics.  It includes arbitrary-precision arithmetic and many
pure maths constructs such as number theory and p-adic metrics.

(I must confess to being not terribly familiar with the
Bessel suite; I only  really know about PARI/GP's  elliptic  
functions).  There
is no easy way to access the  code  for any specific functionality
in isolation, AFAICS.

One of my "longer-term" plans is to write a package that includes
a wrapper for GP (PARI is the front end, a command line wrapper) but
this is not entirely straightforward. Implementing a useful wrapper  
proved
to be rather tricky.  I expected that  the methodology would
approximate the gsl package but this was not the case:
many of the features of PARI/GP are alien to the precepts and
tenets of R.   My attempts stalled a couple of years ago after
I finished the elliptic package which included a fit-for-purpose, but
very very basic, wrapper for two or three functions.

Notwithstanding this, I think that an R wrapper for PARI/GP would
be very desirable.   Comments anyone?





--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 slot with NA default

2008-03-26 Thread Robin Hankin
Hi

How do I specify an S4 class with a slot that is potentially numeric,  
but NA
by default?  I   want the slot to be NA until I calculate its value
  (an expensive operation, not needed for all applications).   When  
its value is
known, I  will create a new object with the correct value inserted in  
the slot.

I want "NA" to signify "not known".

My attempt fails because NA is not numeric:

 > setClass("foo",representation=representation(x="numeric"),prototype=list(x=NA))
Error in makePrototypeFromClassDef(properties, ClassDef, immediate, where) :
   in making the prototype for class "foo" elements of the prototype failed to match the corresponding slot class: x (class “numeric” )
 >

(the real application has other slots too).   I can
use "NaN", which is numeric:

 > setClass("foo",representation=representation(x="numeric"),prototype=list(x=NaN))
[1] "foo"
 >

But this is not the correct sense: to me "NaN" means "not a number"  
and I want
the sense to be "not available".



Any advice?

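For the archive: NA_real_ is a numeric NA, so it satisfies the prototype
check while still carrying the "not available" sense -- a quick sketch:

setClass("foo",
         representation = representation(x = "numeric"),
         prototype      = list(x = NA_real_))

new("foo")             # slot x is NA
is.na(new("foo")@x)    # TRUE
is.nan(new("foo")@x)   # FALSE: "not available", not "not a number"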




--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] tests Rin and Rout

2008-03-31 Thread Robin Hankin

On 31 Mar 2008, at 09:16, Martin Maechler wrote:

>>>>>> "CG" == Christophe Genolini <[EMAIL PROTECTED]>
>>>>>>on Mon, 31 Mar 2008 00:31:55 +0200 writes:
>
>>>
>>> Generally I find it's good to look at examples that work.
>>> For examples of packages using tests, look at source
>>> packages on CRAN.  Run the tests on them (using R CMD
>>> check), and see what gets produced.
>>>
>CG> Do you have the name of a package that use it ? I try
>CG> the 10 first package, and 10 other at random, but none
>CG> of them use tests...
>
> hmm, I see 219 out 1378 CRAN packages having a 'tests'
> subdirectory, so it seems you have been a bit unlucky. ;-)



How unlucky exactly?


 > fisher.test(matrix(c(0,20,219,1159),2,2))

Fisher's Exact Test for Count Data

data:  matrix(c(0, 20, 219, 1159), 2, 2)
p-value = 0.05867
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  0.00 1.082225
sample estimates:
odds ratio
  0

 >


*just* shy of the magic 5% . . .







--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] "[<-" plus drop-type extra argument

2008-04-02 Thread Robin Hankin
Hello

I am writing a replacement method for an S4 class and want to pass
an additional argument to  "[<-"() along the lines of  "["()'s  "drop"  
argument.

Specifically, I have an S4  class, call it "foo", with a slot  'x'  
that is a
vector and a slot  'NC' that  is a scalar.

I want to be able to pass a Boolean argument to the replacement
method which specifies whether or not to recalculate  NC (which
is time-consuming and often not needed).  I want the default behaviour
to be "don't recalculate NC".

Toy example follows, in which 'NC' is the sum of x (in my application,
calculating NC is an expensive multidimensional integral).


setClass("foo",
  representation = representation(x="numeric" , NC="numeric"),
  prototype  = list(x=double() , NC=NA_real_)
  )

setReplaceMethod("[",signature(x="foo"),
  function(x,i,j,recalculate=FALSE,value){
jj <- x@x
jj[i] <- value
if(recalculate){
  return(new("foo" , x=jj , NC=sum(jj)))
} else {
  return(new("foo" , x=jj , NC=NA_real_))
}
  }
  )




Then



 >
 > a <- new("foo", x=1:10,NC=45)


 > a[4,recalculate=FALSE] <- 1
 > a
An object of class “foo”
Slot "x":
  [1]  1  2  3  1  5  6  7  8  9 10

Slot "NC":
[1] NA

#  Desired behaviour: NC not recalculated


 >
 > a[4,recalculate=TRUE] <- 1
 > a
An object of class “foo”
Slot "x":
  [1]  1  2  3  1  5  6  7  8  9 10

Slot "NC":
[1] 10051


# Desired behaviour: NC recalculated

 >
 > a[4] <- 1
Error in .local(x, i, j, ..., value) :
   argument "value" is missing, with no default
 >

# Undesired behaviour:  I wanted 'recalculate' to take its default  
value of FALSE, and 'NC' not be recalculated.


How to do this?

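One way out, sketched under the assumption that the "[<-" generic's
formals are (x, i, j, ..., value): keep value as the last formal, after
the dots, and fish any extra arguments out of the dots.  (This reuses
the class definition from the post above.)

setReplaceMethod("[", signature(x = "foo"),
  function(x, i, j, ..., value) {
    recalculate <- isTRUE(list(...)$recalculate)
    jj <- x@x
    jj[i] <- value
    new("foo", x = jj, NC = if (recalculate) sum(jj) else NA_real_)
  })

a <- new("foo", x = 1:10, NC = 45)
a[4] <- 1                        # no error; NC left as NA
a[4, recalculate = TRUE] <- 1    # NC recalculated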




--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] prod(0, 1:1000) ; 0 * Inf etc

2008-04-22 Thread Robin Hankin
Interesting problem this.

My take on it would be that the "true" value depends
on how fast the "0" approaches 0 and how fast the "n"
approaches infinity.

Consider

f1 <- function(n){prod(1/n , seq_len(n))}
f2 <- function(n){prod(1/factorial(n) , seq_len(n))}
f3 <- function(n){prod(n^(-n) , seq_len(n))}

All these are equal to prod( "small number" , 1:"big number")

but applying these functions to an increasing sequence gives different
behaviour:

 > sapply(c(10,100,1000),f1)
[1]  3.628800e+05 9.332622e+155   Inf
 > sapply(c(10,100,1000),f2)
[1]   1   1 NaN
 > sapply(c(10,100,1000),f3)
[1] 3.628800e-04 9.332622e-43  NaN
 >


f1() tends to infinity, f2() tends to 1, and f3() tends to zero.

Figuring out the appropriate limit in cases like this is a job
for a symbolic system.

I would say the original behaviour is desirable.


rksh



On 22 Apr 2008, at 02:43, Bill Dunlap wrote:

> On Mon, 21 Apr 2008, Mathieu Ribatet wrote:
>
>> I definitely do agree with you.
>> Basically, I see two different ways to proceed:
>>
>>   1. one could first check if there are any 0 in the vector and then
>>  return 0 without computing the product
>
> That would fail for prod(0,Inf), which should return the same
> thing as 0*Inf=NaN.  Similarly for prod(0,NA) and prod(0,NaN).
> Scanning for all these things might well be slower than just
> doing the multiplications.  Scanning also means that 0 is treated
> more commutatively than other numbers.
>
>>   2. or convert prod(x1, x2, x3) in prod(c(x1, x2, x3))
>
> c() can convert values of classy objects in undesirable ways.
> E.g.,
>> now<-Sys.time()
>> min(now-file.info(".")$mtime, now-file.info("..")$mtime)
>   Time difference of 3787.759 secs
>> min(c(now-file.info(".")$mtime, now-file.info("..")$mtime))
>   [1] 1.052155
>
> This may be considered a bug in c(), at least for class
> "timediff" (and  "factor" and "ordered"), but c() removes
> attributes.
>
>> Martin Maechler a écrit :
>>> I think most of us would expect  prod(0:1000)  to return 0, and ...
>>>
>>>
>>> ... it does.
>>>
>>> However, many of us also expect
>>>  prod(x1, x2)to be equivalent to
>>>  prod(c(x1,x2))
>>> the same as we can expect that for min(), max(), sum() and such
>>> members of the "Summary" group.
>>>
>>> Consequently, prod(0, 1:1000) should also return 0,
>>> but as you see, it gives  NaN  which may be a bit puzzling...
>>> The explanation is relatively simple:
>>>
>>> 1) The internal implementation uses
>>>
>>> prod(x1, x2) := prod(x1) * prod(x2)
>>>
>>>   which in this case is
>>>
>>> 2)  0 * Infand that is not 0, but NaN;
>>>
>>>  not necessarily because we would want that, but I think just
>>>  because the underlying C math library does so.
>>>
>>>
>>> I would personally like to change both behaviors,
>>> but am currently only proposing to change  prod() such as to
>>> return 0 in such cases.
>>> This would be S-plus compatible, in case that matters.
>>>
>>> Opinions?
>>>
>>> Martin Maechler, ETH Zurich & R-core
>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>
>>
>> --
>> Institute of Mathematics
>> Ecole Polytechnique Fédérale de Lausanne
>> STAT-IMA-FSB-EPFL, Station 8
>> CH-1015 Lausanne   Switzerland
>> http://stat.epfl.ch/
>> Tel: + 41 (0)21 693 7907
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
> 
> Bill Dunlap
> Insightful Corporation
> bill at insightful dot com
> 360-428-8146
>
> "All statements in this message represent the opinions of the author  
> and do
> not necessarily reflect Insightful Corporation policy or position."
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] optional setValidity()

2008-05-07 Thread Robin Hankin

Hi


Suppose I have an S4 class "foo" and a validity checking
function  ".checkfoo()":

setClass("foo",  representation=representation("numeric"))
setValidity("foo" , .checkfoo)

is fine; in my application, .checkfoo() verifies that a bunch
of necessary conditions are met.

But .checkfoo() is very time consuming and I want
to give users the option of switching it off.

Most foo objects that one deals with fall into two or three standard  
types

and in these cases one doesn't need to execute  .checkfoo()
because one can show algebraically that the conditions are
automatically met.

But OTOH, I want the check to be performed "by default" to
stop anyone (me) from being too clever and defining
a non-standard foo object that doesn't meet .checkfoo().

What is best practice here?

Are there any examples I could copy?


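One low-tech possibility, sketched with an invented option name: let the
validity function itself consult an option and return TRUE immediately
when the user has switched the expensive work off.

.checkfoo <- function(object) {
  if (isTRUE(getOption("foo.skip.validity", FALSE)))
    return(TRUE)                     # expensive checks skipped on request
  ## ... the expensive conditions would be verified here ...
  TRUE
}

setClass("foo", representation = representation(x = "numeric"))
setValidity("foo", .checkfoo)

options(foo.skip.validity = TRUE)    # for users who know their object is standard
new("foo", x = 1:3)                  # constructed without the slow check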



--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
 tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] \S4method in combination with "[" and "[<-"

2008-05-21 Thread Robin Hankin

Hello Matthias

I too struggled with this for a long long time.

I'm not sure if this answers your question, but
Brobdingnag_1.1-2.tar.gz is clean under R-2.7.0,
and this package includes S4 methods for extract/replace.

Extract.Rd in the package doesn't use \S4method; also, I
couldn't figure out how to include a "usage" section
without R CMD check throwing a wobbly.

Extract.Rd is not ideal, but
seems to work in practice: the user types ?"[.brob"
and gets some support, but it would have been better
to have an explicit usage section too.


best wishes


rksh





On 21 May 2008, at 09:23, Matthias Kohl wrote:


Dear developers,

We want to use "\S4method" to document new S4-methods for "[" and  
"[<-". We use this for other functions/methods and it works without  
any problem, but in case of "[" and "[<-" we didn't manage to bring  
this to work.


The problem occurs in the development version of our package  
"distrSim" which can be found under http://r-forge.r-project.org/R/?group_id=87 
.


The warning we obtain is

Bad \usage lines found in documentation object 'Subsetting-methods':
\S4method{[}{SeqDataFrames}(x, i, j, k, drop = FALSE)
\S4method{[<-}{SeqDataFrames}(x, i, j, k, value)

Of course, we tried several different possibilities but with no  
success.


Does someone know a package which shows a use case for this  
situation? I looked in several packages but could not found any.


Thanks for your help!
Matthias

--
Dr. Matthias Kohl
www.stamats.de

______
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


--
Robin Hankin
Uncertainty Analyst and Neutral Theorist,
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
 tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] array indexing

2008-12-05 Thread Robin Hankin

Hi.

I have been pondering array indexing via matrices.

> a <- array(1:27,rep(3,3))
>  index <- matrix(c(1,1,1,1,2,3),2,3,byrow=TRUE)
> a[index]
[1]  1 22


as expected and documented.  But what was the thinking
behind the decision to access the array by rows rather
than columns?

The 'index' matrix is ordered as [1,1,1,2,1,3] and so
the extraction is  a[index[c(1,3,5)]] for the first element
and a[index[c(2,4,6)]] for the second.


If the indexing was by columns then we would have


> a <- array(1:27,rep(3,3))
>  index <- matrix(c(1,1,1,1,2,3),2,3)
> a[index]
[1]  1 22


Thus the extraction is a[index[1:3]] and a[index[4:6]].

This seems to be a sensible way of going about array indexing.
For example, even *defining* the 'index'
matrix is simpler.

So, what is the thinking behind the  behaviour as implemented?

I'm asking because I want to understand the thinking behind the
decision.

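For the archive: if the coordinates are more naturally held one point
per column, a transpose recovers the row-per-point form that "[" expects
-- a small sketch:

a <- array(1:27, rep(3, 3))
pts <- matrix(c(1,1,1,  1,2,3), nrow = 3)   # one point per column
a[t(pts)]                                   # 1 22, as above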

rksh

--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] \description in Rd files

2009-01-05 Thread Robin Hankin

Prof Brian Ripley wrote:

I think you meant \describe 

On Mon, 5 Jan 2009, Robin Hankin wrote:


Hi

The aylmer package generates a warning (under R-2.9.0) for an Rd file 
which I think is OK.

The package is clean under R-2.8.1.


Did you actually look at the help under 2.8.1: it is I am sure not 
what you intended?  Oops 





Ouch.   aylmer.dvi  looks fine under  R-2.8.1 but indeed the text 
versions are not as intended.


I guess that the machinery for text documentation is less tolerant of  
inappropriate LaTeX-isms.


All works fine with \describe in place of \description.

thanks again

rksh




--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] \description in Rd files

2009-01-05 Thread Robin Hankin

Hi

I make a point of going through my packages every so often and perusing
the check results on CRAN.

The aylmer package generates a warning (under R-2.9.0) for an Rd file 
which I think is OK.

The package is clean under R-2.8.1.

Specifically, the warning is:

   * checking Rd files ... OK
   * checking Rd files against version 2 parser ... *WARNING*
 *** error on file ./man/icons.Rd
 Error in parse_Rd("./man/icons.Rd", encoding = "unknown") :
 ./man/icons.Rd: 16:3: unexpected section header at
 15: :
 16: \description

 problem found in ‘icons.Rd’
 The Rdversion 2 parser is experimental, but almost all reports are
 correct.



And the relevant file lines in icons.Rd are:

14 The six icons were used in this study were:
15 \description{
16 \item[PB] polar bears, which face extinction through loss of ice
17 floe hunting grounds
18 \item[NB] the Norfolk Broads, which flood due to intense rainfall
19 events
20 \item[LF] London flooding, as a result of sea level rise
21 \item[THC] the Thermo-haline circulation, which may slow or stop as
22 a result of anthropogenic modification of the hydrological cycle
23 \item[OA] oceanic acidification as a result of anthropogenic emissions
24 of carbon dioxide
25 \item[WAIS] the West Antarctic Ice Sheet, which is calving into the sea
26 as a result of climate change
27 }



[this using nl]






--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] vignette compilation times

2009-02-19 Thread Robin Hankin

Dear All

I am preparing a number of vignettes that require a very  long time to
process with Sweave.  The longest one takes 10 hours.  I love the weaver
package!

Is a package that includes such a computationally intensive vignette
acceptable on CRAN?  Are there any guidelines here? 




--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] vignette compilation times

2009-02-19 Thread Robin Hankin

thanks for this clarification Uwe

Could I include the r_env_cache/  directory in the package
and then assume that the CRAN checks use

Sweave( , driver=weaver())

in which case the process takes about 10 seconds?

rksh





Uwe Ligges wrote:



Robin Hankin wrote:

Dear All

I am preparing a number of vignettes that require a very  long time to
process with Sweave.  The longest one takes 10 hours.  I love the weaver
package!

Is a package that includes such a computationally intensive vignette
acceptable on CRAN?  Are there any guidelines here?


Robin,

please try to keep a package below 5 minutes check time. You know, 
it's hard to perform daily checks for 1700 packages if just one 
already runs for 10 hours.
In reasonable circumstances, we do have exclude lists, but it probably 
does not make sense to provide that computational intensive vignettes.
Perhaps you can provide some intermediate results in form of Rdata 
files  in order to reduce runtime of your vignette's creation (and 
checks!).


Best wishes,
Uwe



--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] vignettes in a bundle

2009-04-28 Thread Robin Hankin

Hi

I have a bundle comprising three packages.

Each package has a vignette.  Currently each
vignette has a separate .bib file.

How do I arrange the bundle so that each
vignette accesses a single, common, .bib file?


thanks

Robin

--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] vignettes in a bundle

2009-04-28 Thread Robin Hankin

Hello Romain

this is brilliant; it never occurred to me to use cat() in this way.

It works but I don't know why.

With:

<<echo=FALSE>>=
bib <- system.file( "doc", "bayesian.bib", package = "emulator" )
cat( "\\bibliography{",bib,"}\n",sep='')
@

in the Rnw file, the TeX file looks like this:

\begin{Schunk}
\begin{Soutput}
\bibliography{/usr/local/lib/R/library/emulator/doc/bayesian.bib}
\end{Soutput}
\end{Schunk}


So, my question is: why does TeX parse the  middle line? why isn't this
line interpreted as regular Soutput?

best wishes and thanks again

Robin



Romain Francois wrote:

Hi Robin,

Something like:

<<echo=FALSE>>=
bib <- system.file( "bib", "mybib.bib", package = "yada" )
cat( "\\bibliography{",bib,"}\n")
@

It would also be nice to be able to use bibliography in Rd files ...

Romain

Robin Hankin wrote:

Hi

I have a bundle comprising three packages.

Each package has a vignette.  Currently each
vignette has a separate .bib file.

How do I arrange the bundle so that each
vignette accesses a single, common, .bib file?


thanks

Robin







--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] vignettes in a bundle

2009-04-28 Thread Robin Hankin

Romain Francois wrote:

Would this work better:

<<echo=FALSE, results=tex>>=
bib <- system.file( "doc", "bayesian.bib", package = "emulator" )
cat( "\\bibliography{",bib,"}\n",sep='')
@

Romain


Yes, this gives cleaner TeX output (I keep on forgetting about Sweave's
'result' code chunk option).

But I still don't understand why your first suggestion worked.
How does TeX 'know' that this is not to be included verbatim?

best wishes and thanks again

Robin



Robin Hankin wrote:

Hello Romain

this is brilliant; it never occurred to me to use cat() in this way.

It works but I don't know why.

With:

<<echo=FALSE>>=
bib <- system.file( "doc", "bayesian.bib", package = "emulator" )
cat( "\\bibliography{",bib,"}\n",sep='')
@

in the Rnw file, the TeX file looks like this:

\begin{Schunk}
\begin{Soutput}
\bibliography{/usr/local/lib/R/library/emulator/doc/bayesian.bib}
\end{Soutput}
\end{Schunk}


So, my question is: why does TeX parse the  middle line? why isn't this
line interpreted as regular Soutput?

best wishes and thanks again

Robin



Romain Francois wrote:

Hi Robin,

Something like:

<<echo=FALSE>>=
bib <- system.file( "bib", "mybib.bib", package = "yada" )
cat( "\\bibliography{",bib,"}\n")
@

It would also be nice to be able to use bibliography in Rd files ...

Romain

Robin Hankin wrote:

Hi

I have a bundle comprising three packages.

Each package has a vignette.  Currently each
vignette has a separate .bib file.

How do I arrange the bundle so that each
vignette accesses a single, common, .bib file?


thanks

Robin













--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] parsing Rd files and \deqn{}

2009-05-01 Thread Robin Hankin

Hi

[R-2.9.0]

I am having difficulty including a LaTeX formula in an Rd
file.

The example given in section 2.7 in 'Parsing Rd files' is:


   \deqn{ f(x) = \left\{
 \begin{array}{ll}
 0 & x<0 \\
 1 & x\ge 0
 \end{array}
 \right. }{non latex}


For me, this gives:

\deqn{ f(x) = \left\{
\begin{array}{ll}
0 \& x<0 \bsl{}
1 \& x\ge 0
\end{array}
\right. }{}

in the tex file, which is not  desired because the ampersand
is escaped; the '&' symbol appears in the dvi file, and I
want an ampersand  to indicate  alignment.

Also, the '\\' appears as \bsl{}, which is undesired; the
resulting dvi file (made by R CMD Rd2dvi) looks wrong.

How do I write the Rd file so as to produce non-escaped
ampersands?


--
Robin K. S. Hankin
Uncertainty Analyst
University of Cambridge
19 Silver Street
Cambridge CB3 9EP
01223-764877

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R-latest.tar.gz make error

2006-04-13 Thread Robin Hankin
Hi.


(MacOSX 10.4.6) I downloaded R-latest.tar.gz  just now from src/base- 
prerelease
on CRAN.  "make" gave the following error after an apparently successful
./configure:


[snip]
util.c: In function 'Rf_type2char':
util.c:247: warning: return discards qualifiers from pointer target type
gcc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -no-cpp-precomp -I. -I../../src/include -I../../src/include -I/ 
usr/local/include -DHAVE_CONFIG_H  -fPIC -fno-common  -g -O2 - 
std=gnu99 -c version.c -o version.o
gcc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -no-cpp-precomp -I. -I../../src/include -I../../src/include -I/ 
usr/local/include -DHAVE_CONFIG_H  -fPIC -fno-common  -g -O2 - 
std=gnu99 -c vfonts.c -o vfonts.o
g77  -fPIC -fno-common  -g -O2 -c xxxpr.f -o xxxpr.o
gcc -dynamiclib -L/usr/local/lib -dynamiclib -install_name libR.dylib  
-compatibility_version 2.3.0  -current_version 2.3.0  - 
headerpad_max_install_names -o libR.dylib Rembedded.o CConverters.o  
CommandLineArgs.o Rdynload.o Renviron.o RNG.o apply.o arithmetic.o  
apse.o array.o attrib.o base.o bind.o builtin.o character.o coerce.o  
colors.o complex.o connections.o context.o cov.o cum.o dcf.o  
datetime.o debug.o deparse.o deriv.o dotcode.o dounzip.o dstruct.o  
duplicate.o engine.o envir.o errors.o eval.o format.o fourier.o  
gevents.o gram.o gram-ex.o graphics.o identical.o internet.o  
iosupport.o lapack.o list.o localecharset.o logic.o main.o mapply.o  
match.o memory.o model.o names.o objects.o optim.o optimize.o  
options.o par.o paste.o pcre.o platform.o plot.o plot3d.o plotmath.o  
print.o printarray.o printvector.o printutils.o qsort.o random.o  
regex.o registration.o relop.o rlocale.o saveload.o scan.o seq.o  
serialize.o size.o sort.o source.o split.o sprintf.o startup.o  
subassign.o subscript.o subset.o summary.o sysutils.o unique.o util.o  
version.o vfonts.o xxxpr.o   `ls ../appl/*.o ../nmath/*.o ../unix/ 
*.o  2>/dev/null` -framework vecLib -L/usr/local/lib/gcc/powerpc- 
apple-darwin7.9.0/3.4.4 -lg2c -lgcc_s -lSystem  ../extra/zlib/ 
libz.a ../extra/bzip2/libbz2.a ../extra/pcre/libpcre.a  ../extra/intl/ 
libintl.a  -Wl,-framework -Wl,CoreFoundation -lreadline  -lm -liconv
ld: warning multiple definitions of symbol _xerbla_
print.o definition of _xerbla_ in section (__TEXT,__text)
/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/ 
vecLib.framework/Versions/A/libBLAS.dylib(single module) definition  
of _xerbla_
ld: Undefined symbols:
restFP
saveFP
/usr/bin/libtool: internal link edit command failed
make[3]: *** [libR.dylib] Error 1
make[2]: *** [R] Error 2
make[1]: *** [R] Error 1
make: *** [R] Error 1
238-250:~/scratch/R-beta%




--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] unsigned long long integers

2006-04-24 Thread Robin Hankin
Hi.

R-exts, section 1.7, discusses the passing of long long integers  
between R and C.
I want to use unsigned long long integers, but
I only need them inside a C function.

I have a  function that maps the nonnegative integers to the positive  
integers.
The function is defined by a delicate recursive algorithm that is exact
for integer arithmetic, but wildly incorrect for doubles.

The function increases rapidly with its argument, and ordinary integers
are not enough to illustrate my point (in a paper I am writing).


The C function is as follows:



void numbparts_longint(int *n, double *p){/* p(1)...p(n) calculated */
 int i,s,f,r;
 unsigned long long int *ip;
 unsigned long long int pp[*n];

/* COMPLICATED RECURSIVE ALGORITHM IN WHICH PP IS FILLED SNIPPED */

 for(i=0 ; i < *n ; i++){
 p[i] = (double) pp[i];
 }
}



This compiles fine with "gcc -Wall" (and illustrates my point!)
but R CMD check reports


   partitions.c:180: warning: ISO C90 does not support 'long long'
   partitions.c:181: warning: ISO C90 does not support 'long long'


I really want long long integers here.  What are my options?
[the same happens with signed long long integers]





--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] R CMD check problem

2006-05-22 Thread Robin Hankin
Hi

I have a package that I'm testing.
It seems to install fine and it works, as far as I can tell.
For example, I can install the package, and use it,
  and source the test suite with no errors.

My problem is with R CMD check.

It passes on R-2.2-0:

Robin-Hankins-Computer:~/scratch% R CMD check ./partitions_1.1-0.tar.gz
* checking for working latex ... OK

[snip]

make[1]: Leaving directory `/users/sat/rksh/partitions.Rcheck/tests'
OK
* creating partitions-manual.tex ... OK
* checking partitions-manual.tex ... OK




But it doesn't pass on R-2.3.0, MacOSX:



Robin-Hankins-Computer:~/scratch% R CMD check ./partitions_1.1-0.tar.gz
* checking for working latex ... OK
*[snip]

* checking examples ... OK
* checking tests ...
   Running 'aaa.R'
make[1]: *** [aaa.Rout] Error 1
make: *** [all] Error 2
ERROR

checking  aaa.Rout.fail, I get

Robin-Hankins-Computer:~/scratch% cat ./partitions.Rcheck/tests/aaa.Rout.fail
/Library/Frameworks/R.framework/Resources/bin/R: line 192: /Library/Frameworks/R.framework/Resources/bin/exec/i386/R: cannot execute binary file
/Library/Frameworks/R.framework/Resources/bin/R: line 192: /Library/Frameworks/R.framework/Resources/bin/exec/i386/R: Unknown error: 0
Robin-Hankins-Computer:~/scratch%



anyone got any insight into this?




--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Idempotent apply

2006-06-09 Thread Robin Hankin
Hi Hadley, Rteam,

I encountered similar issues trying to reproduce Matlab's "sort"
functionality.

[here  "a" is array(1:27, c(2,3,4))]

In Matlab, one can type sort(a,i) to sort along dimension i.

One can reproduce this functionality thus:

asort <- function(a,i,FUN="sort"){
   j <- 1:length(dim(a))
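   ## apply() returns FUN's output as dimension 1; aperm() then moves it back to position i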
   aperm(apply(a,j[-i],FUN),append(j[-1],1,i-1))
}


then  things like

asort(a,1)
asort(a,2)
asort(a,1,sample)

work as expected (at least by this recovering Matlab user)
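
A quick sanity check along the same lines (just a sketch): an array filled
with 1:24 is nondecreasing along every dimension, so sorting along any
dimension should return it unchanged:

a <- array(1:24, c(2,3,4))
stopifnot(identical(asort(a,1), a),
          identical(asort(a,2), a),
          identical(asort(a,3), a))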



best wishes



rksh





On 9 Jun 2006, at 08:50, hadley wickham wrote:

> Dear all,
>
> I have been working on an idempotent version of apply, such that
> applying a function f(x) = x (ie. force) returns the same array (or a
> permutation of it depending on the order of the margins):
>
> a <- array(1:27, c(2,3,4))
>
> all.equal(a, iapply(a, 1, force))
> all.equal(a, iapply(a, 1:2, force))
> all.equal(a, iapply(a, 1:3, force))
> all.equal(aperm(a, c(2,1,3)), iapply(a, 2, force))
> all.equal(aperm(a, c(3,1,2)), iapply(a, 3, force))
>
> I have also generalised apply so that the function can return an array
> of any dimension and it will be slotted in orthogonally to the
> existing dimensions:
>
> iapply(a, 1, min)
> iapply(a, 2, min)
> iapply(a, 3, min)
> iapply(a, 1, range)
> iapply(a, 2, range)
> iapply(a, 3, range)
>
> I have included my code below.  I'd be interested to get your  
> feedback on:
>
>  * whether you can find an edge case where it doesn't work
>
>  * how I can make the subsetting code more elegant - the current
> kludgework of do.call seems to be suboptimal, but I couldn't figure
> out a better way
>
>  * I was also surprised that a & b display differently in this example:
>
> a <- b <- as.array(1:3)
> dim(b) <- c(3,1)
>
> Any other comments are very much appreciated!
>
> Hadley
>
>
> iapply <- function(x, margins=1, fun, ..., REDUCE=TRUE) {
>   if (!is.array(x)) x <- as.array(x)
>   
>   reorder <- c(margins, (1:length(dim(x)))[-margins])
>
>   x <- aperm(x, (reorder))
>   x <- compactify(x, length(margins))
>
>   results <- lapply(x, fun, ...)
>   dim(results) <- dim(x)
>   
>   results <- decompactify(results)
>   if (REDUCE) reduce(results) else results
> }
> vdim <- function(x) if (is.vector(x)) length(x) else dim(x)
>
>
> # Compacts an array into a smaller array of lists containing the
> remaining dimensions
> compactify <- function(x, dims = 1) {
>
>   d <- dim(x)
>   ds <- seq(along=d)
>   margins <- 1:dims
>   
>   subsets <- do.call(expand.grid, structure(lapply(d[margins], seq,
> from=1), names=margins))
>   subsets[, ds[-margins]] <- TRUE
>   
>   res <- lapply(1:nrow(subsets), function(i) do.call("[",c(list(x),
> subsets[i, ], drop=TRUE)))
>   dim(res) <- dim(x)[margins]
>   
>   res
> }
>
> # Inverse of compactity
> decompactify <- function(x) {
>
>   subsets <- do.call(expand.grid, structure(lapply(dim(x), seq,  
> from=1)))
>   subsets <- cbind(subsets, x=matrix(TRUE, ncol=length(vdim(x[[1]])),
> nrow=nrow(subsets)))
>
>   y <- array(NA, c(vdim(x), vdim(x[[1]])))
>   for(i in 1:length(x)) {
>   y <- do.call("[<-", c(list(y), unname(subsets[i, ]), value = list(x[[i]])))
>   }
>   y
> } 
> reduce <- function(x) {
>   do.call("[", c(list(x), lapply(dim(x), function(x) if (x==1) 1 else T), drop=TRUE))
> }
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] [R] combining tables

2006-06-20 Thread Robin Hankin
On 19 Jun 2006, at 12:45, Gabor Grothendieck wrote:
> Try this:
>
> both <- c(x,y)
> as.table(tapply(both, names(both), sum))


thanks for this, Gabor.

The class of the objects I am manipulating in my
package is c("count", "table").

It'd be nice to overload the "+" symbol so that I can add
two count objects like this with a simple "x+y".  To this end, I
defined an Ops.count() function that executed Gabor's summation.

But this broke all sorts of functionality in the package,
because the ">" relation was not defined in my Ops.count()
function.

In my case, the only operation that I want to redefine is "+".
I want to leave all the others unchanged.

What is Best Practice for redefining just one binary operator?
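
One possible sketch (untested, and relying on the usual S3 rule that a
specific method such as "+.count" is preferred to the Ops.count group
method) is to define the single method

"+.count" <- function(e1, e2){
    ## assumes both arguments are count objects carrying names
    both <- c(unclass(e1), unclass(e2))
    out <- as.table(tapply(both, names(both), sum))
    class(out) <- c("count", class(out))
    out
}

so that ">" and the other members of the Ops group keep their default
behaviour.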



>
> On 6/19/06, Robin Hankin <[EMAIL PROTECTED]> wrote:
>> Hi
>>
>> Suppose I have two tables of counts of different animals and I
>> wish to pool them so the total of the sum is the sum of the total.
>>
>> Toy example follows (the real ones are ~10^3 species and ~10^6
>> individuals).
>> x and y are zoos that have merged and I need a combined inventory
>> including animals that are temporarily absent.
>>
>>
>>  > (x <- as.table(c(cats=3,squid=7,pigs=2,dogs=1,slugs=0)))
>> cats squid  pigs  dogs slugs
>>    3     7     2     1     0
>>  > (y <- as.table(c(cats=4,dogs=5,slugs=3,crabs=0)))
>> cats  dogs slugs crabs
>>    4     5     3     0
>>  > (desired.output <- as.table(c(cats=7,squid=7,pigs=2,dogs=6,slugs=3,crabs=0)))
>> cats squid  pigs  dogs slugs crabs
>>    7     7     2     6     3     0
>>  >
>>
>>
>>
>>
>> Note that we have 7 cats altogether, and the crabs correctly show
>> as zero counts.
>>
>>
>> How to do this nicely in R?
>>
>>
>>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] invalid alignment error in R-2.4.0

2006-07-11 Thread Robin Hankin
Hi

I thought I'd pass one of my packages through R CMD check with R-2.4.0
with the following result (it's clean under R-2.3.1):


[snip]
 >
 >   # histogram of the fourth power:
 > hist(out^4, col="gray")

*** caught bus error ***
address 0x12, cause 'invalid alignment'

Traceback:
1: sort(x, partial = c(half, half + 1))
2: sum(sort(x, partial = c(half, half + 1))[c(half, half + 1)])
3: stats::median(diff(breaks))
4: hist.default(out^4, col = "gray")
5: hist(out^4, col = "gray")
aborting ...



Under R-2.4.0, hist(out^4) repeatably gives an error like that above.
Here's a dput() of out:


c(10.4975870476354, 10.3973239490546, 10.9079973318563,  
10.9201457586087,
10.863164987001, 10.8092412328219, 10.3740979640666, 10.3933170416021,
10.1571361926693, 10.7231637475341, 10.8495903667896, 10.2760101321262,
10.3999724625721, 11.1422484374362, 10.1623400428855, 10.9139189812841,
11.1313700266654, 10.4214929867460, 10.9543767973144, 10.2925796047365,
10.3399040002101, 10.5080265067013, 10.4963598344302, 10.5694912655817,
10.9088365086950, 9.67007136377566, 10.4303159857457, 10.6734035266469,
10.3555432530979, 10.9738495753501, 10.350313651, 11.2210598170116,
10.8020906590915, 10.7391185468963, 10.3303267171864, 10.7176410493307,
10.3527634000890, 10.6331145125840, 10.7946862157461, 10.6147608946858,
9.85567630738787, 11.0289144282434, 10.742857648964, 10.866630627911,
10.5278318354308, 10.3553983376990, 10.7900270843436, 10.3467961125517,
10.5126782499258, 10.8575135939962, 10.9151746119094, 11.2000951011802,
10.4133108985045, 10.5265186993107, 10.7148111540688, 10.3722159808052,
10.1911424590529, 10.8375326158672, 10.2892046453081, 11.0159788575821,
10.2104834661186, 10.0718751926059, 11.5503607473136, 10.9134877529340,
11.3063246702428, 10.0682022386836, 10.6766007351429, 10.6029531885996,
10.3568338147980, 10.5246512104442, 10.9964827564484, 10.4826791470128,
10.334098026, 10.4201862775486, 10.6526293411458, 10.1270181743699,
10.7479561453406, 10.3223366380115, 10.6640317023258, 10.8816465650639,
10.2469734194448, 11.0595077832844, 10.6211764829084, 10.8387020014927,
10.3842712860829, 10.3288969420998, 11.1095936345021, 10.7755741380517,
10.8891163113089, 10.9239878986268, 10.4674437486482, 10.4494516106226,
10.6816375084280, 10.1609470064992, 10.6055689487767, 10.3759153410817,
10.4743618410399, 10.9932886540585, 10.2563007403496, 10.0821264920858,
10.7293259154111, 10.8834112318584, 10.5285102045021, 10.7068278466484,
10.9517121917501, 10.6249671128484, 10.8188751147001, 10.5327448695580,
10.7315642237059, 10.4996799637132)


 > R.version
_
platform   powerpc-apple-darwin8.7.0
arch   powerpc
os darwin8.7.0
system powerpc, darwin8.7.0
status Under development (unstable)
major  2
minor  4.0
year   2006
month  07
day09
svn rev38523
language   R
version.string R version 2.4.0 Under development (unstable)  
(2006-07-09 r38523)
 >

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] invalid alignment error in R-2.4.0

2006-07-11 Thread Robin Hankin
08, rho=0x1d13830) at eval.c: 
1427
#28 0x01099c04 in Rf_eval (e=0x1d143b0, rho=0x1d13830) at eval.c:435
#29 0x0109bc60 in do_begin (call=0x1d14378, op=0x180a2cc,  
args=0x1d14394, rho=0x1d13830) at eval.c:1101
#30 0x01099bb4 in Rf_eval (e=0x1d14378, rho=0x1d13830) at eval.c:425
#31 0x01099bb4 in Rf_eval (e=0x1d140f4, rho=0x1d13830) at eval.c:425
#32 0x0109bc60 in do_begin (call=0x1d14654, op=0x180a2cc,  
args=0x1d140d8, rho=0x1d13830) at eval.c:1101
#33 0x01099bb4 in Rf_eval (e=0x1d14654, rho=0x1d13830) at eval.c:425
#34 0x0109d0ac in Rf_applyClosure (call=0x1fa96bc, op=0x1d15a7c,  
arglist=0x1d137dc, rho=0x1d39024, suppliedenv=0x181d304) at eval.c:615
#35 0x01099d94 in Rf_eval (e=0x1fa96bc, rho=0x1d39024) at eval.c:456
#36 0x0109ae7c in Rf_evalList (el=0x1fa96a0, rho=0x1d39024) at eval.c: 
1427
#37 0x01099c04 in Rf_eval (e=0x1fa9668, rho=0x1d39024) at eval.c:435
#38 0x0109ba10 in do_set (call=0x1fa95c0, op=0x180a390,  
args=0x1fa95dc, rho=0x1d39024) at eval.c:1337
#39 0x01099bb4 in Rf_eval (e=0x1fa95c0, rho=0x1d39024) at eval.c:425
#40 0x0109bc60 in do_begin (call=0x1facdc0, op=0x180a2cc,  
args=0x1fa95a4, rho=0x1d39024) at eval.c:1101
#41 0x01099bb4 in Rf_eval (e=0x1facdc0, rho=0x1d39024) at eval.c:425
#42 0x0109d0ac in Rf_applyClosure (call=0x1d3a2c4, op=0x1fad6f4,  
arglist=0x1d3a040, rho=0x1d3a094, suppliedenv=0x1d3a21c) at eval.c:615
#43 0x010cc7e0 in Rf_usemethod (generic=0x1f69540 "hist",  
obj=0x1fad6f4, call=0x2, args=0x4, rho=0x1d3a094, callrho=0x181d2e8,  
defrho=0x2aa59f8, ans=0xbfffe148) at objects.c:311
#44 0x010ccb20 in do_usemethod (call=0x1d3af24, op=0x1,  
args=0x1d3af40, env=0x1d3a094) at objects.c:395
#45 0x01099bb4 in Rf_eval (e=0x1d3af24, rho=0x1d3a094) at eval.c:425
#46 0x0109d0ac in Rf_applyClosure (call=0x1d3b724, op=0x1d3ab88,  
arglist=0x1d3a040, rho=0x181d2e8, suppliedenv=0x181d304) at eval.c:615
#47 0x01099d94 in Rf_eval (e=0x1d3b724, rho=0x181d2e8) at eval.c:456
#48 0x010b7d30 in Rf_ReplIteration (rho=0x181d2e8, savestack=0,  
browselevel=0, state=0xbfffed88) at main.c:254
#49 0x010b7e88 in R_ReplConsole (rho=0x181d2e8, savestack=0,  
browselevel=0) at main.c:302
#50 0x010b8198 in run_Rmainloop () at main.c:913
#51 0x2cec in main (ac=42293336, av=0x1) at Rmain.c:33
(gdb)


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] invalid alignment error in R-2.4.0

2006-07-11 Thread Robin Hankin
d3ee00, op=0x1d3f0a0,  
arglist=0x1d3ca00, rho=0x181d2e8, suppliedenv=0x181d304) at eval.c:615
#35 0x01099d94 in Rf_eval (e=0x1d3ee00, rho=0x181d2e8) at eval.c:456
#36 0x010b7d30 in Rf_ReplIteration (rho=0x181d2e8, savestack=0,  
browselevel=0, state=0xbfffed88) at main.c:254
#37 0x010b7e88 in R_ReplConsole (rho=0x181d2e8, savestack=0,  
browselevel=0) at main.c:302
#38 0x010b8198 in run_Rmainloop () at main.c:913
#39 0x2cec in main (ac=42540144, av=0x1) at Rmain.c:33
(gdb)





On 11 Jul 2006, at 10:54, Prof Brian Ripley wrote:

> On Tue, 11 Jul 2006, Martin Maechler wrote:
>
>> Hi Robin,
>>
>> thanks for the extra info.  I have no clue what the problem
>> might be.
>>
>> But from R-level debugging (and the R traceback you wrote
>> originally),
>> I assume you could trigger the problem already by a simple
>>
>>   median(rep(1000, 10))
>>
>> is that the case? If yes, please follow up on R-devel.
>
> I don't see why you assume so: there are multitudinous paths through
> qsort.
>
>> In any case, let's wait for others (Mac specialists and/or other
>> R-core members) to voice ideas.
>
> I checked this under valgrind, which normally shows up such errors,  
> and
> found nothing, even with gctorture on (and valgrind instrumentation of
> R's memory management on).
>
> You used the dput and Robin used load(), so presumably on not  
> exactly the
> same object.  I think Robin needs to test what he actually gave us,  
> to be
> sure.  If that is the case, I suspect the compiler.
>
>>
>>

--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] pari/gp interface

2006-07-25 Thread Robin Hankin
Hi

I'm developing an R package that
needs to execute some code written in pari/gp.

I've used this before from an R package (elliptic) but the interface  
is very
basic: the R function creates a string such as the following:

string <- "echo ' ellwp ([ 2+0*I , 0+2*I ], 1+0*I )' | gp -q"

And then

system(string)

returns the output from gp which then needs to be text processed  
(translating "I"
to "i", etc).

I don't think this approach would work under Windows.

Does anyone have any experience of calling pari/gp from R?

Or any ideas for a more portable method than the one above?




[
PARI/GP is a widely used computer algebra system designed for fast  
computations
in number theory.  It is freely available at

  http://pari.math.u-bordeaux.fr/
]


--
Robin Hankin
Uncertainty Analyst
National Oceanography Centre, Southampton
European Way, Southampton SO14 3ZH, UK
  tel  023-8059-7743

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

