[Rd] Creating an environment with attributes in a package

2010-07-16 Thread Jon Clayden
Dear all,

I am trying to create an environment object with additional attributes, viz.

Foo <- structure(new.env(), name="Foo")

Doing this in a standard session works fine: I get the environment
with attr(,"name") set as expected. But if the same code appears
inside a package source file, I get just the plain environment with no
attributes set. Using a non-environment object works as I would expect
within the package (i.e. the attributes remain).

I've looked through the documentation for reasons for this, and the
only thing I've found is the mention in the language definition that
"assigning attributes to an environment can lead to surprises". I'm
not sure if this is one of the surprises that the author(s) had in
mind! Could someone tell me whether this is expected, please?

All the best,
Jon

--
Jonathan D Clayden, PhD
Lecturer in Neuroimaging and Biophysics
Radiology and Physics Unit
UCL Institute of Child Health
30 Guilford Street
LONDON  WC1N 1EH
United Kingdom

t | +44 (0)20 7905 2708
f | +44 (0)20 7905 2358
w | www.homepages.ucl.ac.uk/~sejjjd2/
w | www.diffusion-mri.org.uk/people/1

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Creating an environment with attributes in a package

2010-07-16 Thread Jon Clayden
On 16 July 2010 13:32, Hadley Wickham  wrote:
> On Fri, Jul 16, 2010 at 2:08 PM, Jon Clayden  wrote:
>> Dear all,
>>
>> I am trying to create an environment object with additional attributes, viz.
>>
>> Foo <- structure(new.env(), name="Foo")
>>
>> Doing this in a standard session works fine: I get the environment
>> with attr(,"name") set as expected. But if the same code appears
>> inside a package source file, I get just the plain environment with no
>> attributes set. Using a non-environment object works as I would expect
>> within the package (i.e. the attributes remain).
>>
>> I've looked through the documentation for reasons for this, and the
>> only thing I've found is the mention in the language definition that
>> "assigning attributes to an environment can lead to surprises". I'm
>> not sure if this is one of the surprises that the author(s) had in
>> mind! Could someone tell me whether this is expected, please?
>
> You'll be much less surprised if you do:
>
> Foo <- structure(list(new.env()), name="Foo")
>
> Attributes on reference objects are also passed by reference, and
> surprises will result.

Ah, it's good to know the core reason for the "surprises"! Sounds like
the best thing is to refactor things as you suggest.
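
To illustrate for the archives, a quick sketch of the difference (my own example, not from the exchange above): environments are passed by reference, so their attributes are shared between "copies", whereas a wrapping list is copied by value.

```r
e <- new.env()
f <- e                       # no copy is made: f and e are the same environment
attr(f, "name") <- "Foo"
attr(e, "name")              # "Foo" -- set through f, but visible via e too

g <- structure(list(new.env()), name = "Foo")
h <- g                       # the list wrapper has value semantics
attr(h, "name") <- "Bar"     # modifying h's attribute duplicates the list
attr(g, "name")              # still "Foo"
```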

Many thanks for all the responses.

Regards,
Jon



[Rd] Reference classes

2010-10-22 Thread Jon Clayden
Dear all,

First, many thanks to John Chambers, and anyone else who was involved,
for the new support for "reference classes" in R 2.12.0. It's nice to
see this kind of functionality appear in a relatively R-like form, and
with the blessing of the core team. In some contexts it is undoubtedly
appealing to associate a set of methods with a class directly, rather
than defining a load of generics which are only ever likely to be
implemented for a single class, or some other such arrangement. (I was
actually in the process of writing a package which did something
similar to the reference class idea, although it is less fully
realised.)

My initial experiences with this functionality have been very
positive. I've stumbled over one minor issue so far: default values of
method parameters are not displayed by the help() method, viz.

> library(methods)
> Regex <- setRefClass("Regex", fields="string", methods=list(
+ isMatch = function (text, ignoreCase = FALSE)
+ {
+ 'Returns a logical vector of the same length as "text",
indicating whether or not each element is a match to the regular
expression.'
+ grepl(string, text, ignore.case=ignoreCase, perl=TRUE)
+ }
+ ))
> Regex$help("isMatch")
Call: $isMatch(text, ignoreCase = )

Returns a logical vector of the same length as "text", indicating
whether or not each element is a match to the regular expression.

It seems unlikely that this is intentional, but correct me if I'm
wrong. It still seems to happen with the latest R-patched (Mac OS X
10.5.8).

As a suggestion, it would be nice if the accessors() method could be
used to create just "getters" or just "setters" for particular fields,
although I realise this can be worked around by removing the unwanted
methods afterwards.

More generally, I was wondering how firm the commitment is to
providing this kind of programming mechanism. I know it's likely to
change in some ways in future releases, but can I use it in packages,
trusting that only a few tweaks will be needed for compatibility with
future versions of R, or is it possible that the whole infrastructure
will be removed in future?

All the best,
Jon



Re: [Rd] Reference classes

2010-10-22 Thread Jon Clayden
On 22 October 2010 18:55, John Chambers  wrote:

>> As a suggestion, it would be nice if the accessors() method could be
>> used to create just "getters" or just "setters" for particular fields,
>> although I realise this can be worked around by removing the unwanted
>> methods afterwards.
>
> In other words, read-only fields.  There is a facility for that implemented
> already, but it didn't yet make it into the documentation, and it could use
> some more testing.  The generator object has a $lock() method that inserts a
> write-once type of method for one or more fields.  Example:
>
>> fg <- setRefClass("foo", list(bar = "numeric", flag = "character"),
> +             methods = list(
> +             addToBar = function(incr) {
> +                 b = bar + incr
> +                 bar <<- b
> +                 b
> +             }
> +             ))
>> fg$lock("bar")
>> ff = new("foo", bar = 1.5)
>> ff$bar <- 2
> Error in function (value)  : Field "bar" is read-only
>
> A revision will document this soon.
>
> (And no, the workaround is not to remove methods.  To customize access to a
> field, the technique is to write an accessor function for the field that, in
> this case, throws an error if it gets an argument.  See the documentation
> for the fields argument.  The convention here and the underlying mechanism
> are taken from active bindings for environments.)

OK, yes - I see. This is clearly much less superficial than removing
the setter method for a field which can be directly set anyway. I'll
have to try out field accessor functions and get a feel for the
semantics.

Many thanks for the rapid and very helpful response.

Regards,
Jon



Re: [Rd] Reference classes

2010-10-26 Thread Jon Clayden
On 23 October 2010 00:52, Jon Clayden  wrote:
> On 22 October 2010 18:55, John Chambers  wrote:
>
>>> As a suggestion, it would be nice if the accessors() method could be
>>> used to create just "getters" or just "setters" for particular fields,
>>> although I realise this can be worked around by removing the unwanted
>>> methods afterwards.
>>
>> In other words, read-only fields.  There is a facility for that implemented
>> already, but it didn't yet make it into the documentation, and it could use
>> some more testing.  The generator object has a $lock() method that inserts a
>> write-once type of method for one or more fields.  Example:
>>
>>> fg <- setRefClass("foo", list(bar = "numeric", flag = "character"),
>> +             methods = list(
>> +             addToBar = function(incr) {
>> +                 b = bar + incr
>> +                 bar <<- b
>> +                 b
>> +             }
>> +             ))
>>> fg$lock("bar")
>>> ff = new("foo", bar = 1.5)
>>> ff$bar <- 2
>> Error in function (value)  : Field "bar" is read-only
>>
>> A revision will document this soon.
>>
>> (And no, the workaround is not to remove methods.  To customize access to a
>> field, the technique is to write an accessor function for the field that, in
>> this case, throws an error if it gets an argument.  See the documentation
>> for the fields argument.  The convention here and the underlying mechanism
>> are taken from active bindings for environments.)
>
> OK, yes - I see. This is clearly much less superficial than removing
> the setter method for a field which can be directly set anyway. I'll
> have to try out field accessor functions and get a feel for the
> semantics.

Unfortunately, I'm having difficulty working out the accessor function
approach. I've looked in the Rcpp package for examples, but it doesn't
seem to use this feature. If I define

Foo <- setRefClass("Foo", fields=list(bar=function (newBar) {
if (missing(newBar)) bar
else stop("bar is read-only") }),
  methods=list(barExists=function ()
print(exists("bar"))))

then I can't access the value of "bar" due to infinite recursion.
Using ".self$bar" in the accessor produces the same effect.

> f <- Foo$new()
> f$barExists()
[1] TRUE
> f$bar
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
> f$bar()
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?

I can guess why this is happening (accessing "bar" within the accessor
calls itself), but how can I get at the value of "bar" within the
accessor without this occurring?

The other problem is that I can't even set a value at the time of
creation of the object, viz.

> f <- Foo$new(bar=2)
Error in function (newBar)  : bar is read-only

Is there a way to test whether "bar" has already been set in the
accessor, so that I can allow it to be set once? (I know lock() allows
this, but it would be useful to be able to replicate the effect using
accessors, so that it can be generalised further where needed.)
Clearly, exists("bar") doesn't do this, as seen above -- presumably
because it sees the method rather than the field, or there is some
default value.

Thanks in advance,
Jon



Re: [Rd] Reference Classes: Generalizing Reference Class Generator objects?

2010-10-28 Thread Jon Clayden
Hi Daniel,

I think you want to define an "initialize" method, as in

TestClass <- setRefClass ("TestClass",
   fields = list (text = "character"),
   methods = list (
   initialize = function (text) { object <-
initFields(text=paste(text,"\n")) },
   print = function ()  { cat(text) } )
)

This seems to work as you intend:

> x <- TestClass$new("test")
> x$print()
test


All the best,
Jon

On 28 October 2010 15:13, Daniel Lee  wrote:
> Is it possible to override the $new(...) in the reference class generator? I
> have tried adding a "new" method to the methods of the class, but that is
> obviously not correct. I have also tried adding it to the class generator,
> but the class generator still uses the default constructor.
>
> As a simple example, this is the current interface:
> TestClass <- setRefClass ("TestClass",
>        fields = list (text = "character"),
>        methods = list (
>                print = function ()  {cat(text)})
> )
> test <- TestClass$new (text="Hello World")
> test$print()
>
> I would like to override $new(...) to be something like (add a "\n" to the
> end of the input, no need to specify input fields):
> TestClass$methods (new = function (text) {
>            text <- paste (text, "\n")
>            methods:::new (def, text=text)
>        })
>
> The constructor would then be:
> test <- TestClass$new ("Hello World")
>
> This is a subtle, but useful change. I have also tried adding to TestClass a
> method $newInstance(text), but that was not successful. If this is not
> possible, could we consider augmenting the Reference Class interface to
> include constructors?
>



Re: [Rd] Reference Classes: Generalizing Reference Class Generator objects?

2010-10-28 Thread Jon Clayden
Sorry - you don't need to assign the value of initFields(). I was
going to do it in two lines but then realised one was enough... :)

TestClass <- setRefClass ("TestClass",
   fields = list (text = "character"),
   methods = list (
   initialize = function (text) {
initFields(text=paste(text,"\n")) },
   print = function ()  { cat(text) } )
)

All the best,
Jon


On 28 October 2010 15:13, Daniel Lee  wrote:
> Is it possible to override the $new(...) in the reference class generator? I
> have tried adding a "new" method to the methods of the class, but that is
> obviously not correct. I have also tried adding it to the class generator,
> but the class generator still uses the default constructor.
>
> As a simple example, this is the current interface:
> TestClass <- setRefClass ("TestClass",
>        fields = list (text = "character"),
>        methods = list (
>                print = function ()  {cat(text)})
> )
> test <- TestClass$new (text="Hello World")
> test$print()
>
> I would like to override $new(...) to be something like (add a "\n" to the
> end of the input, no need to specify input fields):
> TestClass$methods (new = function (text) {
>            text <- paste (text, "\n")
>            methods:::new (def, text=text)
>        })
>
> The constructor would then be:
> test <- TestClass$new ("Hello World")
>
> This is a subtle, but useful change. I have also tried adding to TestClass a
> method $newInstance(text), but that was not successful. If this is not
> possible, could we consider augmenting the Reference Class interface to
> include constructors?
>



Re: [Rd] Reference Classes: Generalizing Reference Class Generator objects?

2010-10-28 Thread Jon Clayden
?ReferenceClasses says "Reference methods can not themselves be
generic functions; if you want additional function-based method
dispatch, write a separate generic function and call that from the
method". So I think you'd need to take that approach in your
"initialize" method.
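
A sketch of what I mean (class, generic, and method names are just made up for illustration): an ordinary S4 generic does the argument-based dispatch, and the reference class's "initialize" method simply calls it.

```r
library(methods)

# an S4 generic provides dispatch on the argument's class...
setGeneric("formatText", function(x) standardGeneric("formatText"))
setMethod("formatText", "character", function(x) paste(x, "\n"))
setMethod("formatText", "numeric",   function(x) paste(format(x), "\n"))

# ...and the reference class constructor delegates to it
TestClass <- setRefClass("TestClass",
    fields = list(text = "character"),
    methods = list(
        initialize = function(x = "") { initFields(text = formatText(x)) },
        print = function() cat(text)
    )
)
```

So `TestClass$new("Hello World")` and `TestClass$new(3.5)` both work, each going through the appropriate `formatText` method.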

Hope this helps,
Jon


On 28 October 2010 18:25, Daniel Lee  wrote:
> Thank you. Your example really clarifies what the $initialize(...) function
> is supposed to do.
>
> Do you know if there is a straightforward way to dispatch the $new(...)
> method based on the signature of the arguments? I am thinking along the
> lines of S4 methods with valid signatures.
>
> Thanks again for the example.
>
>
> On 10/28/2010 12:12 PM, Jon Clayden wrote:
>>
>> Sorry - you don't need to assign the value of initFields(). I was
>> going to do it in two lines but then realised one was enough... :)
>>
>> TestClass<- setRefClass ("TestClass",
>>        fields = list (text = "character"),
>>        methods = list (
>>                initialize = function (text) {
>> initFields(text=paste(text,"\n")) },
>>                print = function ()  { cat(text) } )
>> )
>>
>> All the best,
>> Jon
>>
>>
>> On 28 October 2010 15:13, Daniel Lee  wrote:
>>>
>>> Is it possible to override the $new(...) in the reference class
>>> generator? I
>>> have tried adding a "new" method to the methods of the class, but that is
>>> obviously not correct. I have also tried adding it to the class
>>> generator,
>>> but the class generator still uses the default constructor.
>>>
>>> As a simple example, this is the current interface:
>>> TestClass<- setRefClass ("TestClass",
>>>        fields = list (text = "character"),
>>>        methods = list (
>>>                print = function ()  {cat(text)})
>>> )
>>> test<- TestClass$new (text="Hello World")
>>> test$print()
>>>
>>> I would like to override $new(...) to be something like (add a "\n" to
>>> the
>>> end of the input, no need to specify input fields):
>>> TestClass$methods (new = function (text) {
>>>            text<- paste (text, "\n")
>>>            methods:::new (def, text=text)
>>>        })
>>>
>>> The constructor would then be:
>>> test<- TestClass$new ("Hello World")
>>>
>>> This is a subtle, but useful change. I have also tried adding to
>>> TestClass a
>>> method $newInstance(text), but that was not successful. If this is not
>>> possible, could we consider augmenting the Reference Class interface to
>>> include constructors?
>>>
>



[Rd] Reference classes and ".requireCachedGenerics"

2011-02-15 Thread Jon Clayden
Dear all,

If I load a package which creates reference classes whilst another
such package is also loaded, I get a warning about masking of the
".requireCachedGenerics" variable. (FWIW, both packages are
lazy-loaded.) Googling this variable name turned up only one previous
discussion, which didn't immediately help, except to suggest that it
may be related to my defining an S3 method for one or more of the
classes. It also pointed me at bits of the R source, but it wasn't
obvious from that what this variable is for.

Aside from being a nuisance, I wonder if this is indicative of a
problem on R's side or on mine, so I'd be glad for any clarification.

This is R 2.12.1 on Mac OS X.6.6, though it still happens with the new
2.12.2 beta. Any feedback welcome.

Thanks,
Jon



[Rd] Ignoring .Rprofile when installing a package

2011-02-16 Thread Jon Clayden
Dear all,

Is there a way to force R CMD INSTALL to ignore ~/.Rprofile and
similar? I presume it sources these startup files for a reason, but
I've found that it can cause confusion or problems. In particular, my
~/.Rprofile loads a few packages which I very frequently use, but this
stops me from installing new versions of their dependencies; viz.

$ R CMD INSTALL tractor.base
* installing to library ‘/Library/Frameworks/R.framework/Resources/library’
* installing *source* package ‘tractor.base’ ...
** R
** data
** preparing package for lazy loading
Error: package ‘tractor.base’ is required by ‘tractor.opt’ so will not
be detached
* removing ‘/Library/Frameworks/R.framework/Resources/library/tractor.base’
* restoring previous
‘/Library/Frameworks/R.framework/Resources/library/tractor.base’

I've tried R --vanilla CMD INSTALL, but that seems to have no effect.
This is R 2.12.1 on Mac OS X.6.6. Any pointers appreciated.

All the best,
Jon



Re: [Rd] Ignoring .Rprofile when installing a package

2011-02-18 Thread Jon Clayden
I would also be interested in knowing what the rationale is for this.

Moreover, the "standard" (and documented) approach of setting
"options(defaultPackages=c(...))" in ~/.Rprofile is not ignored when
installing either. The environment variable approach may work, but it
requires some (educated) guesswork. Could R CMD INSTALL not ignore the
default packages option?
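
In the meantime, the workaround I'm trying (an untested sketch, using the environment-variable suggestion quoted below; the package name is just my earlier example) is to point R_PROFILE_USER at an empty file for the duration of the install:

```shell
# point R at an empty startup profile so ~/.Rprofile is skipped
EMPTY_PROFILE=$(mktemp)
R_PROFILE_USER="$EMPTY_PROFILE" R CMD INSTALL tractor.base
```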

Regards,
Jon


On 16 February 2011 17:25, Brian G. Peterson  wrote:
> On 02/16/2011 10:57 AM, Prof Brian Ripley wrote:
>>
>> The most obvious answer is not to do that. You have not used the
>> standard mechanism to to do that (which should work here as R CMD
>> INSTALL overrides that one). It's all in ?Startup (look for
>> R_DEFAULT_PACKAGES).
>
> Note that R CMD INSTALL is not mentioned at all here.
>
>> The simplest way to ignore ~/.Rprofile is to set R_PROFILE_USER to
>> something else.
>
>>> I've tried R --vanilla CMD INSTALL, but that seems to have no effect.
>>
>> As documented.
>
> Then let's try this from another angle...
>
> Is there a rationale why --vanilla or --no-environ or --no-site-file or
> --no-init-file are *NOT* supported by R CMD INSTALL ?  I don't see any
> reasoning for the inconsistency in the docs anywhere.
>
> If not, would R-core entertain a patch that would handle these options?
>
> This functionality is troublesome in production installations where we
> *want* our users to have specific packages and environment options set all
> the time, and I need to edit the Rprofile.site file every time I upgrade one
> of these 'production' packages.
>
> Regards,
>
>   - Brian
>



[Rd] Reading 64-bit integers

2011-03-29 Thread Jon Clayden
Dear all,

I see from some previous threads that support for 64-bit integers in R
may be an aim for future versions, but in the meantime I'm wondering
whether it is possible to read in integers of greater than 32 bits at
all. Judging from ?readBin, it should be possible to read 8-byte
integers to some degree, but it is clearly limited in practice by R's
internally 32-bit integer type:

> x <- as.raw(c(0,0,0,0,1,0,0,0))
> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
[1] 16777216
> x <- as.raw(c(0,0,0,1,0,0,0,0))
> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
[1] 0

For values that fit into 32 bits it works fine, but for larger values
it fails. (I'm a bit surprised by the zero - should the value not be
NA if it is out of range?) The value can be represented as a double,
though:

> 4294967296
[1] 4294967296

I wouldn't expect readBin() to return a double if an integer was
requested, but is there any way to get the correct value out of it? I
suppose one could read the bytes into a raw vector and then
reconstruct the number manually from that, but is there a more elegant
or built-in solution that I'm not aware of?
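
(Doubles can represent every integer up to 2^53 exactly, so returning a double would cover any 64-bit value I'm likely to meet in practice; a quick check of where precision runs out:)

```r
x <- 2^53
x - 1 != x    # TRUE: 2^53 - 1 is exactly representable as a double
x + 1 == x    # TRUE: 2^53 + 1 is not, so exactness ends at 2^53
```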

This is R 2.12.1 on Mac OS X.6.7 - .Machine$sizeof.long is 8.

Many thanks,
Jon



Re: [Rd] Reading 64-bit integers

2011-03-29 Thread Jon Clayden
Dear Simon,

Thank you for the response.

On 29 March 2011 15:06, Simon Urbanek  wrote:
>
> On Mar 29, 2011, at 8:46 AM, Jon Clayden wrote:
>
>> Dear all,
>>
>> I see from some previous threads that support for 64-bit integers in R
>> may be an aim for future versions, but in the meantime I'm wondering
>> whether it is possible to read in integers of greater than 32 bits at
>> all. Judging from ?readBin, it should be possible to read 8-byte
>> integers to some degree, but it is clearly limited in practice by R's
>> internally 32-bit integer type:
>>
>>> x <- as.raw(c(0,0,0,0,1,0,0,0))
>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>> [1] 16777216
>>> x <- as.raw(c(0,0,0,1,0,0,0,0))
>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>> [1] 0
>>
>> For values that fit into 32 bits it works fine, but for larger values
>> it fails. (I'm a bit surprised by the zero - should the value not be
>> NA if it is out of range?
>
> No, it's not out of range - int is only 4 bytes so only 4 first bytes 
> (respecting endianness order, hence LSB) are used.

The fact remains that I ask for the value of an 8-byte integer and
don't get it. Pretending that it's really only four bytes because of
the limits of R's integer type isn't all that helpful. Perhaps a
warning should be put out if the cast will affect the value of the
result? It looks like the relevant lines in src/main/connections.c are
3689-3697 in the current alpha:

#if SIZEOF_LONG == 8
case sizeof(long):
INTEGER(ans)[i] = (int)*((long *)buf);
break;
#elif SIZEOF_LONG_LONG == 8
case sizeof(_lli_t):
INTEGER(ans)[i] = (int)*((_lli_t *)buf);
break;
#endif

>> ) The value can be represented as a double,
>> though:
>>
>>> 4294967296
>> [1] 4294967296
>>
>> I wouldn't expect readBin() to return a double if an integer was
>> requested, but is there any way to get the correct value out of it?
>
> Trivially (for your unsigned big-endian case):
>
> y <- readBin(x, "integer", n=length(x)/4L, endian="big")
> y <- ifelse(y < 0, 2^32 + y, y)
> i <- seq(1,length(y),2)
> y <- y[i] * 2^32 + y[i + 1L]

Thanks for the code, but I'm not sure I would call that trivial,
especially if one needs to cater for little endian and signed cases as
well! This is what I meant by reconstructing the number manually...
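
(For the archives, here is my attempt at generalising the idea into a little helper. It handles the unsigned case only, for either endianness, and values above 2^53 will silently lose precision in the double result.)

```r
# read unsigned 64-bit integers from a raw vector, returning doubles
read_uint64 <- function(raw, endian = c("big", "little")) {
  endian <- match.arg(endian)
  y <- readBin(raw, "integer", n = length(raw) / 4L, endian = endian)
  y <- ifelse(y < 0, 2^32 + y, y)      # undo signed 32-bit wraparound
  i <- seq(1L, length(y), by = 2L)     # pairs of 32-bit halves
  if (endian == "big") y[i] * 2^32 + y[i + 1L]
  else                 y[i + 1L] * 2^32 + y[i]
}
```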

All the best,
Jon



Re: [Rd] Reading 64-bit integers

2011-03-29 Thread Jon Clayden
Dear Simon,

On 29 March 2011 22:40, Simon Urbanek  wrote:
> Jon,
>
> On Mar 29, 2011, at 1:33 PM, Jon Clayden wrote:
>
>> Dear Simon,
>>
>> Thank you for the response.
>>
>> On 29 March 2011 15:06, Simon Urbanek  wrote:
>>>
>>> On Mar 29, 2011, at 8:46 AM, Jon Clayden wrote:
>>>
>>>> Dear all,
>>>>
>>>> I see from some previous threads that support for 64-bit integers in R
>>>> may be an aim for future versions, but in the meantime I'm wondering
>>>> whether it is possible to read in integers of greater than 32 bits at
>>>> all. Judging from ?readBin, it should be possible to read 8-byte
>>>> integers to some degree, but it is clearly limited in practice by R's
>>>> internally 32-bit integer type:
>>>>
>>>>> x <- as.raw(c(0,0,0,0,1,0,0,0))
>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>> [1] 16777216
>>>>> x <- as.raw(c(0,0,0,1,0,0,0,0))
>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>> [1] 0
>>>>
>>>> For values that fit into 32 bits it works fine, but for larger values
>>>> it fails. (I'm a bit surprised by the zero - should the value not be
>>>> NA if it is out of range?
>>>
>>> No, it's not out of range - int is only 4 bytes so only 4 first bytes 
>>> (respecting endianness order, hence LSB) are used.
>>
>> The fact remains that I ask for the value of an 8-byte integer and
>> don't get it.
>
> I think you're misinterpreting the documentation:
>
>     If ‘size’ is specified and not the natural size of the object,
>     each element of the vector is coerced to an appropriate type
>     before being written or as it is read.
>
> The "integer" object type is defined as signed 32-bit in R, so if you ask for 
> "8 bytes into object type integer", you get a coercion into that object type 
> -- 32-bit signed integer -- as documented. I think the issue may come from 
> the confusion of the object type "integer" with general "integer number" in 
> mathematical sense that has no representation restrictions. (FWIW in C the 
> "integer" type is "int" and it is 32-bit on all modern OSes regardless of 
> platform - that's where the limitation comes from, it's not something R has 
> made up).

OK, but it still seems like there is a case for raising a warning. As
it is there is no way to tell when reading an 8-byte integer from a
file whether its value is really 0, or if it merely has 0 in its
least-significant 4 bytes. If 99% of such stored numbers are below
2^31, one is going to need some extra logic to catch the other 1%
where you (silently) get the wrong value. In essence, unless you're
certain that you will never come across a number that actually uses
the upper 4 bytes, you will always have to read it as two 4-byte
numbers and check that the high-order one (which is endianness
dependent, of course) is zero. A C-level sanity check seems more
efficient and more helpful to me.

>> Pretending that it's really only four bytes because of
>> the limits of R's integer type isn't all that helpful. Perhaps a
>> warning should be put out if the cast will affect the value of the
>> result? It looks like the relevant lines in src/main/connections.c are
>> 3689-3697 in the current alpha:
>>
>> #if SIZEOF_LONG == 8
>>                   case sizeof(long):
>>                       INTEGER(ans)[i] = (int)*((long *)buf);
>>                       break;
>> #elif SIZEOF_LONG_LONG == 8
>>                   case sizeof(_lli_t):
>>                       INTEGER(ans)[i] = (int)*((_lli_t *)buf);
>>                       break;
>> #endif
>>
>>>> ) The value can be represented as a double,
>>>> though:
>>>>
>>>>> 4294967296
>>>> [1] 4294967296
>>>>
>>>> I wouldn't expect readBin() to return a double if an integer was
>>>> requested, but is there any way to get the correct value out of it?
>>>
>>> Trivially (for your unsigned big-endian case):
>>>
>>> y <- readBin(x, "integer", n=length(x)/4L, endian="big")
>>> y <- ifelse(y < 0, 2^32 + y, y)
>>> i <- seq(1,length(y),2)
>>> y <- y[i] * 2^32 + y[i + 1L]
>>
>> Thanks for the code, but I'm not sure I would call that trivial,
>> especially if one needs to cater for little endian and signed cases as
>> well!
>
> I was saying for your case and it's trivial as in read as integers, convert 
> to double precision and add.
>
>
>> This is what I meant by reconstructing the number manually...
>>
>
> You didn't say so - you were talking about reconstructing it from a raw 
> vector which seems a lot more painful since you can't compute with enough 
> precision on raw vectors.

True - I should have been more specific. Sorry.

Jon



Re: [Rd] Reading 64-bit integers

2011-03-30 Thread Jon Clayden
On 30 March 2011 02:49, Simon Urbanek  wrote:
>
> On Mar 29, 2011, at 8:47 PM, Duncan Murdoch wrote:
>
>> On 29/03/2011 7:01 PM, Jon Clayden wrote:
>>> Dear Simon,
>>>
>>> On 29 March 2011 22:40, Simon Urbanek  wrote:
>>>> Jon,
>>>>
>>>> On Mar 29, 2011, at 1:33 PM, Jon Clayden wrote:
>>>>
>>>>> Dear Simon,
>>>>>
>>>>> Thank you for the response.
>>>>>
>>>>> On 29 March 2011 15:06, Simon Urbanek  wrote:
>>>>>>
>>>>>> On Mar 29, 2011, at 8:46 AM, Jon Clayden wrote:
>>>>>>
>>>>>>> Dear all,
>>>>>>>
>>>>>>> I see from some previous threads that support for 64-bit integers in R
>>>>>>> may be an aim for future versions, but in the meantime I'm wondering
>>>>>>> whether it is possible to read in integers of greater than 32 bits at
>>>>>>> all. Judging from ?readBin, it should be possible to read 8-byte
>>>>>>> integers to some degree, but it is clearly limited in practice by R's
>>>>>>> internally 32-bit integer type:
>>>>>>>
>>>>>>>> x<- as.raw(c(0,0,0,0,1,0,0,0))
>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>> [1] 16777216
>>>>>>>> x<- as.raw(c(0,0,0,1,0,0,0,0))
>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>> [1] 0
>>>>>>>
>>>>>>> For values that fit into 32 bits it works fine, but for larger values
>>>>>>> it fails. (I'm a bit surprised by the zero - should the value not be
>>>>>>> NA if it is out of range?
>>>>>>
>>>>>> No, it's not out of range - int is only 4 bytes so only 4 first bytes 
>>>>>> (respecting endianness order, hence LSB) are used.
>>>>>
>>>>> The fact remains that I ask for the value of an 8-byte integer and
>>>>> don't get it.
>>>>
>>>> I think you're misinterpreting the documentation:
>>>>
>>>>     If ‘size’ is specified and not the natural size of the object,
>>>>     each element of the vector is coerced to an appropriate type
>>>>     before being written or as it is read.
>>>>
>>>> The "integer" object type is defined as signed 32-bit in R, so if you ask 
>>>> for "8 bytes into object type integer", you get a coercion into that 
>>>> object type -- 32-bit signed integer -- as documented. I think the issue 
>>>> may come from the confusion of the object type "integer" with general 
>>>> "integer number" in mathematical sense that has no representation 
>>>> restrictions. (FWIW in C the "integer" type is "int" and it is 32-bit on 
>>>> all modern OSes regardless of platform - that's where the limitation comes 
>>>> from, it's not something R has made up).
>>>
>>> OK, but it still seems like there is a case for raising a warning. As
>>> it is there is no way to tell when reading an 8-byte integer from a
>>> file whether its value is really 0, or if it merely has 0 in its
>>> least-significant 4 bytes. If 99% of such stored numbers are below
>>> 2^31, one is going to need some extra logic to catch the other 1%
>>> where you (silently) get the wrong value. In essence, unless you're
>>> certain that you will never come across a number that actually uses
>>> the upper 4 bytes, you will always have to read it as two 4-byte
>>> numbers and check that the high-order one (which is endianness
>>> dependent, of course) is zero. A C-level sanity check seems more
>>> efficient and more helpful to me.
>>
>> Seems to me that the S-PLUS solution (output="double") would be a lot more 
>> useful.  I'd commit that if you write it; I don't think I'd commit the 
>> warning.
>>
>
> I was going to write something similar (idea = good, patch welcome ;)). My 
> only worry is that the "output" argument is a bit misleading in that one 
> could expect to use any combination of "input"/"output" which may be a 
> maintenance nightmare. If I understand it correctly it's only a special case 
> for integer input. I don't have S+ so can't say how they deal with that.

I don't have S+ either, but I agree that this is a better solution -
although, I would guess, more involved to implement. Depending on how
important compatibility with S+ is, I guess a more specific, logical,
"convert large integers to double" option would be clearer than
"output". I'm happy to try to draft a patch, but it may be a little
while before I have some time.
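
[Not part of the original exchange: pending such a patch, the two-4-byte-read
workaround described earlier in the thread can be sketched as below. The helper
name read_uint64_be is hypothetical; the result is exact only for values below
2^53, the integer precision limit of a double.]

```r
read_uint64_be <- function(x) {
  # readBin() can only produce signed 32-bit values for size = 4
  # (signed = FALSE is allowed only for sizes 1 and 2), so read the
  # two halves as signed integers and undo the sign wrap manually
  halves <- readBin(x, "integer", n = 2, size = 4, endian = "big")
  halves <- ifelse(halves < 0, halves + 2^32, halves)
  halves[1] * 2^32 + halves[2]  # high-order word comes first in big-endian data
}

x <- as.raw(c(0, 0, 0, 1, 0, 0, 0, 0))
read_uint64_be(x)  # 4294967296, where a single readBin() call returned 0
```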

All the best,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Reading 64-bit integers

2011-03-30 Thread Jon Clayden
Draft patch attached. I haven't modified internal code before, so
there may be a mistake in how I handle the mechanics, but hopefully
this is a useful starting point. At any rate, the base package tests
still work and it seems to function as intended:

> x <- as.raw(c(0,0,0,1,0,0,0,0))
> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
[1] 0
> (readBin(x,"integer",n=1,size=8,signed=F,endian="big",double.out=T))
[1] 4294967296
> storage.mode(readBin(x,"integer",n=1,size=8,signed=F,endian="big",double.out=T))
[1] "double"

The "double.out" argument is ignored unless "what" is integer. As far
as I can tell there is no definition of unsigned long long akin to the
one for long long (at the top of connections.c), so I have not handled
the unsigned case for that type.

The diff is against the current beta, but I can provide a SVN diff
against the trunk if that is preferable.

All the best,
Jon


On 30 March 2011 02:49, Simon Urbanek  wrote:
>
> On Mar 29, 2011, at 8:47 PM, Duncan Murdoch wrote:
>
>> On 29/03/2011 7:01 PM, Jon Clayden wrote:
>>> Dear Simon,
>>>
>>> On 29 March 2011 22:40, Simon Urbanek  wrote:
>>>> Jon,
>>>>
>>>> On Mar 29, 2011, at 1:33 PM, Jon Clayden wrote:
>>>>
>>>>> Dear Simon,
>>>>>
>>>>> Thank you for the response.
>>>>>
>>>>> On 29 March 2011 15:06, Simon Urbanek  wrote:
>>>>>>
>>>>>> On Mar 29, 2011, at 8:46 AM, Jon Clayden wrote:
>>>>>>
>>>>>>> Dear all,
>>>>>>>
>>>>>>> I see from some previous threads that support for 64-bit integers in R
>>>>>>> may be an aim for future versions, but in the meantime I'm wondering
>>>>>>> whether it is possible to read in integers of greater than 32 bits at
>>>>>>> all. Judging from ?readBin, it should be possible to read 8-byte
>>>>>>> integers to some degree, but it is clearly limited in practice by R's
>>>>>>> internally 32-bit integer type:
>>>>>>>
>>>>>>>> x<- as.raw(c(0,0,0,0,1,0,0,0))
>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>> [1] 16777216
>>>>>>>> x<- as.raw(c(0,0,0,1,0,0,0,0))
>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>> [1] 0
>>>>>>>
>>>>>>> For values that fit into 32 bits it works fine, but for larger values
>>>>>>> it fails. (I'm a bit surprised by the zero - should the value not be
>>>>>>> NA if it is out of range?)
>>>>>>
>>>>>> No, it's not out of range - int is only 4 bytes so only 4 first bytes 
>>>>>> (respecting endianness order, hence LSB) are used.
>>>>>
>>>>> The fact remains that I ask for the value of an 8-byte integer and
>>>>> don't get it.
>>>>
>>>> I think you're misinterpreting the documentation:
>>>>
>>>>     If ‘size’ is specified and not the natural size of the object,
>>>>     each element of the vector is coerced to an appropriate type
>>>>     before being written or as it is read.
>>>>
>>>> The "integer" object type is defined as signed 32-bit in R, so if you ask 
>>>> for "8 bytes into object type integer", you get a coercion into that 
>>>> object type -- 32-bit signed integer -- as documented. I think the issue 
>>>> may come from the confusion of the object type "integer" with general 
>>>> "integer number" in mathematical sense that has no representation 
>>>> restrictions. (FWIW in C the "integer" type is "int" and it is 32-bit on 
>>>> all modern OSes regardless of platform - that's where the limitation comes 
>>>> from, it's not something R has made up).
>>>
>>> OK, but it still seems like there is a case for raising a warning. As
>>> it is there is no way to tell when reading an 8-byte integer from a
>>> file whether its value is really 0, or if it merely has 0 in its
>>> least-significant 4 bytes. If 99% of such stored numbers are below
>>> 2^31, one is going to need some extra logic to catch the other 1%
>>> where you (silently) get the wrong value.

Re: [Rd] Reading 64-bit integers

2011-04-13 Thread Jon Clayden
Simon (et al.),

I was just wondering if anything further came of this... I would be
willing to help put together an updated patch, if the semantics can be
decided upon.

All the best,
Jon


On 30 March 2011 19:22, Simon Urbanek  wrote:
> Bill,
>
> thanks. I like that idea of the output parameter better, especially if we 
> ever add different scalar vector types. Admittedly, what=integer() is the 
> most useful case. What I was worried about is things like what=double(), 
> output=integer() which could be legal, but are more conveniently dealt with 
> via as.integer(readBin()) instead.
> I won't have more time today, but I'll have a look tomorrow.
>
> Thanks,
> Simon
>
>
> On Mar 30, 2011, at 1:38 PM, William Dunlap wrote:
>
>>
>>> -Original Message-
>>> From: r-devel-boun...@r-project.org
>>> [mailto:r-devel-boun...@r-project.org] On Behalf Of Simon Urbanek
>>> Sent: Tuesday, March 29, 2011 6:49 PM
>>> To: Duncan Murdoch
>>> Cc: r-devel@r-project.org
>>> Subject: Re: [Rd] Reading 64-bit integers
>>>
>>>
>>> On Mar 29, 2011, at 8:47 PM, Duncan Murdoch wrote:
>>>
>>>> On 29/03/2011 7:01 PM, Jon Clayden wrote:
>>>>> Dear Simon,
>>>>>
>>>>> On 29 March 2011 22:40, Simon
>>> Urbanek  wrote:
>>>>>> Jon,
>>>>>>
>>>>>> On Mar 29, 2011, at 1:33 PM, Jon Clayden wrote:
>>>>>>
>>>>>>> Dear Simon,
>>>>>>>
>>>>>>> Thank you for the response.
>>>>>>>
>>>>>>> On 29 March 2011 15:06, Simon
>>> Urbanek  wrote:
>>>>>>>>
>>>>>>>> On Mar 29, 2011, at 8:46 AM, Jon Clayden wrote:
>>>>>>>>
>>>>>>>>> Dear all,
>>>>>>>>>
>>>>>>>>> I see from some previous threads that support for
>>> 64-bit integers in R
>>>>>>>>> may be an aim for future versions, but in the meantime
>>> I'm wondering
>>>>>>>>> whether it is possible to read in integers of greater
>>> than 32 bits at
>>>>>>>>> all. Judging from ?readBin, it should be possible to
>>> read 8-byte
>>>>>>>>> integers to some degree, but it is clearly limited in
>>> practice by R's
>>>>>>>>> internally 32-bit integer type:
>>>>>>>>>
>>>>>>>>>> x<- as.raw(c(0,0,0,0,1,0,0,0))
>>>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>>>> [1] 16777216
>>>>>>>>>> x<- as.raw(c(0,0,0,1,0,0,0,0))
>>>>>>>>>> (readBin(x,"integer",n=1,size=8,signed=F,endian="big"))
>>>>>>>>> [1] 0
>>>>>>>>>
>>>>>>>>> For values that fit into 32 bits it works fine, but
>>> for larger values
>>>>>>>>> it fails. (I'm a bit surprised by the zero - should
>>> the value not be
>>>>>>>>> NA if it is out of range?)
>>>>>>>>
>>>>>>>> No, it's not out of range - int is only 4 bytes so only
>>> 4 first bytes (respecting endianness order, hence LSB) are used.
>>>>>>>
>>>>>>> The fact remains that I ask for the value of an 8-byte
>>> integer and
>>>>>>> don't get it.
>>>>>>
>>>>>> I think you're misinterpreting the documentation:
>>>>>>
>>>>>>    If 'size' is specified and not the natural size of the object,
>>>>>>    each element of the vector is coerced to an appropriate type
>>>>>>    before being written or as it is read.
>>>>>>
>>>>>> The "integer" object type is defined as signed 32-bit in
>>> R, so if you ask for "8 bytes into object type integer", you
>>> get a coercion into that object type -- 32-bit signed integer
>>> -- as documented. I think the issue may come from the
>>> confusion of the object type "integer" with general "integer
>>> number" in mathematical sense that has no representation
>>> restrictions. (FWIW in C the "integer" type is "int" and it
>>> is 32-bit on all modern OSes regardless of platform - that's where
>>> the limitation comes from, it's not something R has made up).

[Rd] General "nil" reference class object

2011-05-04 Thread Jon Clayden
Dear John and others,

I've been wondering about whether there's any way to indicate a "nil"
reference class object, which will represent "no value", and be tested
for, but not fail the internal type checking. NULL is the obvious
choice (or seems so to me), but can only be used if an explicit class
union is created:

> Foo <- setRefClass("Foo")
> Bar <- setRefClass("Bar", fields=list(foo="Foo"))
> Bar$new(foo=NULL)
Error in as(value, "Foo") :
  no method or default for coercing "NULL" to "Foo"
> setClassUnion("FooOrNull", c("Foo","NULL"))
[1] "FooOrNull"
> Bar <- setRefClass("Bar", fields=list(foo="FooOrNull"))
> Bar$new(foo=NULL)
An object of class "Bar"

> is.null(Bar$new(foo=NULL)$foo)
[1] TRUE

Other languages allow things like "MyClass object = null", and it
seems to me that it would be helpful to have a value which will always
give TRUE for "is(object, <class>)", but will
specifically indicate a nil reference. One possible ad-hoc solution is
to define the "empty" object of a base class to be "nil" (see below),
but it seems like it would be better to have a value specifically
designed for this purpose.

> nilObject <- Foo$new()
> is.nilObject <- function (x) identical(x,nilObject)
> Bar <- setRefClass("Bar", fields=list(foo="Foo"), methods=list(
+ initialize=function (foo=nilObject) { initFields(foo=foo) }))
> is.nilObject(Bar$new()$foo)
[1] TRUE

Is there already something like this that I'm not aware of? If not,
would it be possible and generally desirable to create it?

All the best,
Jon


--
Jonathan D Clayden, PhD
Lecturer in Neuroimaging and Biophysics
Imaging and Biophysics Unit
UCL Institute of Child Health
30 Guilford Street
LONDON  WC1N 1EH
United Kingdom

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] initFields() method no longer coerces arguments in R-devel

2011-08-05 Thread Jon Clayden
Dear all,

I've just had a package update bounced from CRAN because of a recent
change in R-devel which seems to affect the behaviour of the
initFields() reference class method. (The change must be very recent
because I tested the package on a week-old build of R-devel.) It seems
that the method no longer coerces its arguments to the expected type
of each field. For a simple example:

> Foo <- setRefClass("Foo", fields=list(number="integer"), 
> methods=list(initialize=function (number = NULL) initFields(number=number)))
> Foo$new()
Error in function (value)  :
  invalid replacement for field ‘number’, should be from class
“integer” or a subclass (was class “NULL”)

(This used to work, with "number" being set to "integer(0)"). In fact
it is now extremely strict, not even allowing a double literal which
is equal to an integer:

> Foo$new(number=1)
Error in function (value)  :
  invalid replacement for field ‘number’, should be from class
“integer” or a subclass (was class “numeric”)

I don't see anything about this in the NEWS, so I was wondering if I
could get clarification on whether this is now the intended behaviour,
before I further modify the package. I must say that this will be a
bit of a pain to "correct"...
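
[Not part of the original message: under the stricter semantics described
above, one workaround is to default the field to an empty vector of the
declared class and coerce explicitly in the initialize method. A sketch:]

```r
library(methods)  # reference classes live in the methods package

Foo <- setRefClass("Foo", fields = list(number = "integer"),
                   methods = list(
                     # default to integer(0) rather than NULL, and coerce
                     # the argument so a double literal like 1 is accepted
                     initialize = function (number = integer(0))
                       initFields(number = as.integer(number))))

Foo$new()$number            # integer(0)
Foo$new(number = 1)$number  # 1L, coerced from double
```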

All the best,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] initFields() method no longer coerces arguments in R-devel

2011-08-05 Thread Jon Clayden
OK, apologies - on both fronts I obviously searched on the wrong
terms. Sorry to waste your time.

Jon


On 5 August 2011 18:22, John Chambers  wrote:
> There is also an item in the NEWS file:
>
>    Field assignments in reference classes are now consistent with slots in
> S4 classes: the assigned value must come from the declared class (if any)
> for the field or from a subclass.
>
> On 8/5/11 7:24 AM, Simon Urbanek wrote:
>>
>> It's worth actually reading the list you post to ...
>>
>> http://r.789695.n4.nabble.com/Reference-classes-assignments-to-fields-td3708168.html
>>
>>
>> On Aug 5, 2011, at 6:41 AM, Jon Clayden wrote:
>>
>>> Dear all,
>>>
>>> I've just had a package update bounced from CRAN because of a recent
>>> change in R-devel which seems to affect the behaviour of the
>>> initFields() reference class method. (The change must be very recent
>>> because I tested the package on a week-old build of R-devel.) It seems
>>> that the method no longer coerces its arguments to the expected type
>>> of each field. For a simple example:
>>>
>>>> Foo<- setRefClass("Foo", fields=list(number="integer"),
>>>> methods=list(initialize=function (number = NULL) 
>>>> initFields(number=number)))
>>>> Foo$new()
>>>
>>> Error in function (value)  :
>>>  invalid replacement for field ‘number’, should be from class
>>> “integer” or a subclass (was class “NULL”)
>>>
>>> (This used to work, with "number" being set to "integer(0)"). In fact
>>> it is now extremely strict, not even allowing a double literal which
>>> is equal to an integer:
>>>
>>>> Foo$new(number=1)
>>>
>>> Error in function (value)  :
>>>  invalid replacement for field ‘number’, should be from class
>>> “integer” or a subclass (was class “numeric”)
>>>
>>> I don't see anything about this in the NEWS, so I was wondering if I
>>> could get clarification on whether this is now the intended behaviour,
>>> before I further modify the package. I must say that this will be a
>>> bit of a pain to "correct"...
>>>
>>> All the best,
>>> Jon
>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>
>>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Issue with seek() on gzipped connections in R-devel

2011-09-23 Thread Jon Clayden
Dear all,

In R-devel (2011-09-23 r57050), I'm running into a serious problem
with seek()ing on connections opened with gzfile(). A warning is
generated and the file position does not seek to the requested
location. It doesn't seem to occur all the time - I tried to create a
small example file to illustrate it, but the problem didn't occur.
However, it can be seen with a file I use for testing my packages,
which is available through the URL
<https://github.com/jonclayden/tractor/blob/master/tests/data/nifti/maskedb0_lia.nii.gz?raw=true>:

> con <- gzfile("~/Downloads/maskedb0_lia.nii.gz","rb")
> seek(con, 352)
[1] 0
Warning message:
In seek.connection(con, 352) :
  seek on a gzfile connection returned an internal error
> seek(con, NA)
[1] 190

The same commands with the same file work as expected in R 2.13.1, and
have worked over many previous versions of R.

> con <- gzfile("~/Downloads/maskedb0_lia.nii.gz","rb")
> seek(con, 352)
[1] 0
> seek(con, NA)
[1] 352

My sessionInfo() output is:

R Under development (unstable) (2011-09-23 r57050)
Platform: x86_64-apple-darwin11.1.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_US.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] splines   stats graphics  grDevices utils datasets  methods
[8] base

other attached packages:
[1] tractor.nt_2.0.1  tractor.session_2.0.3 tractor.utils_2.0.0
[4] tractor.base_2.0.3reportr_0.2.0

This seems to occur whether or not R is compiled with
"--with-system-zlib". I see some zlib-related changes mentioned in the
NEWS, but I don't see any indication that this is expected. Could
anyone shed any light on it, please?

Thanks and all the best,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Issue with seek() on gzipped connections in R-devel

2011-09-23 Thread Jon Clayden
Thanks for the replies. I take the point, although it does seem like a
substantial regression (on non-Windows platforms).

I like to keep the external dependencies of my packages minimal, but I
will look into the mmap package - thanks, Jeff, for the tip.

Aside from that, though, what is the alternative to using seek? If I
want to read something at (original, uncompressed) byte offset 352, as
here, do I have to read and discard everything that comes before it
first? That seems inelegant at best...
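
[Not part of the original message: the read-and-discard approach mentioned
above can be sketched as follows. skip_bytes is a hypothetical helper; a
rawConnection stands in for the gzfile() connection so the example is
self-contained.]

```r
skip_bytes <- function(con, n, chunk = 65536L) {
  # read and discard raw bytes in chunks until n have been consumed
  while (n > 0) {
    got <- length(readBin(con, "raw", n = min(n, chunk)))
    if (got == 0L) stop("connection ended before the requested offset")
    n <- n - got
  }
  invisible(NULL)
}

con <- rawConnection(as.raw(0:255))  # stand-in for gzfile("file.nii.gz", "rb")
skip_bytes(con, 100)
readBin(con, "integer", n = 1, size = 1)  # 100: the byte at offset 100
close(con)
```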

Regards,
Jon


On 23 September 2011 16:54, Jeffrey Ryan  wrote:
> seek() in general is a bad idea IMO if you are writing cross-platform code.
>
> ?seek
>
> Warning:
>
>     Use of ‘seek’ on Windows is discouraged.  We have found so many
>     errors in the Windows implementation of file positioning that
>     users are advised to use it only at their own risk, and asked not
>     to waste the R developers' time with bug reports on Windows'
>     deficiencies.
>
> Aside from making me laugh, the above highlights the core reason to not use 
> IMO.
>
> For not zipped files, you can try the mmap package.  ?mmap and ?types
> are good starting points.  Allows for accessing binary data on disk
> with very simple R-like semantics, and is very fast.  Not as fast as a
> sequential read... but fast.  At present this is 'little endian' only
> though, but that describes most of the world today.
>
> Best,
> Jeff
>
> On Fri, Sep 23, 2011 at 8:58 AM, Jon Clayden  wrote:
>> Dear all,
>>
>> In R-devel (2011-09-23 r57050), I'm running into a serious problem
>> with seek()ing on connections opened with gzfile(). A warning is
>> generated and the file position does not seek to the requested
>> location. It doesn't seem to occur all the time - I tried to create a
>> small example file to illustrate it, but the problem didn't occur.
>> However, it can be seen with a file I use for testing my packages,
>> which is available through the URL
>> <https://github.com/jonclayden/tractor/blob/master/tests/data/nifti/maskedb0_lia.nii.gz?raw=true>:
>>
>>> con <- gzfile("~/Downloads/maskedb0_lia.nii.gz","rb")
>>> seek(con, 352)
>> [1] 0
>> Warning message:
>> In seek.connection(con, 352) :
>>  seek on a gzfile connection returned an internal error
>>> seek(con, NA)
>> [1] 190
>>
>> The same commands with the same file work as expected in R 2.13.1, and
>> have worked over many previous versions of R.
>>
>>> con <- gzfile("~/Downloads/maskedb0_lia.nii.gz","rb")
>>> seek(con, 352)
>> [1] 0
>>> seek(con, NA)
>> [1] 352
>>
>> My sessionInfo() output is:
>>
>> R Under development (unstable) (2011-09-23 r57050)
>> Platform: x86_64-apple-darwin11.1.0 (64-bit)
>>
>> locale:
>> [1] en_GB.UTF-8/en_US.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
>>
>> attached base packages:
>> [1] splines   stats     graphics  grDevices utils     datasets  methods
>> [8] base
>>
>> other attached packages:
>> [1] tractor.nt_2.0.1      tractor.session_2.0.3 tractor.utils_2.0.0
>> [4] tractor.base_2.0.3    reportr_0.2.0
>>
>> This seems to occur whether or not R is compiled with
>> "--with-system-zlib". I see some zlib-related changes mentioned in the
>> NEWS, but I don't see any indication that this is expected. Could
>> anyone shed any light on it, please?
>>
>> Thanks and all the best,
>> Jon
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>
>
> --
> Jeffrey Ryan
> jeffrey.r...@lemnica.com
>
> www.lemnica.com
> www.esotericR.com
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] all.equal: possible mismatch between behaviour and documentation

2015-07-28 Thread Jon Clayden
Dear all,

The documentation for `all.equal.numeric` says

Numerical comparisons for ‘scale = NULL’ (the default) are done by
first computing the mean absolute difference of the two numerical
vectors.  If this is smaller than ‘tolerance’ or not finite,
absolute differences are used, otherwise relative differences
scaled by the mean absolute difference.

But the actual behaviour of the function is to use relative
differences if the mean value of the first argument is greater than
`tolerance`:

all.equal(0.1, 0.102, tolerance=0.01)
# [1] "Mean relative difference: 0.02"

It seems to me that this example should produce `TRUE`, because
abs(0.1-0.102) < 0.01, but it does not, because abs(0.1) > 0.01. The
relevant section in the source seems to be

what <- if (is.null(scale)) {
xn <- mean(abs(target))
if (is.finite(xn) && xn > tolerance) {
xy <- xy/xn
"relative"
}
else "absolute"
}

I think `xy`, not `xn`, should be tested here.

The last line of the documentation, indicating that relative
differences are "scaled by the mean absolute difference" also seems
not to match the code, but in this aspect the code is surely right,
i.e., the relative difference is relative to the mean value, not the
mean difference.
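
[Not part of the original message: the relative/absolute distinction can be
seen directly by supplying scale explicitly, which bypasses the automatic
choice.]

```r
# Default scale = NULL: mean(|0.1|) = 0.1 > tolerance, so the comparison
# is relative: |0.1 - 0.102| / 0.1 = 0.02, which exceeds 0.01
all.equal(0.1, 0.102, tolerance = 0.01)
# [1] "Mean relative difference: 0.02"

# scale = 1 forces an absolute comparison: |0.1 - 0.102| = 0.002 < 0.01
all.equal(0.1, 0.102, tolerance = 0.01, scale = 1)
# [1] TRUE
```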

All the best,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] all.equal: possible mismatch between behaviour and documentation

2015-07-28 Thread Jon Clayden
Sorry; minor clarification. The actual test criterion in the example I
gave is of course abs((0.1-0.102)/0.1) < 0.01, not abs(0.1) < 0.01. In
any case, this does not match (my reading of) the docs, and the result
is not `TRUE`.

Regards,
Jon


On 28 July 2015 at 11:58, Jon Clayden  wrote:
> Dear all,
>
> The documentation for `all.equal.numeric` says
>
> Numerical comparisons for ‘scale = NULL’ (the default) are done by
> first computing the mean absolute difference of the two numerical
> vectors.  If this is smaller than ‘tolerance’ or not finite,
> absolute differences are used, otherwise relative differences
> scaled by the mean absolute difference.
>
> But the actual behaviour of the function is to use relative
> differences if the mean value of the first argument is greater than
> `tolerance`:
>
> all.equal(0.1, 0.102, tolerance=0.01)
> # [1] "Mean relative difference: 0.02"
>
> It seems to me that this example should produce `TRUE`, because
> abs(0.1-0.102) < 0.01, but it does not, because abs(0.1) > 0.01. The
> relevant section in the source seems to be
>
> what <- if (is.null(scale)) {
> xn <- mean(abs(target))
> if (is.finite(xn) && xn > tolerance) {
> xy <- xy/xn
> "relative"
> }
> else "absolute"
> }
>
> I think `xy`, not `xn`, should be tested here.
>
> The last line of the documentation, indicating that relative
> differences are "scaled by the mean absolute difference" also seems
> not to match the code, but in this aspect the code is surely right,
> i.e., the relative difference is relative to the mean value, not the
> mean difference.
>
> All the best,
> Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] all.equal: possible mismatch between behaviour and documentation

2015-07-30 Thread Jon Clayden
Dear Martin,

Thank you for following up.

I appreciate that this is entrenched behaviour and that changing the
documentation may be preferable to changing the code in practice, and
accordingly I filed this as a documentation bug earlier today
(#16493).

But I don't agree that the current behaviour makes sense. Firstly, the
case where the magnitude of `tolerance` is greater than that of the
test vectors must surely be pretty rare. Who wants to test whether 1
and 2 are equal with a tolerance of 10?

Secondly, absolute error is (IMHO) more intuitive, and since the docs
don't emphasise that the function prefers relative error, I would
think that many users, like me, would expect absolute error to be
used. (My assumption, which the docs do not coherently contradict, has
been that absolute error is used to decide whether or not to return
`TRUE`, but if the vectors are not considered equal then relative
error is used in the return string.)

Finally, if the decision is about numerical precision in the
comparison then comparing `xn` to `tolerance` doesn't seem sensible.
Maybe it should be something like `xn * tolerance >
.Machine$double.eps`, i.e., to check whether the test criterion under
relative error would be within machine precision? Note that that would
make

all.equal(0.3, 0.1+0.2, tolerance=1e-16)
# [1] "Mean relative difference: 1.850372e-16"

test TRUE (on my system), since 0.3-(0.1+0.2) is approximately
-5.6e-17 (i.e., less in magnitude than 1e-16), while 0.3*1e-16 is less
than .Machine$double.eps of 2.2e-16 (so absolute error would be
chosen).
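
[For concreteness, not part of the original message: the numbers behind this
example can be checked directly.]

```r
# The relative difference reported by all.equal() under the default scale:
abs(0.3 - (0.1 + 0.2)) / 0.3       # ~1.850372e-16, just above 1e-16

# The absolute difference is within the requested tolerance...
abs(0.3 - (0.1 + 0.2)) < 1e-16     # TRUE (the difference is ~5.55e-17)

# ...and the proposed criterion would select absolute comparison here:
0.3 * 1e-16 < .Machine$double.eps  # TRUE
```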

However, if the code will not be changed, I think the documentation
should (i) make clear that relative error is preferred where
appropriate; (ii) correct the line 2 mistake where it is stated that
the choice of relative or absolute error is determined by comparing
mean absolute difference to `tolerance`; and (iii) correct the final
line mistake where it is stated that relative errors are scaled by the
difference (which you have suggested alternatives for).

All the best,
Jon

On 30 July 2015 at 10:39, Martin Maechler  wrote:
> Dear Jon,
>
> thank you for raising the issue,
>
>>>>>> Jon Clayden 
>>>>>> on Tue, 28 Jul 2015 12:14:48 +0100 writes:
>
>> Sorry; minor clarification. The actual test criterion in the example I
>> gave is of course abs((0.1-0.102)/0.1) < 0.01, not abs(0.1) < 0.01. In
>> any case, this does not match (my reading of) the docs, and the result
>> is not `TRUE`.
>
>> Regards,
>> Jon
>
>> On 28 July 2015 at 11:58, Jon Clayden  wrote:
>> > Dear all,
>> >
>> > The documentation for `all.equal.numeric` says
>> >
>> > Numerical comparisons for ‘scale = NULL’ (the default) are done by
>> > first computing the mean absolute difference of the two numerical
>> > vectors.  If this is smaller than ‘tolerance’ or not finite,
>> > absolute differences are used, otherwise relative differences
>> > scaled by the mean absolute difference.
>> >
>> > But the actual behaviour of the function is to use relative
>> > differences if the mean value of the first argument is greater than
>> > `tolerance`:
>> >
>> > all.equal(0.1, 0.102, tolerance=0.01)
>> > # [1] "Mean relative difference: 0.02"
>> >
>> > It seems to me that this example should produce `TRUE`, because
>> > abs(0.1-0.102) < 0.01, but it does not, because abs(0.1) > 0.01.
>
> Irrespective of the documentation,
> the above example should continue to produce what it does now.
> These numbers are not close to zero (compared to tol), and so
> relative error should be used.
>
> The whole idea of all.equal.numeric() is to use  *relative* error/difference
> __unless__ that is not sensible anymore, namely when the
> denominator of the ratio which defines the relative error
> becomes too close to zero (and hence has to be seen as
> "unstable" / "unreliable").
>
> The exact behavior of all.equal.numeric() has __ I'm pretty sure, but
> can no longer easily prove __ been inherited from the original S
> implementation in most parts, and (if that's correct) has been
> in place for about 30 years  [ If not, it has "only" been in
> place about 17 years... ]
> notably the code below has been unchanged for a long time, and been in use
> in too many places to be changed now.
>
> So it is about the *documentation* only we should discuss changing.
>
>
>> > The relevant section in the source seems to be
>> >
>> > what <- if (is.null(scale)) {
>> > xn <- mean(abs(target))
>> > if (is.finite(

[Rd] Reading from an existing connection in compiled code

2016-02-19 Thread Jon Clayden
Dear all,

I'd like to be able to read from an arbitrary R connection (in the
sense of ?connections), which would be passed to an R function by the
user and then down into some C code via .Call.

The R API, in file R_ext/Connections.h, specifies a function,
R_ReadConnection, which takes a pointer to an Rconn struct as its
first argument, and does what I want. The struct itself is also
defined in that header, but I see no way of retrieving a struct of
that type, aside from getConnection (the C function), which is not
part of the API. As far as I can tell, the external pointer associated
with the connection also does not point to the struct directly.

So, could anyone please tell me whether there is a supported way to
convert a suitable SEXP to a pointer to the associated Rconn struct?

Thanks in advance.

Jon

(This question was also posted yesterday to StackOverflow, but no
definitive answer has been posted.)

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Cross-platform linking of a simple front-end

2013-07-04 Thread Jon Clayden
Dear all,

I have a simple front-end program which uses the APIs described in section
8 of "Writing R Extensions" to deviate from the standard R behaviour in
fairly minor ways. However, I'm having some difficulty getting it to link
reliably across different platforms.

R CMD LINK seemed like it would help, but I've had difficulty finding many
real-world examples online. I've tried

  R CMD LINK $(R CMD config CC) $(R CMD config --cppflags) $(R CMD config
--ldflags) -o ../bin/exec/tractor tractor.c

and this works on one of my test platforms (OS X.8.4, R 3.0.1), but not the
other (Ubuntu 12.04 LTS, R 2.14.1). In the latter case I get the error

  /usr/bin/ld: /tmp/ccmKf57E.o: undefined reference to symbol 'log10@@GLIBC_2.0'
  /usr/bin/ld: note: 'log10@@GLIBC_2.0' is defined in DSO
/lib/i386-linux-gnu/libm.so.6 so try adding it to the linker command line
  /lib/i386-linux-gnu/libm.so.6: could not read symbols: Invalid operation
  collect2: ld returned 1 exit status

I can correct this by adding "-lm" manually to the command, but I'm not
sure how portable that will itself be.

Could anyone advise on the best way to make this work portably, please? For
this application I'm not concerned about Windows compatibility -
portability across Unix-alikes is sufficient. The source code is at <
https://github.com/jonclayden/tractor/blob/master/src/tractor.c>, if that
is useful.

All the best,
Jon

[[alternative HTML version deleted]]

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Cross-platform linking of a simple front-end

2013-07-08 Thread Jon Clayden
Thank you both for the replies. The shared library aspect didn't seem to be
the problem, and while I looked into including $R_HOME/etc/Makevars in my
Makefile and using variables defined there, that seemed to create other
difficulties.

Instead I decided for the time being to avoid using mathematical functions
in my code, and thereby skirt the problem. R CMD LINK now builds the binary
successfully on both platforms.

Regards,
Jon



On 6 July 2013 06:52, Prof Brian Ripley  wrote:

> On 06/07/2013 03:19, Simon Urbanek wrote:
>
>> Jon,
>>
>> On Jul 4, 2013, at 10:52 AM, Jon Clayden wrote:
>>
>>  Dear all,
>>>
>>> I have a simple front-end program which uses the APIs described in
>>> section
>>> 8 of "Writing R Extensions" to deviate from the standard R behaviour in
>>> fairly minor ways. However, I'm having some difficulty getting it to link
>>> reliably across different platforms.
>>>
>>> R CMD LINK seemed like it would help, but I've had difficulty finding
>>> many
>>> real-world examples online. I've tried
>>>
>>>   R CMD LINK $(R CMD config CC) $(R CMD config --cppflags) $(R CMD config
>>> --ldflags) -o ../bin/exec/tractor tractor.c
>>>
>>> and this works on one of my test platforms (OS X.8.4, R 3.0.1), but not
>>> the
>>> other (Ubuntu 12.04 LTS, R 2.14.1). In the latter case I get the error
>>>
>>>   /usr/bin/ld: /tmp/ccmKf57E.o: undefined reference to symbol 'log10@@GLIBC_2.0'
>>>   /usr/bin/ld: note: 'log10@@GLIBC_2.0' is defined in DSO
>>> /lib/i386-linux-gnu/libm.so.6 so try adding it to the linker command line
>>>   /lib/i386-linux-gnu/libm.so.6: could not read symbols: Invalid
>>> operation
>>>   collect2: ld returned 1 exit status
>>>
>>> I can correct this by adding "-lm" manually to the command, but I'm not
>>> sure how portable that will itself be.
>>>
>>>
>> My guess would be that you did not use --enable-R-shlib when compiling R
>> on Ubuntu so you don't have a shared version of the R library to link
>> against (which is needed to resolve the dependencies). Could that be the
>> case?
>>
>
> I was able to reproduce this on Fedora: that is not the error if R was not
> built as a shared library.
>
> I would simply copy how R does it (for R.bin in src/main).  libtool (used
> by R CMD LINK) is not coming up with the same flags.  On my system R is not
> using -lm:
>
> gcc -std=gnu99 -Wl,--export-dynamic -fopenmp  -L/usr/local/lib64 -o R.bin
> Rmain.o  -L../../lib -lR -lRblas
>
>
> Adding -lm is not portable (some OSes do not have a separate libm and some
> always add it when linking via $(CC)), but there is a LIBM macro in
> etc/Makeconf which tells you if configure found one.
>
>
>
>
>  Cheers,
>> Simon
>>
>>
>>  Could anyone advise on the best way to make this work portably, please?
>>> For
>>> this application I'm not concerned about Windows compatibility -
>>> portability across Unix-alikes is sufficient. The source code is at <
>>> https://github.com/jonclayden/tractor/blob/master/src/tractor.c>,
>>> if that
>>> is useful.
>>>
>>> All the best,
>>> Jon
>>>
>>
>
>
> --
> Brian D. Ripley,  rip...@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK        Fax:  +44 1865 272595
>


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Determining the resolution of a screen device

2014-01-14 Thread Jon Clayden
Dear all,

I am trying to find a way to reliably and programmatically establish the
resolution (i.e. DPI or equivalent) of an on-screen device. It seemed to me
that

  dev.new(width=1, height=1)
  dpi <- dev.size("px")

would do the trick, but the result does not seem to be correct, at least on
OS X 10.9.1 using the "quartz" device. Specifically, the window that
appears is 1 inch square, as expected, but the result from dev.size() is
c(72,72), which isn't correct. My display is 1440x900, but if I call

  dev.new(width=720/72, height=450/72)

the resulting device fills much more than half the screen. But R gets the
size right in inches, so it, or something it calls, must presumably know
the real DPI value.

So, could anyone tell me whether there is a reliable way to determine DPI,
please?

All the best,
Jon


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Timezone warnings on package install in R-alpha

2014-03-24 Thread Jon Clayden
Dear all,

As of the current R alpha release, I'm seeing timezone-related warnings on
installing any package (including the recommended ones), which I haven't
seen before. For example,

[~/Documents/Source/R-alpha]$ bin/R CMD INSTALL ~/git/tractor/lib/reportr
* installing to library '/Users/jon/Documents/Source/R-alpha/library'
* installing *source* package 'reportr' ...
Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
Warning in as.POSIXlt.POSIXct(x, tz) :
  unknown timezone 'America/New_York'
Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
Warning in as.POSIXlt.POSIXct(x, tz) :
  unknown timezone 'America/New_York'
** R
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (reportr)

This is R-alpha r65266, built from source on OS X 10.9.2 using gcc 4.8.2. I
ran configure with

./configure --with-blas="-framework Accelerate" --with-lapack
--with-system-zlib --enable-memory-profiling
--with-tcl-config=/System/Library/Frameworks/Tcl.framework/tclConfig.sh
--with-tk-config=/System/Library/Frameworks/Tk.framework/tkConfig.sh
CC=gcc-4.8 CXX=g++-4.8 OBJC=clang F77=gfortran-4.8 FC=gfortran-4.8
CPPFLAGS="-D__ACCELERATE__" CFLAGS="-mtune=native -g -O2"
CXXFLAGS="-mtune=native -g -O2" FFLAGS="-mtune=native -g -O2"
FCFLAGS="-mtune=native -g -O2"

Session info is

R version 3.1.0 alpha (2014-03-23 r65266)
Platform: x86_64-apple-darwin13.1.0 (64-bit)

locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

I see some related material in the NEWS, but no indication that these
warnings are expected. I hope this report is helpful.

All the best,
Jon


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Timezone warnings on package install in R-alpha

2014-04-03 Thread Jon Clayden
For what it's worth, this issue persists in R-rc_2014-04-02_r65358.

Regards,
Jon


On 24 March 2014 10:40, Jon Clayden  wrote:

> Dear all,
>
> As of the current R alpha release, I'm seeing timezone-related warnings on
> installing any package (including the recommended ones), which I haven't
> seen before. For example,
>
> [~/Documents/Source/R-alpha]$ bin/R CMD INSTALL ~/git/tractor/lib/reportr
> * installing to library '/Users/jon/Documents/Source/R-alpha/library'
> * installing *source* package 'reportr' ...
> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> Warning in as.POSIXlt.POSIXct(x, tz) :
>   unknown timezone 'America/New_York'
> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> Warning in as.POSIXlt.POSIXct(x, tz) :
>   unknown timezone 'America/New_York'
> ** R
> ** preparing package for lazy loading
> ** help
> *** installing help indices
> ** building package indices
> ** testing if installed package can be loaded
> * DONE (reportr)
>
> This is R-alpha r65266, built from source on OS X 10.9.2 using gcc 4.8.2.
> I ran configure with
>
> ./configure --with-blas="-framework Accelerate" --with-lapack
> --with-system-zlib --enable-memory-profiling
> --with-tcl-config=/System/Library/Frameworks/Tcl.framework/tclConfig.sh
> --with-tk-config=/System/Library/Frameworks/Tk.framework/tkConfig.sh
> CC=gcc-4.8 CXX=g++-4.8 OBJC=clang F77=gfortran-4.8 FC=gfortran-4.8
> CPPFLAGS="-D__ACCELERATE__" CFLAGS="-mtune=native -g -O2"
> CXXFLAGS="-mtune=native -g -O2" FFLAGS="-mtune=native -g -O2"
> FCFLAGS="-mtune=native -g -O2"
>
> Session info is
>
> R version 3.1.0 alpha (2014-03-23 r65266)
> Platform: x86_64-apple-darwin13.1.0 (64-bit)
>
> locale:
> [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
>
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
>
> I see some related material in the NEWS, but no indication that these
> warnings are expected. I hope this report is helpful.
>
> All the best,
> Jon
>
>


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Timezone warnings on package install in R-alpha

2014-04-03 Thread Jon Clayden
Many thanks, Prof Ripley. The "--without-internal-tzcode" option does
indeed resolve the problem.

Regards,
Jon


On 3 April 2014 13:38, Prof Brian Ripley  wrote:

> On 03/04/2014 13:27, peter dalgaard wrote:
>
>> I'm seeing nothing of the sort with the nightly build of 3.1.0RC, also on
>> 10.9.2. This is a plain-vanilla Xcode+ancillaries build as per Simon's
>> instructions (I think):
>>
>> pd$ more config.site
>> r_arch=${r_arch:=x86_64}
>> CC="gcc -arch $r_arch"
>> CXX="g++ -arch $r_arch"
>> F77="gfortran -arch $r_arch"
>> FC="gfortran -arch $r_arch"
>> OBJC="gcc -arch $r_arch"
>> with_blas="-framework vecLib"
>> with_lapack=yes
>>
>> so either something is up specifically with gcc-4.8, or you managed to
>> hose your time zone data base somehow (/usr/share/zoneinfo, I suppose).
>>
>
> More likely the one shipping with R, since --with-internal-tzcode is the
> default on OS X [*].  Setting TZDIR incorrectly would do this:
>
> > Sys.time()
> [1] "2014-04-03 12:37:01 GMT"
> Warning messages:
> 1: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
> 2: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> 3: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'America/New_York'
>
> when I do that.
>
> You could try --without-internal-tzcode.
>
> [*] Although x86_64 OS X has a 64-bit time_t it seems to have a 32-bit
> time-zone database and so wraps around.
>
>
>
>  - Peter D.
>>
>> On 03 Apr 2014, at 13:24 , Jon Clayden  wrote:
>>
>>  For what it's worth, this issue persists in R-rc_2014-04-02_r65358.
>>>
>>> Regards,
>>> Jon
>>>
>>>
>>> On 24 March 2014 10:40, Jon Clayden  wrote:
>>>
>>>  Dear all,
>>>>
>>>> As of the current R alpha release, I'm seeing timezone-related warnings
>>>> on
>>>> installing any package (including the recommended ones), which I haven't
>>>> seen before. For example,
>>>>
>>>> [~/Documents/Source/R-alpha]$ bin/R CMD INSTALL
>>>> ~/git/tractor/lib/reportr
>>>> * installing to library '/Users/jon/Documents/Source/R-alpha/library'
>>>> * installing *source* package 'reportr' ...
>>>> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
>>>> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
>>>> Warning in as.POSIXlt.POSIXct(x, tz) :
>>>>   unknown timezone 'America/New_York'
>>>> Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
>>>> Warning in as.POSIXlt.POSIXct(x, tz) :
>>>>   unknown timezone 'America/New_York'
>>>> ** R
>>>> ** preparing package for lazy loading
>>>> ** help
>>>> *** installing help indices
>>>> ** building package indices
>>>> ** testing if installed package can be loaded
>>>> * DONE (reportr)
>>>>
>>>> This is R-alpha r65266, built from source on OS X 10.9.2 using gcc
>>>> 4.8.2.
>>>> I ran configure with
>>>>
>>>> ./configure --with-blas="-framework Accelerate" --with-lapack
>>>> --with-system-zlib --enable-memory-profiling
>>>> --with-tcl-config=/System/Library/Frameworks/Tcl.framework/tclConfig.sh
>>>> --with-tk-config=/System/Library/Frameworks/Tk.framework/tkConfig.sh
>>>> CC=gcc-4.8 CXX=g++-4.8 OBJC=clang F77=gfortran-4.8 FC=gfortran-4.8
>>>> CPPFLAGS="-D__ACCELERATE__" CFLAGS="-mtune=native -g -O2"
>>>> CXXFLAGS="-mtune=native -g -O2" FFLAGS="-mtune=native -g -O2"
>>>> FCFLAGS="-mtune=native -g -O2"
>>>>
>>>> Session info is
>>>>
>>>> R version 3.1.0 alpha (2014-03-23 r65266)
>>>> Platform: x86_64-apple-darwin13.1.0 (64-bit)
>>>>
>>>> locale:
>>>> [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
>>>>
>>>> attached base packages:
>>>> [1] stats graphics  grDevices utils datasets  methods   base
>>>>
>>>> I see some related material in the NEWS, but no indication that these
>>>> warnings are expected. I hope this report is helpful.
>>>>
>>>> All the best,
>>>> Jon
>>>>
>>>>
>>>>
>>> [[alternative HTML version deleted]]
>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>
>>
>>
>
> --
> Brian D. Ripley,  rip...@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK        Fax:  +44 1865 272595
>


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Timezone warnings on package install in R-alpha

2014-04-03 Thread Jon Clayden
That doesn't seem to be the case. After rebuilding using the old configure
options, I see

> Sys.getenv("TZDIR")
[1] ""

Jon


On 3 April 2014 14:39, peter dalgaard  wrote:

> Thanks to Brian. Yet another thing that zoomed by without me really
> noticing.
>
> However, I'd like to be sure that it isn't a "make dist" issue. We do seem
> to ship the correct files in src/extra/tzone, but could you please check
> Brian's suggestion about TZDIR possibly being set incorrectly?
>
> -pd
>
> On 03 Apr 2014, at 14:47 , Jon Clayden  wrote:
>
> > Many thanks, Prof Ripley. The "--without-internal-tzcode" option does
> indeed resolve the problem.
> >
> > Regards,
> > Jon
> >
> >
> > On 3 April 2014 13:38, Prof Brian Ripley  wrote:
> > On 03/04/2014 13:27, peter dalgaard wrote:
> > I'm seeing nothing of the sort with the nightly build of 3.1.0RC, also
> on 10.9.2. This is a plain-vanilla Xcode+ancillaries build as per Simon's
> instructions (I think):
> >
> > pd$ more config.site
> > r_arch=${r_arch:=x86_64}
> > CC="gcc -arch $r_arch"
> > CXX="g++ -arch $r_arch"
> > F77="gfortran -arch $r_arch"
> > FC="gfortran -arch $r_arch"
> > OBJC="gcc -arch $r_arch"
> > with_blas="-framework vecLib"
> > with_lapack=yes
> >
> > so either something is up specifically with gcc-4.8, or you managed to
> hose your time zone data base somehow (/usr/share/zoneinfo, I suppose).
> >
> > More likely the one shipping with R, since --with-internal-tzcode is the
> default on OS X [*].  Setting TZDIR incorrectly would do this:
> >
> > > Sys.time()
> > [1] "2014-04-03 12:37:01 GMT"
> > Warning messages:
> > 1: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
> > 2: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> > 3: In as.POSIXlt.POSIXct(x, tz) : unknown timezone 'America/New_York'
> >
> > when I do that.
> >
> > You could try --without-internal-tzcode.
> >
> > [*] Although x86_64 OS X has a 64-bit time_t it seems to have a 32-bit
> time-zone database and so wraps around.
> >
> >
> >
> > - Peter D.
> >
> > On 03 Apr 2014, at 13:24 , Jon Clayden  wrote:
> >
> > For what it's worth, this issue persists in R-rc_2014-04-02_r65358.
> >
> > Regards,
> > Jon
> >
> >
> > On 24 March 2014 10:40, Jon Clayden  wrote:
> >
> > Dear all,
> >
> > As of the current R alpha release, I'm seeing timezone-related warnings
> on
> > installing any package (including the recommended ones), which I haven't
> > seen before. For example,
> >
> > [~/Documents/Source/R-alpha]$ bin/R CMD INSTALL ~/git/tractor/lib/reportr
> > * installing to library '/Users/jon/Documents/Source/R-alpha/library'
> > * installing *source* package 'reportr' ...
> > Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'Europe/London'
> > Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> > Warning in as.POSIXlt.POSIXct(x, tz) :
> >   unknown timezone 'America/New_York'
> > Warning in as.POSIXlt.POSIXct(x, tz) : unknown timezone 'GMT'
> > Warning in as.POSIXlt.POSIXct(x, tz) :
> >   unknown timezone 'America/New_York'
> > ** R
> > ** preparing package for lazy loading
> > ** help
> > *** installing help indices
> > ** building package indices
> > ** testing if installed package can be loaded
> > * DONE (reportr)
> >
> > This is R-alpha r65266, built from source on OS X 10.9.2 using gcc 4.8.2.
> > I ran configure with
> >
> > ./configure --with-blas="-framework Accelerate" --with-lapack
> > --with-system-zlib --enable-memory-profiling
> > --with-tcl-config=/System/Library/Frameworks/Tcl.framework/tclConfig.sh
> > --with-tk-config=/System/Library/Frameworks/Tk.framework/tkConfig.sh
> > CC=gcc-4.8 CXX=g++-4.8 OBJC=clang F77=gfortran-4.8 FC=gfortran-4.8
> > CPPFLAGS="-D__ACCELERATE__" CFLAGS="-mtune=native -g -O2"
> > CXXFLAGS="-mtune=native -g -O2" FFLAGS="-mtune=native -g -O2"
> > FCFLAGS="-mtune=native -g -O2"
> >
> > Session info is
> >
> > R version 3.1.0 alpha (2014-03-23 r65266)
> > Platform: x86_64-apple-darwin13.1.0 (64-bit)
> >
> > locale:
> > [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
> >
> >

[Rd] Dealing with printf() &c. in third-party library code

2012-03-14 Thread Jon Clayden
Dear all,

I recognise the reason for strongly discouraging use of printf() and
similar C functions in R packages, but I wonder what people do in
practice about third-party code which may be littered with such calls.
I maintain a package (RNiftyReg) which provides an R interface to a
third-party library which contains hundreds of calls to printf(...),
fprintf(stderr,...) and similar. It seems to me that there are several
possible approaches, but all have their issues:

1. Replace all such calls with equivalent Rprintf() calls, using
compiler preprocessing directives to ensure the library does not
become incompatible with other code. For example,

#ifdef RNIFTYREG
Rprintf(...);
#else
printf(...);
#endif

This will be very time-consuming if there are lots of calls, and also
makes the code very untidy and much harder to update when a new
version of the upstream library is released.

2. Remove all such calls from the code altogether, or comment them
out. The problem here is that doing this safely is hard, because the
call could be part of an "if" statement or similar. For example,

if (test)
 printf("Something");
do_something_important;

If the middle line here is removed, then the last line becomes
(erroneously) conditioned on the test. Plus, once again, you are
introducing a lot of small changes to the library itself.

3. Redefine printf to use Rprintf, viz.

#ifdef RNIFTYREG
#include <R.h>
#define printf Rprintf
#endif

This will compile as long as the R function is a drop-in replacement
for the original function, which I believe is true for Rprintf (vs.
printf), but isn't true for Calloc (vs. calloc), for example. And I'm
not sure whether this approach can be used to deal with cases of the
form fprintf(stderr,...), where stderr would need to be redefined.
This approach requires only modest changes to the library itself, but
may be fragile to future changes in R.

Are there any other (better?) alternatives? Any thoughts or advice
would be appreciated.

All the best,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Dealing with printf() &c. in third-party library code

2012-03-15 Thread Jon Clayden
Martin,

Thanks for your reply. I wonder if you'd be willing to post your
"my_fprintf" function, since I'm struggling to get around needing to
use the "stdout" and "stderr" symbols completely. This function has
the right effect...

void rniftyreg_fprintf (FILE *stream, const char *format, ...)
{
   va_list args;
   va_start(args, format);

   if (stream == stdout)
   Rvprintf(format, args);
   else if (stream == stderr)
   REvprintf(format, args);
   else
   vfprintf(stream, format, args);

   va_end(args);
}

... but the R CMD check info message still arises because stdout and
stderr still appear. I'm struggling to see how to get around this
without doing something really ugly, like casting integers to FILE*
pointers.
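For the record, the compile-time remapping Martin describes below can be requested from src/Makevars, so the third-party sources themselves need no edits (a sketch; the wrapper function then lives in its own source file in the package):

```make
## Hedged sketch: the preprocessor rewrites every fprintf call in the
## library's sources to call the wrapper; only the wrapper's own
## translation unit then mentions stdout/stderr.
PKG_CPPFLAGS = -DRNIFTYREG -Dfprintf=rniftyreg_fprintf
```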

All the best,
Jon


On 15 March 2012 05:04, Martin Morgan  wrote:
> On 03/14/2012 05:15 AM, Jon Clayden wrote:
>>
>> Dear all,
>>
>> I recognise the reason for strongly discouraging use of printf() and
>> similar C functions in R packages, but I wonder what people do in
>> practice about third-party code which may be littered with such calls.
>> I maintain a package (RNiftyReg) which provides an R interface to a
>> third-party library which contains hundreds of calls to printf(...),
>> fprintf(stderr,...) and similar. It seems to me that there are several
>> possible approaches, but all have their issues:
>>
>> 1. Replace all such calls with equivalent Rprintf() calls, using
>> compiler preprocessing directives to ensure the library does not
>> become incompatible with other code. For example,
>>
>> #ifdef RNIFTYREG
>> Rprintf(...);
>> #else
>> printf(...);
>> #endif
>>
>> This will be very time-consuming if there are lots of calls, and also
>> makes the code very untidy and much harder to update when a new
>> version of the upstream library is released.
>>
>> 2. Remove all such calls from the code altogether, or comment them
>> out. The problem here is that doing this safely is hard, because the
>> call could be part of an "if" statement or similar. For example,
>>
>> if (test)
>>  printf("Something");
>> do_something_important;
>>
>> If the middle line here is removed, then the last line becomes
>> (erroneously) conditioned on the test. Plus, once again, you are
>> introducing a lot of small changes to the library itself.
>>
>> 3. Redefine printf to use Rprintf, viz.
>>
>> #ifdef RNIFTYREG
>> #include <R.h>
>> #define printf Rprintf
>> #endif
>>
>> This will compile as long as the R function is a drop-in replacement
>> for the original function, which I believe is true for Rprintf (vs.
>> printf), but isn't true for Calloc (vs. calloc), for example. And I'm
>> not sure whether this approach can be used to deal with cases of the
>> form fprintf(stderr,...), where stderr would need to be redefined.
>> This approach requires only modest changes to the library itself, but
>> may be fragile to future changes in R.
>>
>> Are there any other (better?) alternatives? Any thoughts or advice
>> would be appreciated.
>
>
> In Makevars, I add -Dfprintf=my_fprintf to the pre-processor flags and then
> implement my_fprintf in a separate source file. This means that the source
> code of the 3rd party library is not touched, and there is some scope for
> re-mapping or otherwise intercepting function arguments. For abort and
> error, I throw an error that encourages the user to save and quit
> immediately, though this is far from ideal. I too would be interested in
> better practices for dealing with this, short of whole-sale modification of
> the third-party library.
>
> Martin
>
>>
>> All the best,
>> Jon
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>
>
>
> --
> Computational Biology
> Fred Hutchinson Cancer Research Center
> 1100 Fairview Ave. N. PO Box 19024 Seattle, WA 98109
>
> Location: M1-B861
> Telephone: 206 667-2793

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Dealing with printf() &c. in third-party library code

2012-03-16 Thread Jon Clayden
On 16 March 2012 00:48, Martin Morgan  wrote:
> On 03/15/2012 02:24 PM, Jon Clayden wrote:
>>
>> Martin,
>>
>> Thanks for your reply. I wonder if you'd be willing to post your
>> "my_fprintf" function, since I'm struggling to get around needing to
>> use the "stdout" and "stderr" symbols completely. This function has
>> the right effect...
>>
>> void rniftyreg_fprintf (FILE *stream, const char *format, ...)
>> {
>>    va_list args;
>>    va_start(args, format);
>>
>>    if (stream == stdout)
>>        Rvprintf(format, args);
>>    else if (stream == stderr)
>>        REvprintf(format, args);
>>    else
>>        vfprintf(stream, format, args);
>>
>>    va_end(args);
>> }
>>
>> ... but the R CMD check info message still arises because stdout and
>> stderr still appear. I'm struggling to see how to get around this
>> without doing something really ugly, like casting integers to FILE*
>> pointers.
>
>
> Hi Jon --
>
> My own implementation is like yours, where I still reference stderr /
> stdout. But it seems like the meaningful problem (writing to stderr /
> stdout, which R might have re-directed) has been addressed (except for the
> third branch, in your code above).

Yes, this certainly deals with the important issue - the check note
becomes a false positive. I need the third branch to allow things to
work properly for fprintf calls which write to actual files...

Thanks again,
Jon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Compilation failure on Solaris: any advice?

2012-12-03 Thread Jon Clayden
Dear all,

The current version of my RNiftyReg package is failing to compile on CRAN's
Solaris testbed, but I don't have access to a Solaris system to debug on,
and Googling the error hasn't been very helpful. The error is

CC -library=stlport4 -I/home/ripley/R/cc/include -DNDEBUG -DNDEBUG
-DRNIFTYREG -I/usr/local/include-KPIC  -O -xlibmil -xtarget=native
-nofstore  -c niftyreg.cpp -o niftyreg.o
"_reg_f3d_sym.cpp", line 25: Error: reg_f3d may not have a type qualifier.
"niftyreg.cpp", line 527: Where: While instantiating
"reg_f3d_sym::reg_f3d_sym(int, int)".
"niftyreg.cpp", line 527: Where: Instantiated from non-template code.
"_reg_f3d_sym.cpp", line 26: Error: reg_f3d cannot be initialized
in a constructor.
"niftyreg.cpp", line 527: Where: While instantiating
"reg_f3d_sym::reg_f3d_sym(int, int)".
"niftyreg.cpp", line 527: Where: Instantiated from non-template code.
"_reg_f3d_sym.cpp", line 26: Error: Could not find
reg_f3d::reg_f3d() to initialize base class.
"niftyreg.cpp", line 527: Where: While instantiating
"reg_f3d_sym::reg_f3d_sym(int, int)".
"niftyreg.cpp", line 527: Where: Instantiated from non-template code.
3 Error(s) detected.
*** Error code 2
make: Fatal error: Command failed for target `niftyreg.o'


(Full log at [1].) The relevant part of the source is a C++ class
constructor, part of the library that my package interfaces to:

template <class T>
reg_f3d_sym<T>::reg_f3d_sym(int refTimePoint,int floTimePoint)
:reg_f3d<T>::reg_f3d(refTimePoint,floTimePoint)
{
this->executableName=(char *)"NiftyReg F3D SYM";

this->backwardControlPointGrid=NULL;
this->backwardWarped=NULL;
this->backwardWarpedGradientImage=NULL;
this->backwardDeformationFieldImage=NULL;
this->backwardVoxelBasedMeasureGradientImage=NULL;
this->backwardNodeBasedGradientImage=NULL;

this->backwardBestControlPointPosition=NULL;
this->backwardConjugateG=NULL;
this->backwardConjugateH=NULL;

this->backwardProbaJointHistogram=NULL;
this->backwardLogJointHistogram=NULL;

this->floatingMaskImage=NULL;
this->currentFloatingMask=NULL;
this->floatingMaskPyramid=NULL;
this->backwardActiveVoxelNumber=NULL;

this->inverseConsistencyWeight=0.1;

#ifndef NDEBUG
printf("[NiftyReg DEBUG] reg_f3d_sym constructor called\n");
#endif
}

The error does not occur on any Windows, Linux or OS X system which I have
access to, so this would seem to be an issue relating to the Solaris
compiler toolchain in particular. Can anyone shed any light on it, please?

Thanks in advance,
Jon

--
[1]
http://www.r-project.org/nosvn/R.check/r-patched-solaris-x86/RNiftyReg-00install.html


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Compilation failure on Solaris: any advice?

2012-12-06 Thread Jon Clayden
Dear Elijah,

Many thanks for the reply.

Is that a build with "good old" Studio or a build with a recent GCC?
>

According to <
http://cran.r-project.org/web/checks/check_flavors.html#r-patched-solaris-x86>
it's Studio 12.3 on Solaris 10.


> I don't have any direct comments that would be helpful to you - but let me
> know if you need a place to do some test builds and try to figure it out.
>  I can certainly help you with that.
>

That would be fantastic - thank you very much for the offer. I would hope
that it should be fairly quick to resolve (don't we always!), so I wouldn't
need access for long.

Regards,
Jon


>
> [Are more Solaris-esque build slaves needed?  Someone give a shout if
> so... we can sponsor some infrastructure there.]
>
> --elijah
> (@Joyent)
>
>
>
> On Mon, Dec 3, 2012 at 11:28 AM, Jon Clayden wrote:
>
>> Dear all,
>>
>> The current version of my RNiftyReg package is failing to compile on
>> CRAN's
>> Solaris testbed, but I don't have access to a Solaris system to debug on,
>> and Googling the error hasn't been very helpful. The error is
>>
>> CC -library=stlport4 -I/home/ripley/R/cc/include -DNDEBUG -DNDEBUG
>> -DRNIFTYREG -I/usr/local/include-KPIC  -O -xlibmil -xtarget=native
>> -nofstore  -c niftyreg.cpp -o niftyreg.o
>> "_reg_f3d_sym.cpp", line 25: Error: reg_f3d may not have a type qualifier.
>> "niftyreg.cpp", line 527: Where: While instantiating
>> "reg_f3d_sym::reg_f3d_sym(int, int)".
>> "niftyreg.cpp", line 527: Where: Instantiated from non-template code.
>> "_reg_f3d_sym.cpp", line 26: Error: reg_f3d cannot be initialized
>> in a constructor.
>> "niftyreg.cpp", line 527: Where: While instantiating
>> "reg_f3d_sym::reg_f3d_sym(int, int)".
>> "niftyreg.cpp", line 527: Where: Instantiated from non-template code.
>> "_reg_f3d_sym.cpp", line 26: Error: Could not find
>> reg_f3d::reg_f3d() to initialize base class.
>> "niftyreg.cpp", line 527: Where: While instantiating
>> "reg_f3d_sym::reg_f3d_sym(int, int)".
>> "niftyreg.cpp", line 527: Where: Instantiated from non-template code.
>> 3 Error(s) detected.
>> *** Error code 2
>> make: Fatal error: Command failed for target `niftyreg.o'
>>
>>
>> (Full log at [1].) The relevant part of the source is a C++ class
>> constructor, part of the library that my package interfaces to:
>>
>> template <class T>
>> reg_f3d_sym<T>::reg_f3d_sym(int refTimePoint,int floTimePoint)
>> :reg_f3d<T>::reg_f3d(refTimePoint,floTimePoint)
>> {
>> this->executableName=(char *)"NiftyReg F3D SYM";
>>
>> this->backwardControlPointGrid=NULL;
>> this->backwardWarped=NULL;
>> this->backwardWarpedGradientImage=NULL;
>> this->backwardDeformationFieldImage=NULL;
>> this->backwardVoxelBasedMeasureGradientImage=NULL;
>> this->backwardNodeBasedGradientImage=NULL;
>>
>> this->backwardBestControlPointPosition=NULL;
>> this->backwardConjugateG=NULL;
>> this->backwardConjugateH=NULL;
>>
>> this->backwardProbaJointHistogram=NULL;
>> this->backwardLogJointHistogram=NULL;
>>
>> this->floatingMaskImage=NULL;
>> this->currentFloatingMask=NULL;
>> this->floatingMaskPyramid=NULL;
>> this->backwardActiveVoxelNumber=NULL;
>>
>> this->inverseConsistencyWeight=0.1;
>>
>> #ifndef NDEBUG
>> printf("[NiftyReg DEBUG] reg_f3d_sym constructor called\n");
>> #endif
>> }
>>
>> The error does not occur on any Windows, Linux or OS X system which I have
>> access to, so this would seem to be an issue relating to the Solaris
>> compiler toolchain in particular. Can anyone shed any light on it, please?
>>
>> Thanks in advance,
>> Jon
>>
>> --
>> [1]
>>
>> http://www.r-project.org/nosvn/R.check/r-patched-solaris-x86/RNiftyReg-00install.html
>>
>> [[alternative HTML version deleted]]
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Catching errors from solve() with near-singular matrices

2012-12-11 Thread Jon Clayden
Dear David,

I can think of two strategies for dealing with this problem:
>
> Strategy 1: Some code like this:
>if (det(X) < epsilon) {
>   warning("Near singular matrix")
>   return(NULL)
>}
>return(solve(X))


This solution is probably the easiest one to take, but to match
solve.default, the test should be

  if (rcond(X) < .Machine$double.eps)

Catching that case should avoid the error. I hope this helps.
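Putting that together, a minimal sketch (the function name safeSolve is illustrative, not from any package):

```r
## Hedged sketch: test the reciprocal condition number before inverting,
## using the same tolerance that solve.default applies
safeSolve <- function(X, tol = .Machine$double.eps)
{
    if (rcond(X) < tol) {
        warning("Near-singular matrix")
        return(NULL)
    }
    solve(X)
}
```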

All the best,
Jon


__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Running R scripts with interactive-style evaluation

2013-02-26 Thread Jon Clayden
If you're intending to run some code that may require user input, then I
share your need. I started two threads on this some time ago [1,2], but as
far as I know it still isn't possible. My workaround is to use "expect", or
to create a temporary .Rprofile if that is not available, from within a
shell script wrapper (see [3], lines 197-221). It isn't pretty, and I'd
love to see support for this kind of use case in R proper (happy to
contribute my time to help if someone with better knowledge of the R source
could guide me), but it's the best solution I've found.

All the best,
Jon

--
[1] https://stat.ethz.ch/pipermail/r-help/2008-January/150786.html
[2] https://stat.ethz.ch/pipermail/r-devel/2008-September/050803.html
[3] https://github.com/jonclayden/tractor/blob/master/bin/tractor#L197
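
For reference, the temporary-.Rprofile workaround mentioned above can be
sketched as a small shell wrapper (a simplification of what the tractor
script does; the file names and the guard on the R binary are
illustrative assumptions, not the script's actual code):

```shell
#!/bin/sh
# Illustrative sketch: R sources .Rprofile from its startup directory, so
# placing the script's code there yields an ordinary interactive session
# that first runs the code, leaving stdin free for readline() etc.
workdir=$(mktemp -d)
cat > "$workdir/.Rprofile" <<'EOF'
source("script.R", echo = TRUE)
EOF
# Launch R there only when it is installed and stdin is a terminal
# (guarded so the sketch is harmless elsewhere).
if command -v R >/dev/null 2>&1 && [ -t 0 ]; then
  ( cd "$workdir" && R --no-save --quiet )
fi
rm -r "$workdir"
```

The scratch directory keeps the temporary .Rprofile from shadowing the
user's real one outside the wrapped session.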



On 26 February 2013 10:07, Marc Aurel Kiefer  wrote:

> Hi,
>
> when running a R-script like this:
>
> enable_magic()
> compute_stuff()
> disable_magic()
>
> the whole script is parsed into a single expression and then evaluated,
> whereas when using the interactive shell after each line entered, a REPL
> loop happens.
>
> Is there a way to make a script evaluation behave like this, because I
> need a single REPL iteration for every expression in the script.
>
> It doesn't matter if it's a source()-like way or "R CMD BATCH" or even
> feeding stdin to R or whatever...
>
> Regards,
>
> Marc
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>



[Rd] readBin() arg check has unnecessary overhead (patch included)

2009-08-11 Thread Jon Clayden
Dear all,

The version of readBin() in R-devel includes a use of match(), through
`%in%`, which can affect its performance significantly. By using
primitives instead of the rather expensive call to match(), I reduce
the time spent inside readBin() by more than 30% in some of my code
(part of the tractor.base package). A simple patch that does this is
given below. This passes "make check-devel" fine, and I don't see that
it could produce unexpected behaviour -- though I may, of course, be
wrong.

Regards,
Jon

--- R-devel/src/library/base/R/connections.R     2009-08-07 01:52:16 +0100
+++ R-devel-mod/src/library/base/R/connections.R  2009-08-11 16:22:30 +0100
@@ -193,6 +193,6 @@
 swap <- endian != .Platform$endian
 if(!is.character(what) || length(what) != 1L
-   || !(what %in% c("numeric", "double", "integer", "int", "logical",
- "complex", "character", "raw")))
+   || !any(what == c("numeric", "double", "integer", "int", "logical",
+  "complex", "character", "raw")))
 what <- typeof(what)
 .Internal(readBin(con, what, n, size, signed, swap))
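
The effect of the change can be illustrated with a rough
micro-benchmark (my own illustration; the loop count is arbitrary and
absolute timings will vary by machine and R version):

```r
# %in% dispatches to match(), which does hashing work on every call;
# for a scalar tested against a short fixed vector, any(==) uses only
# primitive operations.
what <- "integer"
choices <- c("numeric", "double", "integer", "int", "logical",
             "complex", "character", "raw")
system.time(for (i in 1:200000) what %in% choices)
system.time(for (i in 1:200000) any(what == choices))
```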


--
Jonathan D. Clayden, Ph.D.
Research Fellow
Radiology and Physics Unit
UCL Institute of Child Health
30 Guilford Street
LONDON  WC1N 1EH
United Kingdom

t | +44 (0)20 7905 2708
f | +44 (0)20 7905 2358
w | www.homepages.ucl.ac.uk/~sejjjd2/
w | www.diffusion-mri.org.uk/people/1



Re: [Rd] readBin() arg check has unnecessary overhead (patch included)

2009-08-12 Thread Jon Clayden
>    > Dear all,
>    > The version of readBin() in R-devel includes a use of match(), through
>    > `%in%`, which can affect its performance significantly. By using
>    > primitives instead of the rather expensive call to match(), I reduce
>    > the time spent inside readBin() by more than 30% in some of my code
>    > (part of the tractor.base package). A simple patch that does this is
>    > given below. This passes "make check-devel" fine, and I don't see that
>    > it could produce unexpected behaviour -- though I may, of course, be
>    > wrong.
>
> actually,  %in%  is liked by programmeRs for its inherent
> robustness combined with "expressiveness" (<-> readability)
> in spite of its potential efficiency loss wrt  '=='

Oh, absolutely. I like it and use it widely. But my feeling was that
in core code, a small loss in expressiveness for a significant
performance improvement is a trade worth making.

> and indeed, your patch fails in one case where the original code works:
>
>  readBin(., NA_character_, ...)
>
> However that case can also be checked explicitly,
> and I will implement the corresponding patch.

Many thanks.

Regards,
Jon
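
The failure case can be seen directly in a session (my illustration,
not from the thread): for an NA input the two tests disagree, and the
patched condition would then error inside if().

```r
choices <- c("numeric", "double", "integer", "int")
NA_character_ %in% choices     # FALSE: match() treats NA as unmatched
any(NA_character_ == choices)  # NA: any comparison with NA yields NA
# So if(!any(NA_character_ == choices)) stops with
# "missing value where TRUE/FALSE needed", where %in% worked.
```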



[Rd] readBin is much slower for raw input than for a file

2007-01-26 Thread Jon Clayden
Dear all,

I'm trying to write an efficient binary file reader for a file type
that is made up of several fields of variable length, and so requires
many small reads. Doing this on the file directly using a sequence of
readBin() calls is a bit too slow for my needs, so I tried buffering
the file into a raw vector and reading from that ("loc" is the
equivalent of the file pointer):

fileSize <- file.info(fileName)$size
connection <- file(fileName, "rb")
bytes <- readBin(connection, "raw", n=fileSize)
loc <- 0
close(connection)

--

# within a custom read function:
if (loc == 0)
data <- readBin(bytes, what, n, size, ...)
else if (loc > 0)
data <- readBin(bytes[-(1:loc)], what, n, size, ...)

However, this method runs almost 10 times slower for me than the
sequence of file reads did. The initial call to readBin() - for
reading in the file - is very quick, but running Rprof shows that the
vast majority of the run time in doing the full parse is spent in
readBin, so it does seem to be that that's slowing things down. Can
anyone shed any light on why this is?

I'm not expecting miracles here - and I realise that writing the whole
read routine in C would be much quicker - but surely reading from a
raw vector should work out faster than reading from a file? The system
is R-2.4.1/Linux, Xeon 3.2 GHz, 2 GiB RAM; typical file size is 44
KiB.

Thanks in advance,
Jon
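
One possible workaround (my assumption, not something suggested in the
thread) is to wrap the raw vector in a rawConnection(), so that R
tracks the read position internally instead of the code copying the
vector with bytes[-(1:loc)] on every call:

```r
# Stand-in for the buffered file contents:
bytes <- writeBin(c(42L, 7L), raw())

con <- rawConnection(bytes)               # readBin() advances the position
first  <- readBin(con, "integer", n = 1)  # 42
second <- readBin(con, "integer", n = 1)  # 7
close(con)
```

Negative indexing allocates a fresh copy of the remaining bytes on
every read, which is the likely source of the quadratic-feeling cost.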



[Rd] R 2.7.0, match() and strings containing \0 - bug?

2008-04-28 Thread Jon Clayden
Hi,

A piece of my code that uses readBin() to read a certain file type is
behaving strangely with R 2.7.0. This seems to be because of a failure
to match() strings after using rawToChar() when the original was
terminated with a "\0" character. Direct equality testing with ==
still works as expected. I can reproduce this as follows:

> x <- "foo"
> y <- c(charToRaw("foo"),as.raw(0))
> z <- rawToChar(y)
> z==x
[1] TRUE
> z=="foo"
[1] TRUE
> z %in% c("foo","bar")
[1] FALSE
> z %in% c("foo","bar","foo\0")
[1] FALSE

But without the nul character it works fine:

> zz <- rawToChar(charToRaw("foo"))
> zz %in% c("foo","bar")
[1] TRUE

I don't see anything about this in the latest NEWS, but is this
expected behaviour? Or is it, as I suspect, a bug? This seems to be
new to R 2.7.0, as I said.

Regards,
Jon



Re: [Rd] R 2.7.0, match() and strings containing \0 - bug?

2008-04-28 Thread Jon Clayden
Apologies for missing out the sessionInfo():

R version 2.7.0 (2008-04-22)
i386-apple-darwin8.10.1

locale:
en_GB.UTF-8/en_US.UTF-8/C/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base


2008/4/28 Jon Clayden <[EMAIL PROTECTED]>:
> Hi,
>
>  A piece of my code that uses readBin() to read a certain file type is
>  behaving strangely with R 2.7.0. This seems to be because of a failure
>  to match() strings after using rawToChar() when the original was
>  terminated with a "\0" character. Direct equality testing with ==
>  still works as expected. I can reproduce this as follows:
>
>  > x <- "foo"
>  > y <- c(charToRaw("foo"),as.raw(0))
>  > z <- rawToChar(y)
>  > z==x
>  [1] TRUE
>  > z=="foo"
>  [1] TRUE
>  > z %in% c("foo","bar")
>  [1] FALSE
>  > z %in% c("foo","bar","foo\0")
>  [1] FALSE
>
>  But without the nul character it works fine:
>
>  > zz <- rawToChar(charToRaw("foo"))
>  > zz %in% c("foo","bar")
>  [1] TRUE
>
>  I don't see anything about this in the latest NEWS, but is this
>  expected behaviour? Or is it, as I suspect, a bug? This seems to be
>  new to R 2.7.0, as I said.
>
>  Regards,
>  Jon
>



Re: [Rd] R 2.7.0, match() and strings containing \0 - bug?

2008-04-28 Thread Jon Clayden
2008/4/28 Prof Brian Ripley <[EMAIL PROTECTED]>:
>
> On Mon, 28 Apr 2008, Jon Clayden wrote:
>
>
> > Hi,
> >
> > A piece of my code that uses readBin() to read a certain file type is
> > behaving strangely with R 2.7.0. This seems to be because of a failure
> > to match() strings after using rawToChar() when the original was
> > terminated with a "\0" character. Direct equality testing with ==
> > still works as expected. I can reproduce this as follows:
> >
> >
> > > x <- "foo"
> > > y <- c(charToRaw("foo"),as.raw(0))
> > > z <- rawToChar(y)
> > > z==x
> > >
> > [1] TRUE
> >
> > > z=="foo"
> > >
> > [1] TRUE
> >
> > > z %in% c("foo","bar")
> > >
> > [1] FALSE
> >
> > > z %in% c("foo","bar","foo\0")
> > >
> > [1] FALSE
> >
> > But without the nul character it works fine:
> >
> >
> > > zz <- rawToChar(charToRaw("foo"))
> > > zz %in% c("foo","bar")
> > >
> > [1] TRUE
> >
> > I don't see anything about this in the latest NEWS, but is this
> > expected behaviour? Or is it, as I suspect, a bug? This seems to be
> > new to R 2.7.0, as I said.
> >
>
>  And so is the comment in ?match:
>
>  Character inputs with embedded nul bytes will be truncated at the
>  first nul.
>
>  The bug is in the documentation here -- this was intentional.
>
>  As support for embedded nuls in character strings is being removed in R
> 2.8.0, you should not rely on this.
>

Thanks for the reply, but I don't see why this should make the match
fail. If "foo\0" gets truncated to "foo", then surely there's no
question that match("foo\0","foo") should produce "1" (which it does
if you use the literals, but not if it came out of rawToChar)?

Also, ?'==' seems to contain a similar comment:

 When comparisons are made between character strings, parts of the
 strings after embedded 'nul' characters are ignored.

So why are the results different? I would expect 'z=="foo"' and 'z
%in% "foo"' to both return TRUE, but the second returns FALSE.

Jon



[Rd] R --interactive and readline() creates infinite loop

2008-09-25 Thread Jon Clayden

Dear all,

I have asked before, in R-help [1], about a way to create an  
interactive session in which commands are taken from a file or  
standard input - like R CMD BATCH but additionally allowing user input  
- but there was no response to that question, and the workarounds that  
I have found (using "expect", creating a temporary .Rprofile) are ugly  
and problematic.


With the appearance of the --interactive flag in R 2.7.0 I thought  
this might become possible, but it not only does not behave as I would  
expect, it appears to go into an infinite loop, and uses 100% CPU  
until killed.


$ echo 'print(readline("Input:"))' | R --no-save --quiet
> print(readline("Input:"))
Input:[1] ""
>
[no interactivity]

$ echo 'print(readline("Input:"))' | R --no-save --quiet --interactive
[no response at all]

This behaviour remains in the latest alphas of R 2.8.0. My platform is  
Mac OS X.5.5 on Intel Core 2 Duo.


I assume, given this outcome, that this is not the intended use of
--interactive, but I still wonder if there is any way to achieve an
interactive session based on a predefined set of commands without
writing a completely new front-end (overkill, surely?).


Any guidance would be appreciated.

Regards,
Jon

[1] http://finzi.psych.upenn.edu/R/Rhelp02a/archive/117412.html


--
Jonathan D. Clayden, Ph.D.
Research Fellow
Radiology and Physics Unit
UCL Institute of Child Health
30 Guilford Street
LONDON  WC1N 1EH
United Kingdom

t | +44 (0)20 7905 2708
f | +44 (0)20 7905 2358
w | www.homepages.ucl.ac.uk/~sejjjd2/



Re: [Rd] R --interactive and readline() creates infinite loop

2008-09-25 Thread Jon Clayden
Many thanks for the suggestions.

The use case I had in mind is more along the lines of what Peter
Dalgaard mentioned, and his solution looks ideal. I know in advance
exactly what code I want to run, but some of that code may require
user input.

Jon

2008/9/25 John Chambers <[EMAIL PROTECTED]>:
> My application, at least, wanted to show (my class) individual commands
> from the file and then optionally insert some typed commands before
> going on to the next part of the source file. As far as I can see, the
> piped shell command approach will have to treat the whole file at one time.
>
> John
>
> Peter Dalgaard wrote:
>> One canonical way of doing it is:
>>
>> (echo 'print(readline("Input:"))'; cat -) | R --interactive --no-save
>>
>> (you don't want to leave out --no-save )
>>
>>
>> John Chambers wrote:
>>
>>> For an alternative approach to your original goal, take a look at
>>> demoSource() in the SoDA package from CRAN.  It's a bit tedious to set
>>> up (see the Details section of the help file) but uses standard R
>>> sessions to mix lines from a demo file and interactive input.
>>>
>>> John
>>>
>>> Jon Clayden wrote:
>>>
>>>> Dear all,
>>>>
>>>> I have asked before, in R-help [1], about a way to create an
>>>> interactive session in which commands are taken from a file or
>>>> standard input - like R CMD BATCH but additionally allowing user
>>>> input - but there was no response to that question, and the
>>>> workarounds that I have found (using "expect", creating a temporary
>>>> .Rprofile) are ugly and problematic.
>>>>
>>>> With the appearance of the --interactive flag in R 2.7.0 I thought
>>>> this might become possible, but it not only does not behave as I
>>>> would expect, it appears to go into an infinite loop, and uses 100%
>>>> CPU until killed.
>>>>
>>>> $ echo 'print(readline("Input:"))' | R --no-save --quiet
>>>>
>>>>> print(readline("Input:"))
>>>>>
>>>> Input:[1] ""
>>>>
>>>> [no interactivity]
>>>>
>>>> $ echo 'print(readline("Input:"))' | R --no-save --quiet --interactive
>>>> [no response at all]
>>>>
>>>> This behaviour remains in the latest alphas of R 2.8.0. My platform
>>>> is Mac OS X.5.5 on Intel Core 2 Duo.
>>>>
>>>> I assume, given this outcome, that this is not the intended use of
>>>> --interactive, but I still wonder if there is any way to achieve an
>>>> interactive session based on a predefined set of commands without
>>>> writing a completely new front-end (overkill, surely?).
>>>>
>>>> Any guidance would be appreciated.
>>>>
>>>> Regards,
>>>> Jon
>>>>
>>>> [1] http://finzi.psych.upenn.edu/R/Rhelp02a/archive/117412.html
>>>>
>>>>
>>>> --
>>>> Jonathan D. Clayden, Ph.D.
>>>> Research Fellow
>>>> Radiology and Physics Unit
>>>> UCL Institute of Child Health
>>>> 30 Guilford Street
>>>> LONDON  WC1N 1EH
>>>> United Kingdom
>>>>
>>>> t | +44 (0)20 7905 2708
>>>> f | +44 (0)20 7905 2358
>>>> w | www.homepages.ucl.ac.uk/~sejjjd2/
>>>>
>>>> __
>>>> R-devel@r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>>
>>>>
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>>
>>
>>
>>
>
>[[alternative HTML version deleted]]
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>



Re: [Rd] R --interactive and readline() creates infinite loop

2008-09-26 Thread Jon Clayden
2008/9/25 Peter Dalgaard <[EMAIL PROTECTED]>:
> John Chambers wrote:
>>
>> My application, at least, wanted to show (my class) individual commands
>> from the file and then optionally insert some typed commands before going on
>> to the next part of the source file. As far as I can see, the piped shell
>> command approach will have to treat the whole file at one time.
>
> Hmmno... You can "cat -" multiple times if you want, terminating each with
> ctrl-D. That's not to say that it is the optimal solution though. Echoing
> the non-keyboard input seems a bit tricky, for instance.

Yes, looking more into it, the fact that everything gets echoed this
way is less than ideal. And for some reason, presumably to do with the
details of the behaviour of "cat", if I do

(echo 'print(readline("Input:"));q()'; cat -) | R --interactive --no-save

I need an extra newline after giving the input before I get returned
to the shell, even though R seems to have quit already. So I don't
think this is the answer. The "demoSource" approach is interesting,
but I think running two R sessions is also going to be problematic for
my application.

It really would be nice if R could do this kind of thing itself. I
would have thought it would make creating some simple "pass-through"
front-ends (like mine ;) ) very easy. I have very little knowledge of
the R source though, so I acknowledge that it may not be an easy task.

Jon



[Rd] Capturing all warnings (with messages)

2009-02-04 Thread Jon Clayden

Dear all,

For an open-source project that I'm working on (1), which uses R for  
all its heavy lifting but includes a wrapper shell script, I was  
hoping to find a way to capture all warnings (and, in fact, errors  
too), and handle them in my own way. I realise I can do this for a  
single expression using something like:


> f <- function(w) print(w$message)
> withCallingHandlers(warning("Test"),warning=f)
[1] "Test"
Warning message:
In withCallingHandlers(warning("Test"), warning = f) : Test

But I would like to capture all warnings, globally. The  
"warning.expression" option doesn't seem to allow an argument, and I  
can't seem to use "last.warning" to get at the message either:


> g <- function() print(last.warning$message)
> options(warning.expression=quote(g()))
> warning("Test2")
NULL

Could anyone tell me whether there's a way to do this, please? An old  
thread on this topic seemed to go unresolved (2), and I've skimmed  
RNEWS and I don't see anything about this since then.


> sessionInfo()
R version 2.8.1 (2008-12-22)
i386-apple-darwin8.11.1

locale:
en_GB.UTF-8/en_US.UTF-8/C/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  splines   methods
[8] base

other attached packages:
[1] tractor.session_1.0.0   tractor.base_1.0.3  tractor.nt_1.0.2

loaded via a namespace (and not attached):
[1] tools_2.8.1

Regards,
Jon


(1) http://code.google.com/p/tractor/
(2) http://finzi.psych.upenn.edu/R/Rhelp02/archive/61872.html


--
Jonathan D. Clayden, Ph.D.
Research Fellow
Radiology and Physics Unit
UCL Institute of Child Health
30 Guilford Street
LONDON  WC1N 1EH
United Kingdom

t | +44 (0)20 7905 2708
f | +44 (0)20 7905 2358
w | www.homepages.ucl.ac.uk/~sejjjd2/



Re: [Rd] Capturing all warnings (with messages)

2009-02-04 Thread Jon Clayden
Dear Jeff,

Many thanks for the suggestion, but I don't think tryCatch() is the
answer, mainly because it causes code to stop executing when a warning
is signalled:

> f <- function(w) print(w$message)
> tryCatch({warning("Test"); print(3)},warning=f)
[1] "Test"

(The "print(3)" call is not run.) In this regard,
withCallingHandlers() is preferable. But either way it is untidy, at
best, to routinely put code inside one of these functions when I know
I always want to handle warnings my way. If I could set an appropriate
option when a package is loaded, or in an .Rprofile, then I should
never need to worry about whether a bit of code might generate
warnings and so should be wrapped.

Regards,
Jon

2009/2/4 Jeffrey Horner :
> Jon Clayden wrote on 02/04/2009 06:59 AM:
>>
>> Dear all,
>>
>> For an open-source project that I'm working on (1), which uses R for all
>> its heavy lifting but includes a wrapper shell script, I was hoping to find
>> a way to capture all warnings (and, in fact, errors too), and handle them in
>> my own way. I realise I can do this for a single expression using something
>> like:
>>
>>  > f <- function(w) print(w$message)
>>  > withCallingHandlers(warning("Test"),warning=f)
>> [1] "Test"
>> Warning message:
>> In withCallingHandlers(warning("Test"), warning = f) : Test
>>
>> But I would like to capture all warnings, globally. The
>> "warning.expression" option doesn't seem to allow an argument, and I can't
>> seem to use "last.warning" to get at the message either:
>>
>>  > g <- function() print(last.warning$message)
>>  > options(warning.expression=quote(g()))
>>  > warning("Test2")
>> NULL
>>
>> Could anyone tell me whether there's a way to do this, please? An old
>> thread on this topic seemed to go unresolved (2), and I've skimmed RNEWS and
>> I don't see anything about this since then.
>
> In fact, the thread did have the answer: tryCatch(). The help page is a bear
> to read and comprehend, but if you do invest the time it should convince you
> that you will want to use it. I find that I have to read and reread many
> sections of R documentation before I can reconcile what I want to know with
> what the authors are trying to tell me.
>
> I don't comprehend everything about R's condition system, but let me see if
> I can convince you that you need tryCatch() to do what you want.
>
> Consider:
>
> x <- function() warning("warning message")
> y <- function() call_unknown_fun()
> z <- function() message('message message')
>
> Each of these functions signal conditions of a particular condition class:
> simpleWarning, simpleError, and simpleMessage, respectively.
>
> w <- function(e) str(e)
>
> I'm going to use w  to trap the simpleWarning condition:
>
>> tryCatch(x(),simpleWarning=w)
> List of 2
>  $ message: chr "warning message"
>  $ call   : language x()
>  - attr(*, "class")= chr [1:3] "simpleWarning" "warning" "condition"
>
> So tryCatch returned a list with two elements, the message and the call that
> signaled the condition. In fact the list is actually an S3 object of class
> simpleWarning, which inherits from warning and condition. Reading the help
> page for tryCatch(), I can actually do this:
>
>> tryCatch(x(),condition=w)
> List of 2
>  $ message: chr "warning message"
>  $ call   : language x()
>  - attr(*, "class")= chr [1:3] "simpleWarning" "warning" "condition"
>
> since simpleWarning inherits from condition. And in fact I can use the
> condition class to trap everything I want.
>
>> tryCatch(y(),condition=w)
> List of 2
>  $ message: chr "could not find function \"call_unknown_fun\""
>  $ call   : language y()
>  - attr(*, "class")= chr [1:3] "simpleError" "error" "condition"
>
>> tryCatch(z(),condition=w)
> List of 2
>  $ message: chr "message message\n"
>  $ call   : language message("message message")
>  - attr(*, "class")= chr [1:3] "condition" "message" "simpleMessage"
>
> (Side note: is the class hierarchy actually correct for this simpleMessage
> object?)
>
> So in summary, wrap every R expression you want to run within a single
> tryCatch() call and trap all conditions with one handler for the abstract
> class named 'condition'. I think that's what you want...
>
> Jeff
>

Re: [Rd] Capturing all warnings (with messages)

2009-02-04 Thread Jon Clayden
Jeff, Hadley,

Many thanks for your responses. The eval.with.details package sounds
interesting and I'll certainly take a closer look, but it still seems
to me that these approaches are focussed on trapping warnings within
specific snippets of code rather than changing the way all warnings
(including those in standard packages) are reported.

This ability would surely be useful anytime that you wish to change
the reporting of warnings from the default. Say, for example, that you
wanted to include a timestamp with each warning message. You'd do it,
I would expect, by writing a function that checks the time and formats
the message appropriately. This is the kind of thing I'm after -- I
hope this clarifies things a bit more.

The warning.expression option *appears* to provide a way to do what I
want, but because the warning is not passed to the expression (or so
it seems), and last.warning is not set before the expression is
evaluated, the expression can only know that *some* warning condition
has been raised, not *which* one. Perhaps there is a reason that
last.warning cannot be set first (?), but this limits the usefulness
of the option.

Jon
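
For a wrapped expression (though not yet globally), the timestamp idea
can be sketched with withCallingHandlers(), which, unlike tryCatch(),
resumes evaluation after each handler runs; the function name is my
own illustration:

```r
timestamped <- function(expr) {
  withCallingHandlers(expr, warning = function(w) {
    message(format(Sys.time(), "%H:%M:%S"),
            " WARNING: ", conditionMessage(w))
    invokeRestart("muffleWarning")  # suppress R's default warning report
  })
}

# Both warnings are reported with timestamps and evaluation continues:
timestamped({ warning("first"); warning("second"); sum(1:3) })
```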

2009/2/4 hadley wickham :
> Hi Jon,
>
> I have an in-development package that attempts to do this.  It's
> called eval.with.details and is available from
> http://github.com/hadley/eval.with.details.  As you might guess, it's
> a version of eval that captures all details like messages, warnings,
> errors and output so you can do whatever you want with them.  It
> captures them in the way Jeff Horner describes - but there are a lot
> of fiddly details to get right.
>
> Unfortunately there isn't any documentation yet, but the majority of
> what you're interested in is present in eval.r.  The code has been
> fairly well tested - I'm using it in my own implementation of a sweave
> like system.
>
> Hadley
>
> On Wed, Feb 4, 2009 at 6:59 AM, Jon Clayden  wrote:
>> Dear all,
>>
>> For an open-source project that I'm working on (1), which uses R for all its
>> heavy lifting but includes a wrapper shell script, I was hoping to find a
>> way to capture all warnings (and, in fact, errors too), and handle them in
>> my own way. I realise I can do this for a single expression using something
>> like:
>>
>>> f <- function(w) print(w$message)
>>> withCallingHandlers(warning("Test"),warning=f)
>> [1] "Test"
>> Warning message:
>> In withCallingHandlers(warning("Test"), warning = f) : Test
>>
>> But I would like to capture all warnings, globally. The "warning.expression"
>> option doesn't seem to allow an argument, and I can't seem to use
>> "last.warning" to get at the message either:
>>
>>> g <- function() print(last.warning$message)
>>> options(warning.expression=quote(g()))
>>> warning("Test2")
>> NULL
>>
>> Could anyone tell me whether there's a way to do this, please? An old thread
>> on this topic seemed to go unresolved (2), and I've skimmed RNEWS and I
>> don't see anything about this since then.
>>
>>> sessionInfo()
>> R version 2.8.1 (2008-12-22)
>> i386-apple-darwin8.11.1
>>
>> locale:
>> en_GB.UTF-8/en_US.UTF-8/C/C/en_GB.UTF-8/en_GB.UTF-8
>>
>> attached base packages:
>> [1] stats graphics  grDevices utils datasets  splines   methods
>> [8] base
>>
>> other attached packages:
>> [1] tractor.session_1.0.0   tractor.base_1.0.3  tractor.nt_1.0.2
>>
>> loaded via a namespace (and not attached):
>> [1] tools_2.8.1
>>
>> Regards,
>> Jon
>>
>>
>> (1) http://code.google.com/p/tractor/
>> (2) http://finzi.psych.upenn.edu/R/Rhelp02/archive/61872.html
>>
>>
>> --
>> Jonathan D. Clayden, Ph.D.
>> Research Fellow
>> Radiology and Physics Unit
>> UCL Institute of Child Health
>> 30 Guilford Street
>> LONDON  WC1N 1EH
>> United Kingdom
>>
>> t | +44 (0)20 7905 2708
>> f | +44 (0)20 7905 2358
>> w | www.homepages.ucl.ac.uk/~sejjjd2/
>>
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>
>
>
>
> --
> http://had.co.nz/
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>
