[Rd] --as-cran error

2012-05-02 Thread Mauricio Zambrano-Bigiarini

Dear List,

While using the --as-cran option for checking one of my packages:

R CMD check --as-cran hydroGOF_0.3-3.tar.gz


I got the following error message:

pkgname <- "hydroGOF"
> source(file.path(R.home("share"), "R", "examples-header.R"))
> options(warn = 1)
> library('hydroGOF')
Error in loadNamespace(i[[1L]], c(lib.loc, .libPaths())) :
  there is no package called ‘class’

However, I don't get any error message when the checking is done without 
the --as-cran option.


Could somebody give me a hint about how to solve this error before
submission to CRAN?



sessionInfo()
R version 2.15.0 (2012-03-30)
Platform: x86_64-redhat-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_GB.utf8       LC_NUMERIC=C
 [3] LC_TIME=en_GB.utf8        LC_COLLATE=en_GB.utf8
 [5] LC_MONETARY=en_GB.utf8    LC_MESSAGES=en_GB.utf8
 [7] LC_PAPER=C                LC_NAME=C
 [9] LC_ADDRESS=C              LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.utf8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base


Thanks in advance,

Mauricio Zambrano-Bigiarini
--

Water Resources Unit
Institute for Environment and Sustainability (IES)
Joint Research Centre (JRC), European Commission
webinfo: http://floods.jrc.ec.europa.eu/

DISCLAIMER:
"The views expressed are purely those of the writer
and may not in any circumstances be regarded as
stating an official position of the European Commission"

Linux user #454569 -- Ubuntu user #17469

"A strong man and a waterfall always
channel their own path." (Unknown)

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] --as-cran error

2012-05-02 Thread Duncan Murdoch

On 12-05-02 6:06 AM, Mauricio Zambrano-Bigiarini wrote:

Dear List,

While using the --as-cran option for checking one of my packages:

R CMD check --as-cran hydroGOF_0.3-3.tar.gz


I got the following error message:

pkgname<- "hydroGOF"
  >  source(file.path(R.home("share"), "R", "examples-header.R"))
  >  options(warn = 1)
  >  library('hydroGOF')
Error in loadNamespace(i[[1L]], c(lib.loc, .libPaths())) :
there is no package called ‘class’

However, I don't get any error message when the checking is done without
the --as-cran option.

Could somebody give me a hint about how to solve this error before
submission to CRAN?



There was a bug in 2.15.0:  if your package used a package that used a 
recommended package (where "used" means listed as Depends, Suggests, 
Imports...), then you could get this error.


R-patched has this fixed, so you could update to that.  If you don't 
want to update, the workaround is to list "class" explicitly as a 
dependency of your package.
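
In DESCRIPTION terms, the workaround amounts to adding 'class' to an
existing dependency field of the package (a sketch; whether it belongs in
Depends, Imports, or Suggests depends on how the intermediate package
actually uses it):

```
Depends: class
```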


Duncan Murdoch




sessionInfo()
R version 2.15.0 (2012-03-30)
Platform: x86_64-redhat-linux-gnu (64-bit)

locale:
   [1] LC_CTYPE=en_GB.utf8       LC_NUMERIC=C
   [3] LC_TIME=en_GB.utf8        LC_COLLATE=en_GB.utf8
   [5] LC_MONETARY=en_GB.utf8    LC_MESSAGES=en_GB.utf8
   [7] LC_PAPER=C                LC_NAME=C
   [9] LC_ADDRESS=C              LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_GB.utf8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base


Thanks in advance,

Mauricio Zambrano-Bigiarini




[Rd] Decompressing raw vectors in memory

2012-05-02 Thread Hadley Wickham
Hi all,

I'm struggling to decompress a gzip'd raw vector in memory:

content <- readBin("http://httpbin.org/gzip", "raw", 1000)

memDecompress(content, type = "gzip")
# Error in memDecompress(content, type = "gzip") :
#  internal error -3 in memDecompress(2)

I'm reasonably certain that the file is correctly compressed, because
if I save it out to a file, I can read the uncompressed data:

tmp <- tempfile()
writeBin(content, tmp)
readLines(tmp)

So that suggests I'm using memDecompress incorrectly.  Any hints?

Thanks!

Hadley

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/



Re: [Rd] The constant part of the log-likelihood in StructTS

2012-05-02 Thread Ravi Varadhan
Comparing such disparate, non-nested models can be quite problematic.  I am not 
sure what AIC/BIC comparisons mean in such cases.  The issue of different 
constants should be the least of your worries.

Ravi

-Original Message-
From: r-devel-boun...@r-project.org [mailto:r-devel-boun...@r-project.org] On 
Behalf Of Jouni Helske
Sent: Tuesday, May 01, 2012 2:17 PM
To: r-devel@r-project.org
Subject: Re: [Rd] The constant part of the log-likelihood in StructTS

Ok, it seems that R's AIC and BIC functions warn about different constants, so
that's probably enough. The constants are not irrelevant, though: if you compute
the log-likelihood of one model using StructTS, then fit an alternative model
using other functions such as arima(), which do take account of the constant
term, and use those log-likelihoods to compute, for example, BIC, you get wrong
results when checking which model gives the lower BIC value. I hadn't thought
about it before; I'll have to be more careful in future when checking results
from different packages.
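
Base R makes the size of the discrepancy easy to check (a sketch; the
-367.5 and -645 figures are the ones quoted in this thread, not new results):

```r
# StructTS adds +n*log(2*pi) to the profile likelihood, where the Gaussian
# log-likelihood calls for -0.5*n*log(2*pi); the difference is
# 1.5*n*log(2*pi).
fit <- StructTS(Nile, type = "level")
n   <- sum(!is.na(Nile))            # 100 observations, none missing
fit$loglik                          # about -367.5, as in the original post
fit$loglik - 1.5 * n * log(2 * pi)  # about -645, matching KFAS/FKF
```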

Jouni


On Tue, May 1, 2012 at 4:51 PM, Ravi Varadhan  wrote:

> This is not a problem at all.  The log likelihood function is a 
> function of the model parameters and the data, but it is defined up to 
> an additive arbitrary constant, i.e. L(\theta) and L(\theta) + k are 
> completely equivalent, for any k. This does not affect model 
> comparisons or hypothesis tests.
>
> Ravi
> 
> From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on 
> behalf of Jouni Helske [jounihel...@gmail.com]
> Sent: Monday, April 30, 2012 7:37 AM
> To: r-devel@r-project.org
> Subject: [Rd] The constant part of the log-likelihood in StructTS
>
> Dear all,
>
> I'd like to discuss about a possible bug in function StructTS of stats 
> package. It seems that the function returns wrong value of the 
> log-likelihood, as the added constant to the relevant part of the 
> log-likelihood is misspecified. Here is an simple example:
>
> > data(Nile)
> > fit <- StructTS(Nile, type = "level")
> > fit$loglik
> [1] -367.5194
>
> When computing the log-likelihood with other packages such as KFAS and 
> FKF, the loglikelihood value is around -645.
>
> For the local level model, the log-likelihood is defined by
> -0.5*n*log(2*pi) - 0.5*sum(log(F_t) + v_t^2/F_t) (see, for example,
> Durbin and Koopman (2001, page 30)). But in StructTS, the likelihood is
> computed like this:
>
>
> loglik <- -length(y) * res$value + length(y) * log(2 * pi),
>
> where the first part coincides with the last part of the definition, 
> but the constant part has wrong sign and it is not multiplied by 0.5. 
> Also in case of missing observations, I think there should be 
> sum(!is.na(y)) instead of length(y) in the constant term, as the 
> likelihood is only computed for those y which are observed.
>
> This does not affect in estimation of model parameters, but it could 
> have effects in model comparison or some other cases.
>
> Is there some reason for this kind of constant, or is it just a bug?
>
> Best regards,
>
> Jouni Helske
> PhD student in Statistics
> University of Jyväskylä
> Finland
>



Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Prof Brian Ripley

On 02/05/2012 14:24, Hadley Wickham wrote:

Hi all,

I'm struggling to decompress a gzip'd raw vector in memory:

content <- readBin("http://httpbin.org/gzip", "raw", 1000)

memDecompress(content, type = "gzip")
# Error in memDecompress(content, type = "gzip") :
#  internal error -3 in memDecompress(2)

I'm reasonably certain that the file is correctly compressed, because
if I save it out to a file, I can read the uncompressed data:

tmp <- tempfile()
writeBin(content, tmp)
readLines(tmp)

So that suggests I'm using memDecompress incorrectly.  Any hints?


Headers.


Thanks!

Hadley




--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK        Fax:  +44 1865 272595



Re: [Rd] The constant part of the log-likelihood in StructTS

2012-05-02 Thread Mark Leeds
Hi Ravi: As far as I know (well, really read), and Bert et al. can say
more, AIC does not require the models to be nested, as long as the sample
sizes used are the same when comparing. In some cases, say comparing an
MA(2) with an AR(1), you have to be careful with the sample size used, but
there is no nesting requirement for AIC at least, I'm pretty sure.

So I think Jouni's worry should be the different likelihoods. Jouni: there
are ways of re-writing ARIMA models as structural time series models, which
might be easier than trying to make the likelihoods consistent across
different packages/base. StructTS is really a specific DLM as far as I
understand it, so you may be better off going to the dlm package. The dlm
likelihoods still will not necessarily be consistent with arima()
likelihoods, but there are ways of transforming ARIMA models so that they
can be written as DLMs, so you can use dlm for those as well. My point is
that, if you're comparing likelihoods of different models, it's best, if
possible, to use ONE package/function so that you don't use different
likelihoods by accident.


Mark

Also, not sure why this is on R-devel?

On Wed, May 2, 2012 at 11:19 AM, Ravi Varadhan  wrote:

> Comparing such disparate, non-nested models can be quite problematic.  I
> am not sure what AIC/BIC comparisons mean in such cases.  The issue of
> different constants should be the least of your worries.
>
> Ravi
>
> -Original Message-
> From: r-devel-boun...@r-project.org [mailto:r-devel-boun...@r-project.org]
> On Behalf Of Jouni Helske
> Sent: Tuesday, May 01, 2012 2:17 PM
> To: r-devel@r-project.org
> Subject: Re: [Rd] The constant part of the log-likelihood in StructTS
>
> Ok, it seems that R's AIC and BIC functions warn about different
> constants, so that's probably enough. The constants are not irrelevant
> though, if you compute the log-likelihood of one model using StructTS, and
> then fit alternative model using other functions such as arima(), which do
> take account the constant term, and use those loglikelihoods for computing
> for example BIC, you get wrong results when checking which model gives
> lower BIC value. Hadn't thought about it before, have to be more careful in
> future when checking results from different packages etc.
>
> Jouni
>
>
> On Tue, May 1, 2012 at 4:51 PM, Ravi Varadhan  wrote:
>
> > This is not a problem at all.  The log likelihood function is a
> > function of the model parameters and the data, but it is defined up to
> > an additive arbitrary constant, i.e. L(\theta) and L(\theta) + k are
> > completely equivalent, for any k. This does not affect model
> > comparisons or hypothesis tests.
> >
> > Ravi
> > 
> > From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on
> > behalf of Jouni Helske [jounihel...@gmail.com]
> > Sent: Monday, April 30, 2012 7:37 AM
> > To: r-devel@r-project.org
> > Subject: [Rd] The constant part of the log-likelihood in StructTS
> >
> > Dear all,
> >
> > I'd like to discuss about a possible bug in function StructTS of stats
> > package. It seems that the function returns wrong value of the
> > log-likelihood, as the added constant to the relevant part of the
> > log-likelihood is misspecified. Here is an simple example:
> >
> > > data(Nile)
> > > fit <- StructTS(Nile, type = "level")
> > > fit$loglik
> > [1] -367.5194
> >
> > When computing the log-likelihood with other packages such as KFAS and
> > FKF, the loglikelihood value is around -645.
> >
> > For the local level model, the log-likelihood is defined by
> > -0.5*n*log(2*pi) - 0.5*sum(log(F_t) + v_t^2/F_t) (see, for example,
> > Durbin and Koopman (2001, page 30)). But in StructTS, the likelihood is
> > computed like this:
> >
> > loglik <- -length(y) * res$value + length(y) * log(2 * pi),
> >
> > where the first part coincides with the last part of the definition,
> > but the constant part has wrong sign and it is not multiplied by 0.5.
> > Also in case of missing observations, I think there should be
> > sum(!is.na(y)) instead of length(y) in the constant term, as the
> > likelihood is only computed for those y which are observed.
> >
> > This does not affect in estimation of model parameters, but it could
> > have effects in model comparison or some other cases.
> >
> > Is there some reason for this kind of constant, or is it just a bug?
> >
> > Best regards,
> >
> > Jouni Helske
> > PhD student in Statistics
> > University of Jyväskylä
> > Finland
> >
>
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>




Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Hadley Wickham
>> I'm struggling to decompress a gzip'd raw vector in memory:
>>
>> content <- readBin("http://httpbin.org/gzip", "raw", 1000)
>>
>> memDecompress(content, type = "gzip")
>> # Error in memDecompress(content, type = "gzip") :
>> #  internal error -3 in memDecompress(2)
>>
>> I'm reasonably certain that the file is correctly compressed, because
>> if I save it out to a file, I can read the uncompressed data:
>>
>> tmp<- tempfile()
>> writeBin(content, tmp)
>> readLines(tmp)
>>
>> So that suggests I'm using memDecompress incorrectly.  Any hints?
>
> Headers.

Looking at http://tools.ietf.org/html/rfc1952:

* the first two bytes are id1 and id2, which are 1f 8b as expected

* the third byte is the compression: deflate (as.integer(content[3]))

* the fourth byte is the flag

  rawToBits(content[4])
  [1] 00 00 00 00 00 00 00 00

  which indicates no extra header fields are present

So the header looks ok to me (with my limited knowledge of gzip)

Stripping off the header doesn't seem to help either:

memDecompress(content[-(1:10)], type = "gzip")
# Error in memDecompress(content[-(1:10)], type = "gzip") :
#  internal error -3 in memDecompress(2)

I've read the help for memDecompress but I don't see anything there to help me.

Any more hints?

Thanks!

Hadley

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/



Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Prof Brian Ripley

On 02/05/2012 16:43, Hadley Wickham wrote:

I'm struggling to decompress a gzip'd raw vector in memory:

content <- readBin("http://httpbin.org/gzip", "raw", 1000)

memDecompress(content, type = "gzip")
# Error in memDecompress(content, type = "gzip") :
#  internal error -3 in memDecompress(2)

I'm reasonably certain that the file is correctly compressed, because
if I save it out to a file, I can read the uncompressed data:

tmp <- tempfile()
writeBin(content, tmp)
readLines(tmp)

So that suggests I'm using memDecompress incorrectly.  Any hints?


Headers.


Looking at http://tools.ietf.org/html/rfc1952:

* the first two bytes are id1 and id2, which are 1f 8b as expected

* the third byte is the compression: deflate (as.integer(content[3]))

* the fourth byte is the flag

   rawToBits(content[4])
   [1] 00 00 00 00 00 00 00 00

   which indicates no extra header fields are present

So the header looks ok to me (with my limited knowledge of gzip)

Stripping off the header doesn't seem to help either:

memDecompress(content[-(1:10)], type = "gzip")
# Error in memDecompress(content[-(1:10)], type = "gzip") :
#  internal error -3 in memDecompress(2)

I've read the help for memDecompress but I don't see anything there to help me.

Any more hints?


Well, it seems what you get there depends on the client, but I did

tystie% curl -o foo "http://httpbin.org/gzip"
tystie% file foo
foo: gzip compressed data, last modified: Wed May  2 17:06:24 2012, max 
compression


and the final part worried me: I do not know if memDecompress() knows 
about that format.  The help page does not claim it can do anything 
other than de-compress the results of memCompress() (although past 
experience has shown that it can in some cases).  gzfile() supports a 
much wider range of formats.
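
For what it's worth, one way to stay in memory despite that limitation is to
wrap the raw bytes in a connection and let gzcon() handle the RFC 1952
framing. A self-contained sketch (the gzip bytes are fabricated locally here
rather than fetched from httpbin):

```r
# Make a real gzip file and grab its raw bytes, standing in for the
# downloaded content from the thread.
tmp <- tempfile(fileext = ".gz")
writeLines(c("hello", "world"), gzfile(tmp))
content <- readBin(tmp, "raw", file.info(tmp)$size)

# Decompress in memory: gzcon() understands the full gzip file format,
# which memDecompress() may not.
con <- gzcon(rawConnection(content, "rb"))
txt <- readLines(con, warn = FALSE)
close(con)
txt
```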



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK        Fax:  +44 1865 272595



Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Hadley Wickham
> Well, it seems what you get there depends on the client, but I did
>
> tystie% curl -o foo "http://httpbin.org/gzip"
> tystie% file foo
> foo: gzip compressed data, last modified: Wed May  2 17:06:24 2012, max
> compression
>
> and the final part worried me: I do not know if memDecompress() knows about
> that format.  The help page does not claim it can do anything other than
> de-compress the results of memCompress() (although past experience has shown
> that it can in some cases).  gzfile() supports a much wider range of
> formats.

Ah, ok.  Thanks.  Then in that case it's probably just as easy to save
it to a temp file and read that.

  con <- file(tmp) # R automatically detects compression
  open(con, "rb")
  on.exit(close(con), TRUE)

  readBin(con, raw(), file.info(tmp)$size * 10)

The only challenge is figuring out what n to give readBin. Is there a
good general strategy for this?  Guess based on the file size and then
iterate until result of readBin has length less than n?

  n <- file.info(tmp)$size * 2
  content <- readBin(con, raw(),  n)
  n_read <- length(content)
  while(n_read == n) {
more <- readBin(con, raw(),  n)
content <- c(content, more)
n_read <- length(more)
  }

Which is not great style, but there shouldn't be many reads.
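
If it helps, the usual way around guessing n is to read fixed-size chunks
until readBin() returns an empty vector, collecting the chunks in a list and
combining once at the end (a sketch; the chunk size is arbitrary):

```r
# Read an open binary connection to the end without knowing its size.
# Accumulating into a list avoids the quadratic copying of repeated c().
read_all_raw <- function(con, chunk = 65536L) {
  chunks <- list()
  repeat {
    piece <- readBin(con, raw(), chunk)
    if (length(piece) == 0L) break          # EOF: readBin returns raw(0)
    chunks[[length(chunks) + 1L]] <- piece
  }
  if (length(chunks) == 0L) return(raw(0))
  unlist(chunks, use.names = FALSE)
}
```

With a helper like that, content <- read_all_raw(con) replaces the guessed-n
readBin() call.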

Hadley


-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/



Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Duncan Temple Lang
I understand the desire not to have any dependency on additional
packages, and I have no desire to engage in any "mine's better" exchanges,
so I write this just for the record.
The gunzip() function in Rcompression handles this:

> library(RCurl); library(Rcompression)
> val = getURLContent("http://httpbin.org/gzip")
> cat(gunzip(val))
{
  "origin": "24.5.119.171",
  "headers": {
"Content-Length": "",
"Host": "httpbin.org",
"Content-Type": "",
"Connection": "keep-alive",
"Accept": "*/*"
  },
  "gzipped": true,
  "method": "GET"
}


Just FWIW, as I really don't like writing to temporary files,
mostly so that we might move towards security in R.

   D.


Hadley Wickham wrote:
> > Well, it seems what you get there depends on the client, but I did
> >
> > tystie% curl -o foo "http://httpbin.org/gzip"
> > tystie% file foo
> > foo: gzip compressed data, last modified: Wed May  2 17:06:24 2012, max
> > compression
> >
> > and the final part worried me: I do not know if memDecompress() knows about
> > that format.  The help page does not claim it can do anything other than
> > de-compress the results of memCompress() (although past experience has shown
> > that it can in some cases).  gzfile() supports a much wider range of
> > formats.
> 
> Ah, ok.  Thanks.  Then in that case it's probably just as easy to save
> it to a temp file and read that.
> 
>   con <- file(tmp) # R automatically detects compression
>   open(con, "rb")
>   on.exit(close(con), TRUE)
> 
>   readBin(con, raw(), file.info(tmp)$size * 10)
> 
> The only challenge is figuring out what n to give readBin. Is there a
> good general strategy for this?  Guess based on the file size and then
> iterate until result of readBin has length less than n?
> 
>   n <- file.info(tmp)$size * 2
>   content <- readBin(con, raw(),  n)
>   n_read <- length(content)
>   while(n_read == n) {
> more <- readBin(con, raw(),  n)
> content <- c(content, more)
> n_read <- length(more)
>   }
> 
> Which is not great style, but there shouldn't be many reads.
> 
> Hadley
> 
> 
> -- 
> Assistant Professor / Dobelman Family Junior Chair
> Department of Statistics / Rice University
> http://had.co.nz/
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel




Re: [Rd] Decompressing raw vectors in memory

2012-05-02 Thread Hadley Wickham
> I understand the desire not to have any dependency on additional
> packages, and I have no desire to engage in any "mine's better" exchanges.
> So write this just for the record.
> The gunzip() function handles this.

Funnily enough I just discovered that RCurl already handles this: you
just need to set encoding = "gzip".  No extra dependencies, and yours
is better ;)

Hadley

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/



[Rd] Problem using RBloomberg blpConnect

2012-05-02 Thread wuffmeister
I am using StatET/Eclipse successfully, but RBloomberg does not want to play
ball:

> conn <- blpConnect(log.level="finest")
R version 2.14.2 (2012-02-29) 
rJava Version 0.9-3 
RBloomberg Version 0.4-151 
Java environment initialized successfully.
Looking for most recent blpapi3.jar file...
Adding C:\blp\API\APIv3\JavaAPI\v3.4.6.6\lib\blpapi3.jar to Java classpath
Error in .jcall("RJavaTools", "Z", "classHasField", x, name, static) : 
  RcallMethod: cannot determine object class

Anyone got a clue about this one? Can't seem to find the same error message
elsewhere.

--
View this message in context: 
http://r.789695.n4.nabble.com/Problem-using-RBloomberg-blpConnect-tp4603615.html
Sent from the R devel mailing list archive at Nabble.com.
