[Rd] incomplete make clean for grDevices (Windows only) (PR#8137)

2005-09-16 Thread Thomas Petzoldt
Full_Name: Thomas Petzoldt
Version: R 2.2.0 alpha
OS: Windows
Submission from: (NULL) (141.30.20.2)


Symptom:

If one moves a source tree to another drive letter, a subsequent build will
fail when compiling grDevices.

The bug is found on Windows only.

Reason:

When performing a "make clean" for the complete installation, several files (in
particular *.d) are not cleaned up.

Suggested solution: 

modify Makefile.win so that "clean" also deletes *.d (and possibly some others??)

clean:
$(RM) $(DLLNAME).dll *.a $(OBJS) $(DLLNAME).def grDevices_res.rc *.d

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R optim(method="L-BFGS-B"): unexpected behavior when working with parent environments

2019-05-06 Thread Thomas Petzoldt
It seems that this is an old bug that was previously found in some other 
packages, but at that time not in optim:


https://bugs.r-project.org/bugzilla/show_bug.cgi?id=15958

and Duncan Murdoch already posted a patch last Friday :)

Thomas

Am 06.05.2019 um 16:40 schrieb Ben Bolker:

   That's consistent/not surprising if the problem lies in the numerical
gradient calculation step ...

On 2019-05-06 10:06 a.m., Ravi Varadhan wrote:

Optim's Nelder-Mead works correctly for this example.



optim(par=10, fn=fn, method="Nelder-Mead")

x=10, ret=100.02 (memory)
x=11, ret=121 (calculate)
x=9, ret=81 (calculate)
x=8, ret=64 (calculate)
x=6, ret=36 (calculate)
x=4, ret=16 (calculate)
x=0, ret=0 (calculate)
x=-4, ret=16 (calculate)
x=-4, ret=16 (memory)
x=2, ret=4 (calculate)
x=-2, ret=4 (calculate)
x=1, ret=1 (calculate)
x=-1, ret=1 (calculate)
x=0.5, ret=0.25 (calculate)
x=-0.5, ret=0.25 (calculate)
x=0.25, ret=0.0625 (calculate)
x=-0.25, ret=0.0625 (calculate)
x=0.125, ret=0.015625 (calculate)
x=-0.125, ret=0.015625 (calculate)
x=0.0625, ret=0.00390625 (calculate)
x=-0.0625, ret=0.00390625 (calculate)
x=0.03125, ret=0.0009765625 (calculate)
x=-0.03125, ret=0.0009765625 (calculate)
x=0.015625, ret=0.0002441406 (calculate)
x=-0.015625, ret=0.0002441406 (calculate)
x=0.0078125, ret=6.103516e-05 (calculate)
x=-0.0078125, ret=6.103516e-05 (calculate)
x=0.00390625, ret=1.525879e-05 (calculate)
x=-0.00390625, ret=1.525879e-05 (calculate)
x=0.001953125, ret=3.814697e-06 (calculate)
x=-0.001953125, ret=3.814697e-06 (calculate)
x=0.0009765625, ret=9.536743e-07 (calculate)
$par
[1] 0

$value
[1] 0

$counts
function gradient
   32   NA

$convergence
[1] 0

$message
NULL





From: R-devel  on behalf of Duncan Murdoch 

Sent: Friday, May 3, 2019 8:18:44 AM
To: peter dalgaard
Cc: Florian Gerber; r-devel@r-project.org
Subject: Re: [Rd] R optim(method="L-BFGS-B"): unexpected behavior when working 
with parent environments


It looks as though this happens when calculating numerical gradients:  x
is reduced by eps, and fn is called; then x is increased by eps, and fn
is called again.  No check is made that x has other references after the
first call to fn.
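The step described above can be sketched in plain R (my own illustration, not the actual C code used by optim):

```r
# Central-difference gradient as described above: fn is evaluated at
# x + eps and x - eps.  If fn caches its previous result keyed on the
# value of x, and the caller reuses one vector for both evaluations,
# the second call can return a stale cached value and the numerical
# gradient collapses to zero.
numgrad <- function(fn, x, eps = 1e-3) {
  (fn(x + eps) - fn(x - eps)) / (2 * eps)
}
numgrad(function(x) x^2, 10)  # 20, exact for a quadratic
```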

I'll put together a patch if nobody else gets there first...

Duncan Murdoch

On 03/05/2019 7:13 a.m., peter dalgaard wrote:

Yes, I think you are right. I was at first confused by the fact that after the 
optim() call,


environment(fn)$xx

[1] 10

environment(fn)$ret

[1] 100.02

so not 9.999, but this could come from x being assigned the final value without 
calling fn.

-pd



On 3 May 2019, at 11:58, Duncan Murdoch  wrote:

Your results below make it look like a bug in optim():  it is not duplicating a 
value when it should, so changes to x affect xx as well.

Duncan Murdoch

On 03/05/2019 4:41 a.m., Serguei Sokol wrote:

On 03/05/2019 10:31, Serguei Sokol wrote:

On 02/05/2019 21:35, Florian Gerber wrote:

Dear all,

when using optim() for a function that uses the parent environment, I
see the following unexpected behavior:

makeFn <- function(){
   xx <- ret <- NA
   fn <- function(x){
  if(!is.na(xx) && x==xx){
  cat("x=", xx, ", ret=", ret, " (memory)", fill=TRUE, sep="")
  return(ret)
  }
  xx <<- x; ret <<- sum(x^2)
  cat("x=", xx, ", ret=", ret, " (calculate)", fill=TRUE, sep="")
  ret
   }
   fn
}
fn <- makeFn()
optim(par=10, fn=fn, method="L-BFGS-B")
# x=10, ret=100 (calculate)
# x=10.001, ret=100.02 (calculate)
# x=9.999, ret=100.02 (memory)
# $par
# [1] 10
#
# $value
# [1] 100
# (...)

I would expect that optim() does more than 3 function evaluations and
that the optimization converges to 0.

Same problem with optim(par=10, fn=fn, method="BFGS").

Any ideas?

I don't have an answer, but maybe an insight. For some mysterious
reason xx is getting changed when it should not. Consider:

fn=local({n=0; xx=ret=NA; function(x) {n <<- n+1; cat(n, "in x,xx,ret=",
x, xx, ret, "\n"); if (!is.na(xx) && x==xx) ret else {xx <<- x;
ret <<- x**2; cat("out x,xx,ret=", x, xx, ret, "\n"); ret}}})

optim(par=10, fn=fn, method="L-BFGS-B")

1 in x,xx,ret= 10 NA NA
out x,xx,ret= 10 10 100
2 in x,xx,ret= 10.001 10 100
out x,xx,ret= 10.001 10.001 100.02
3 in x,xx,ret= 9.999 9.999 100.02
$par
[1] 10

$value
[1] 100

$counts
function gradient
 11

$convergence
[1] 0

$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"

At the third call, xx has value 9.999 while it should have kept the
value 10.001.


A little follow-up: if you untie the link between xx and x by replacing
the expression "xx <<- x" by "xx <<- x+0" it works as expected:
   > fn=local({n=0; xx=ret=NA; function(x) {n <<- n+1; cat(n, "in x,xx,ret=",
   x, xx, ret, "\n"); if (!is.na(xx) && x==xx) ret else {xx <<- x+0;
   ret <<- x**2; cat("out x,xx,ret=", x, xx, ret, "\n"); ret}}})
   > optim(par=10, fn=fn, method="L-BFGS-B")
1 in x,xx,ret= 10 NA NA
out x,xx,ret= 10 10 100
2 in x,xx,ret= 10.001

[Rd] package test failed on Solaris x86 -- help needed for debugging

2010-09-16 Thread Thomas Petzoldt

Dear R developers,

we currently have a 'mysterious' test problem with one package that 
successfully passed the tests on all platforms, with the only exception 
of Solaris x86, where obviously one of our help examples breaks the CRAN 
test.


As we don't own such a machine, I want to ask whether there is a 
possibility to run a few tests on such a system:


r-patched-solaris-x86

An even more recent version of R on the same OS (Solaris 10) and with 
the same compiler (Sun Studio 12u1) would also help.


Any assistance is appreciated


Thomas Petzoldt


--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] package test failed on Solaris x86 -- help needed for debugging

2010-09-16 Thread Thomas Petzoldt

On 16.09.2010 17:05, Martyn Plummer wrote:

Dear Thomas,

Is this the deSolve package?

http://www.r-project.org/nosvn/R.check/r-patched-solaris-x86/deSolve-00check.html

I can help you with that. It does pass R CMD check on my OpenSolaris
installation, but I am getting some compiler warnings. I will send you
details.

Martyn


You are right, and there are many things that could be wrong, e.g. an 
obsolete comma in the example, the call to colorRampPalette after the 
ode.2D call, or a problem with the C code. I wonder why this problem is 
so specific, because the package runs on all eleven other platforms, 
including Solaris/Sparc.


Details about the compiler warnings are welcome.

Thomas



Re: [Rd] Summary: package test failed on Solaris x86 ...

2010-09-17 Thread Thomas Petzoldt

Dear Martyn,

many thanks for your effort. Last night we found the same, thanks to the 
kind assistance of Bill Dunlap. The most important bugs are already 
fixed; some minor fixes and the upload of a new version will follow 
soon.


Many thanks for the quick and competent assistance to Bill Dunlap, 
Matthew Doyle and you (Martyn Plummer). I've also set up a new Linux 
test system, so that next time valgrind checks can be performed before 
package upload.


Thank you!

Thomas Petzoldt



[Rd] Matrix install fails because of defunct save in require

2010-09-17 Thread Thomas Petzoldt

Dear R-Devel,

I've just tried to compile the fresh R-devel and found that the install 
of package Matrix failed:


-
** help
*** installing help indices
** building package indices ...
Error in require(Matrix, save = FALSE) :
  unused argument(s) (save = FALSE)
ERROR: installing package indices failed
-


possible reason: Matrix/data/*.R

News.Rd says:

The \code{save} argument of \code{require()} is defunct.


Thomas Petzoldt



Re: [Rd] Matrix install fails because of defunct save in require

2010-09-17 Thread Thomas Petzoldt

On 17.09.2010 19:22, Uwe Ligges wrote:



On 17.09.2010 16:04, Thomas Petzoldt wrote:

Dear R-Devel,

I've just tried to compile the fresh R-devel and found that the install
of package Matrix failed:

-
** help
*** installing help indices
** building package indices ...
Error in require(Matrix, save = FALSE) :
unused argument(s) (save = FALSE)
ERROR: installing package indices failed
-



Have you got the Matrix package from the appropriate 2.12/recommended
repository or installed via

make rsync-recommended
make recommended




In that case it works for me.

Uwe


Yes, I did it this way, but did you use an svn version before 52932 or a 
version equal to or newer than 52940?


The svn log shows that in the meantime Brian Ripley added a workaround:

Revision: 52940
Author: ripley
Date: 19:31:48, Friday, 17 September 2010
Message:
keep dummy require(save=FALSE) for now

Modified : /trunk/doc/NEWS.Rd
Modified : /trunk/src/library/base/R/library.R
Modified : /trunk/src/library/base/man/library.Rd


It is solved for now.

Thanks, Thomas P.



Re: [Rd] Matrix install fails because of defunct save in require

2010-09-17 Thread Thomas Petzoldt

On 17.09.2010 20:04, Prof Brian Ripley wrote:

I'm not sure why end users would be using R-devel rather than R-alpha at
this point, but I have already changed R-devel to allow Matrix to get
updated before it fails.


Yes I realized the update and successfully recompiled it. Many thanks.

"End users" or package developers want to keep their own packages 
compatible with future versions, so maintaining svn syncs is much more 
efficient than downloading snapshots. In the current case it would of 
course have been much easier for me to go back to an older svn release 
(as I sometimes do). However, I felt responsible for reporting issues as 
a contribution to the open source development process.


O.K., I'll wait a little bit longer in the future and many thanks for 
developing this great software.


ThPe



Re: [Rd] CRAN packages maintained by you

2016-09-02 Thread Thomas Petzoldt

Hi,

I have the same problem and, at first look, the issues reported by the 
CRAN checks seemed easy to fix. However, after checking again locally 
with GCC 4.9.3 (Windows, Rtools 3.4) and on 
http://win-builder.r-project.org, even more issues are reported, 
especially for legacy Fortran (mainly Roger's #2 and #3), but also


"warning: ISO C forbids conversion of object pointer to function pointer 
type"


The latter results from using pointers returned by R_ExternalPtrAddr() 
for calling user-defined functions in DLLs, cf. the following thread 
from the very beginning: 
https://stat.ethz.ch/pipermail/r-devel/2004-September/030792.html


What are we now expected to do?

1. Is it really the intention to start a complete rewrite of all legacy 
Fortran code?


2. Is there now a better way for calling user functions than 
R_ExternalPtrAddr()?



Many thanks for clarification,

Thomas


Am 28.08.2016 um 23:48 schrieb Roger Koenker:

Hi Kurt,

I have started to look into this, and I need some guidance about how to
prioritize my repairs.  There are basically 4 categories of warnings from
gfortran’s pedantic critique of my packages:

1.  Some errant tab characters it doesn’t like,
2.  Too many or too few continue statements
3.  Horrible (and obsolescent) arithmetic and computed gotos
4.  undeclared doubles and dubious conversions

The last category seems relatively easy to fix and is potentially
important, but the others seem more difficult to fix and altogether
less important.  The goto issues are all in code that has been written
long ago by others and imported, e.g. Peyton and Ng’s cholesky.f.
I’m very reluctant to mess with any of those gotos.  The fact that
they were declared obsolete long ago doesn’t mean that gfortran
has any intention of not supporting these constructs in the future,
does it?

Before devoting more time and energy, which is in short supply
lately, I like to hear what others are thinking/doing about all this,
so I’ll copy this to r-devel.

All the best,
Roger

url:   www.econ.uiuc.edu/~roger   Roger Koenker
email: rkoen...@uiuc.edu          Department of Economics
vox:   217-333-4558               University of Illinois
fax:   217-244-6678               Urbana, IL 61801



On Aug 28, 2016, at 2:36 AM, Kurt Hornik  wrote:


Dear maintainers,

This concerns the CRAN packages





Using gfortran with options -Wall -pedantic to compile your package
Fortran code finds important problems, see your package check pages for
more information.

Can you please fix these problems as quickly as possible?

Best
-k






--
Dr. Thomas Petzoldt
Technische Universitaet Dresden
Faculty of Environmental Sciences
Institute of Hydrobiology
01062 Dresden, Germany

Tel.: +49 351 463 34954
Fax:  +49 351 463 37108
E-Mail: thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt

-- limnology and ecological modelling --


Re: [Rd] CRAN packages maintained by you

2016-09-02 Thread Thomas Petzoldt

Am 02.09.2016 um 16:02 schrieb Dirk Eddelbuettel:


On 2 September 2016 at 14:54, Thomas Petzoldt wrote:
| Hi,
|
| I have the same problem and, at a first look, the issues reported by the
| CRAN checks seemed easy to fix. However, after checking it again locally
| and on http://win-builder.r-project.org it appeared that GCC 4.9.3
| (Windows, Rtools 3.4), same also on win-builder reports even more
| issues, especially legacy Fortran (mainly Roger's #2 and #3), but also
|
| "warning: ISO C forbids conversion of object pointer to function pointer
| type"
|
| The latter results from using pointers returned by R_ExternalPtrAddr()
| for calling user-defined functions in DLLs, cf. the following thread
| from the very beginning:
| https://stat.ethz.ch/pipermail/r-devel/2004-September/030792.html
|
| What is now expected to do?
|
| 1. Is it really the intention to start a complete rewrite of all legacy
| Fortran code?
|
| 2. Is there now a better way for calling user functions than
| R_ExternalPtrAddr()?

See this commit (where I apologize for referring to GitHub as the
non-canonical source, but it presents things in pretty enough manner) by
Brian Ripley just a few days ago:

  
https://github.com/wch/r-source/commit/a528a69b98d3e763c39cfabf9b4a9e398651177c

So R 3.4.0 will have R_MakeExternalPtrFn() and R_ExternalPtrAddrFn().


Thank you very much for this hint, it sounds very promising! I was 
indeed looking for something like this in the R docs+sources, but didn't 
expect that it is that new. Now I have found it also in the canonical 
svn sources ;)


I am a little bit concerned about how fast this will be enforced by 
CRAN, given backward compatibility, and whether such compiler 
idiosyncrasies are worth the effort ...


Issue #1 with the "obsolescent features" of legacy Fortran remains. 
While updating my Fedora test system, it seems that there are many other 
packages around that use this sort of old-style, and well tested (!!!), 
Fortran ...


Thomas

[...]





--
Thomas Petzoldt
thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt


[Rd] numerical issue in contour.default?

2013-09-13 Thread Thomas Petzoldt

Dear R developers,

I found a small issue while plotting contours of data containing both
"usual" and "very small" numbers. It appeared with both R 3.0.1 and
R-Devel on Windows, and I could reproduce it on Linux. Would it be
possible to solve this before the upcoming release?

Thanks a lot for developing this great software!

Thomas


Example:



set.seed(357)
z1 <- matrix(runif(100, -1e-180, 1e-180), nrow = 10)
contour(z1)# ok

z2 <- matrix(c(runif(50, -1, 1), runif(50, -1e-180, 1e-180)), nrow = 10)
contour(z2)   # Error in contour.default(z) : k != 2 or 4

contour(z2 * 1e20)        # a factor of 1e20 worked, 1e19 produced the error
contour(round(z2, 179))   # rounding to 179 digits works, but not 180



sessionInfo()

R Under development (unstable) (2013-09-11 r63910)
Platform: x86_64-w64-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=German_Germany.1252  LC_CTYPE=German_Germany.1252
[3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C
[5] LC_TIME=German_Germany.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base





--
Dr. Thomas Petzoldt
Technische Universitaet Dresden
Faculty of Environmental Sciences
Institute of Hydrobiology
01062 Dresden, Germany

E-Mail: thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt



Re: [Rd] numerical issue in contour.default?

2013-09-13 Thread Thomas Petzoldt

On 13.09.2013 16:44, Prof Brian Ripley wrote:

On 13/09/2013 15:14, Duncan Murdoch wrote:

On 13/09/2013 10:01 AM, Thomas Petzoldt wrote:

Dear R developers,

I found a small issue while plotting contours of data containing both
"usual" and "very small" numbers. It appeared with both R 3.0.1 and
R-Devel on Windows, and I could reproduce it on Linux. Would it be
possible to solve this before the upcoming release?


I don't see the error in 32 bits, but I do see it in 64 bits.  I think
it's really unlikely this will be fixed before 3.0.2, unless you send a
well tested patch in the next few days.  Code freeze is on Wednesday.


You are right, I can reproduce it only on 64 bit.


And not even then: we would not have time to do sufficiently extensive
checking.


Agreed, so I'll put a workaround in my package for now.


Reporting to bugs.r-project.org with a patch would get the process rolling.


O.K., I will report it. After a look in the sources I would guess that 
it may be in:


src/main/contour-common.h

static int ctr_intersect(double z0, double z1, double zc, double *f)
{
    if ((z0 - zc) * (z1 - zc) < 0.0) {
        *f = (zc - z0) / (z1 - z0);
        return 1;
    }
    return 0;
}


... but you are right, too many things depend on it.
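For the record, my suspicion about the failure mode (an interpretation, not a verified diagnosis): with magnitudes around 1e-180 the product in the sign test underflows to zero, so the test fails even though z0 and z1 bracket the contour level.

```r
# The sign test from ctr_intersect, applied to tiny bracketing values:
z0 <- 1e-180; z1 <- -1e-180; zc <- 0
(z0 - zc) * (z1 - zc)           # 0: the product underflows to zero
(z0 - zc) * (z1 - zc) < 0.0     # FALSE, so the crossing is missed
sign(z0 - zc) != sign(z1 - zc)  # TRUE: comparing signs instead still works
```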

Many thanks for the immediate feedback!

Thomas



[Rd] unexpected behavior of <<- in nlm (lazy?)

2014-09-02 Thread Thomas Petzoldt

Hi,

while working with closures in deSolve we found unexpected behavior
in R 3.1.0, R 3.1.1 and R-devel, while it was still as expected in
R 3.0.3. This behavior also occurs in several functions of other
packages, like nls.lm from minpack.lm, and even in nlm in base, while
some other functions worked as expected.  See the example below.

The symptom is that super assignments (<<-) of unmodified variables lead
to "references" instead of copies.

Thomas Petzoldt and Karline Soetaert



## --
## Unexpected behavior:

# R version 3.1.0 (2014-04-10) -- "Spring Dance"
# Platform: x86_64-redhat-linux-gnu (64-bit)
#
## AND
#
# R version 3.1.1 (2014-07-10) -- "Sock it to Me"
# Platform: x86_64-w64-mingw32/x64 (64-bit)
#
## AND
#
# R Under development (unstable) (2014-08-31 r66504) --
# "Unsuffered Consequences"
# Platform: i386-w64-mingw32/i386 (32-bit)


f <- function(x, a) {
cat(x[1], x.old[1], x.old1[1], x.old1 == x.old, "\n")
x.old   <<- x  # 'reference'
x.old1  <<- x * 1  # copy
res <- sum((x-a)^2)
attr(res, "gradient") <- 2*(x-a)
res
}

x.old <- x.old1 <- 0
A <- nlm(f, c(10,10), a = c(3,5))

10 0 0 TRUE
10 10 10 TRUE TRUE
10.1 10.1 10 FALSE TRUE
10 10 10.1 FALSE FALSE
-4 -4 10 FALSE FALSE
3 3 -4 FALSE FALSE


## --
## Expected behavior:
# R version 3.0.3 (2014-03-06) -- "Warm Puppy"
# Platform: x86_64-w64-mingw32/x64 (64-bit)


f <- function(x, a) {
cat(x[1], x.old[1], x.old1[1], x.old1 == x.old, "\n")
x.old   <<- x  # 'reference'
x.old1  <<- x * 1  # copy
res <- sum((x-a)^2)
attr(res, "gradient") <- 2*(x-a)
res
}

x.old <- x.old1 <- 0
A <- nlm(f, c(10,10), a = c(3,5))

10 0 0 TRUE
10 10 10 TRUE TRUE
10.1 10 10 TRUE TRUE
10 10.1 10.1 TRUE TRUE
-4 10 10 TRUE TRUE
3 -4 -4 TRUE TRUE



[Rd] R 3.0 in newsticker of German computer magazine c't

2013-04-05 Thread Thomas Petzoldt

Hi,

just a short notice for the record that the long vectors of R 3.0 have 
been recognized by the news ticker of the leading German computer 
magazine publisher 'Heise Verlag':


http://www.heise.de/newsticker/meldung/Programmiersprache-R-3-0-fuehrt-Lang-Vektoren-ein-1835822.html


Thanks to you all for your great work!


Thomas Petzoldt



[Rd] Linux distribution with gcc 4.8 and AddressSanitizer ?

2013-04-18 Thread Thomas Petzoldt

Dear R developers,

I received information from Prof. Ripley regarding a bug found with 
AddressSanitizer in one of our packages. It is now fixed, thank you for 
this information.


Now, I would like to run AddressSanitizer myself before submitting the 
patched package to CRAN.


Is there a recommendation for a suitable Linux distribution with gcc 4.8, 
ideally an ISO image or (even better) a virtual appliance for VMware or 
VirtualBox? My Debian Wheezy machines have only 4.7.2.


Thank you

Thomas Petzoldt


--
Dr. Thomas Petzoldt
Technische Universitaet Dresden
Faculty of Environmental Sciences
Institute of Hydrobiology
01062 Dresden, Germany

E-Mail: thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt



Re: [Rd] Linux distribution with gcc 4.8 and AddressSanitizer ?

2013-04-19 Thread Thomas Petzoldt

On 18.04.2013 18:05, José Matos wrote:

On Thursday 18 April 2013 17:38:06 Thomas Petzoldt wrote:

Dear R developers,

I've got information from Prof. Ripley regarding a bug found
with AddressSanitizer in one of our packages. It is now fixed, thank
you for this information.

Now, I would like to run AddressSanitizer myself before submitting
the patched package to CRAN.

Is there a recommendation of a suitable Linux distribution with gcc
4.8, ideally an ISO image or (even better) a virtual appliance for
VMware or VirtalBox? My Debian Wheezy machines have only 4.7.2.

Thank you

Thomas Petzoldt


I am not sure about all the requisites above (regarding the virtual
appliances although I know that they are available) but Fedora 19
(Alpha) that will be released today has gcc 4.8.

Even though it has the Alpha moniker, and is at the corresponding stage,
it is relatively stable and thus suitable for your requirements.

Regards,



Thank you for the hint to use Fedora 19 Alpha. I have it now running,
together with R 3.0.0 and gcc 4.8.0 20120412 (Red Hat 4.8.0-2).

Compilation and installation of packages (without ASAN) works out of 
the box.


Then I've set:

export PKG_CFLAGS="-fsanitize=address -fno-omit-frame-pointer"

... and compilation runs and I see that gcc uses the flags, but package
installation still fails:

** testing if installed package can be loaded
Error in dyn.load(file, DLLpath = DLLpath, ...) :
  unable to load shared object
'/home/user/packages/deSolve.Rcheck/deSolve/libs/deSolve.so':
  /home/user/packages/deSolve.Rcheck/deSolve/libs/deSolve.so: undefined
symbol: __asan_report_load8
Error: loading failed
Execution halted
ERROR: loading failed


I see that the address sanitizer cannot work yet (__asan_report_load8)
and that I missed something important, but what?

Thomas Petzoldt



--
Thomas Petzoldt
Technische Universitaet Dresden
Faculty of Environmental Sciences
Institute of Hydrobiology
01062 Dresden, Germany

E-Mail: thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt



Re: [Rd] Linux distribution with gcc 4.8 and AddressSanitizer -- solved

2013-04-20 Thread Thomas Petzoldt

It works!

After some hours of compilation, reading the docs, and testing,
I now have it working and was able to reproduce (and fix) the
reported error message.

The ingredients of the successful AddressSanitizer (ASAN)
setup were:

- Fedora 19 Alpha RC4 with gcc 4.8 on VirtualBox,
- manual installation of several additional libraries
  especially libasan-devel,
- setting of Makevars and a few environment variables,
- compilation of R-devel (2013-04-19) with AddressSanitizer
  (and --enable-strict-barrier)
  ==> the compilation of R itself went through
  without problems so that R runs without crash.

Finally:
- compilation and ASAN check of the affected package
  that reproduced the error message.
- bugfix and successful final test.


Maybe this was not the most parsimonious approach ;-)
but using a suitable self-compiled R seems to be unavoidable.
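A hedged sketch of the kind of settings involved (illustrative values only, not the exact Makevars and environment used above). One pitfall visible in the earlier message: the sanitizer flag must reach the link step as well, otherwise the shared object ends up with undefined __asan_* symbols such as __asan_report_load8:

```shell
# Illustrative only -- sanitizer flags for both compile and link steps
# of package code:
export PKG_CFLAGS="-fsanitize=address -fno-omit-frame-pointer"
export PKG_LIBS="-fsanitize=address"

# R itself built with sanitizer support, e.g. (hypothetical invocation):
# ./configure --enable-strict-barrier CC="gcc -fsanitize=address"

R CMD check deSolve_*.tar.gz
```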



Again, many thanks for your help and the great R system!

Thomas P.



--
Thomas Petzoldt
Technische Universitaet Dresden
Faculty of Environmental Sciences
Institute of Hydrobiology
01062 Dresden, Germany

E-Mail: thomas.petzo...@tu-dresden.de
http://tu-dresden.de/Members/thomas.petzoldt



[Rd] OOP performance, was: V2.9.0 changes

2009-07-02 Thread Thomas Petzoldt

Hi Troy,

first of all a question, what kind of ecosystem models are you
developing in R? Differential equations or individual-based?

You write that you are a frustrated Java developer in R. I have a
similar experience; however, I still like Java, and I am now happier
with R as it is much more efficient (i.e. sum(programming + runtime))
for the things I usually do: ecological data analysis and modelling.

After using functional R for quite a time, and Java in parallel,
I had the same idea: to make R more Java-like and to model ecosystems in
an object oriented manner. At that time I took a look into R.oo (thanks,
Henrik Bengtsson) and was one of the co-authors of proto. I still think
that R.oo is very good and that proto is a cool idea, but finally I
switched to the recommended S4 for my ecological simulation package.

Note also that my solution was *not* to model the ecosystems as objects
(habitat - populations - individuals), but instead to model ecological
models (equations, inputs, parameters, time steps, outputs, ...).

This works quite well with S4. A speed test (see useR!2006 poster on
http://simecol.r-forge.r-project.org/) showed that all OOP flavours had
quite comparable performance.

The only thing I have to have in mind are a few rules:

- avoid unnecessary copying of large objects. Sometimes it helps to
prefer matrices over data frames.

- use vectorization. For an individual-based model this means that one
has to re-think how to model an individual: not "many [S4] objects"
as in Java, but R structures (arrays, lists, data frames) that
vectorized functions (e.g. arithmetic or subsetting) can work with.

- avoid interpolation (i.e. approx) and if unavoidable, minimize the tables.
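The vectorization rule can be illustrated with a minimal sketch (my own example, not code from the simecol package): instead of one S4 object per individual, the population lives in a matrix, and one vectorized expression updates everybody at once.

```r
# Individuals as rows of a matrix; growth and aging are single
# vectorized operations over the whole population.
set.seed(42)
pop <- cbind(size = runif(1000, 1, 2), age = 0)
grow <- function(pop, rate = 0.1) {
  pop[, "size"] <- pop[, "size"] * (1 + rate)  # all individuals at once
  pop[, "age"]  <- pop[, "age"] + 1
  pop
}
pop <- grow(pop)
mean(pop[, "size"])  # summary statistics come for free
```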

If all these things do not help, I write core functions in C (others use
Fortran). This can be done in a mixed style and even a full C to C
communication is possible (see the deSolve documentation how to do this
with differential equation models).


Thomas P.



--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



[Rd] speedup approxfun (code suggestion)

2009-09-24 Thread Thomas Petzoldt

Dear R developers,

this e-mail concerns a code suggestion to slightly change approxfun, so 
that it is more efficient when called several times.


We are using approxfun (not approx) to update external variables 
(time-series) to current time points while integrating our ODE models.


It is not uncommon for these models to take on the order of 10^4 time 
steps, and sometimes we run a model > 1000 times, so approxfun is called 
many, many times. For such applications approxfun is a serious 
performance bottleneck, so we first tried to develop our own functions 
with less overhead.


Then, one of us (Karline Soetaert) noticed that in the C code of 
R_approx, each time the function created by approxfun is called, it 
checks the input data for NAs in x and y, as well as the validity of 
method and f.


For common data sets this takes about 40% of the computation time.

While testing is of course necessary for "approx", we think that for 
"approxfun", testing could be done only once, before the function is 
created.


Testing validity of the input only once, makes approxfun about 40-45% 
faster, e.g. for:


x <- seq(0, 1)
y <- x*2

F1<- approxfun(x, y)

system.time(
 for ( i in 1:1)
  F1(i)
)

5.50 sec for the original approxfun
2.97 sec for the patched version
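The principle can also be sketched in plain R (a hedged illustration of the idea, not the submitted patch): validate the inputs once in the factory function, and let the returned closure do only the interpolation.

```r
# Inputs are checked once, at creation time; the returned closure
# performs plain linear interpolation with no per-call validation
# (the actual patch makes the same split at the C level).
make_linfun <- function(x, y) {
  stopifnot(!anyNA(x), !anyNA(y), length(x) == length(y))  # once
  function(v) {
    i <- findInterval(v, x, all.inside = TRUE)
    y[i] + (y[i + 1] - y[i]) * (v - x[i]) / (x[i + 1] - x[i])
  }
}
f <- make_linfun(0:10, 2 * (0:10))
f(2.5)  # 5
```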


This is of course just a suggestion, but we think that a patch for 
package "stats" may be of general interest and therefore more 
appropriate than providing a modified approxfun in our own package.


The code suggestion was tested with R-devel rev. 49803 and several 
contributed packages and we found no negative side effects. Please find 
the suggested patch below and please excuse if we overlooked something.


Thanks for your consideration

Best wishes,

Karline Soetaert & Thomas Petzoldt


Index: R/approx.R
===
--- R/approx.R  (revision 49817)
+++ R/approx.R  (working copy)
@@ -114,10 +114,19 @@
 force(f)
 stopifnot(length(yleft) == 1, length(yright) == 1, length(f) == 1)
 rm(o, rule, ties)
-function(v) .C("R_approx", as.double(x), as.double(y), as.integer(n),
-  xout = as.double(v), as.integer(length(v)),
-  as.integer(method), as.double(yleft), as.double(yright),
-  as.double(f), NAOK = TRUE, PACKAGE = "stats")$xout
+
+## Changed here:
+## suggestion:
+# 1. Test input consistency once
+.C("R_approxtest",as.double(x), as.double(y), as.integer(n),
+as.integer(method), as.double(f), NAOK = TRUE,
+PACKAGE = "stats")
+
+# 2. Create and return function that does not test input validity...
+function(v) .C("R_approxfun", as.double(x), as.double(y), as.integer(n),
+xout = as.double(v), as.integer(length(v)), as.integer(method),
+as.double(yleft), as.double(yright), as.double(f), NAOK = TRUE,
+PACKAGE = "stats")$xout
 }
 
 ### This is a `variant' of  approx( method = "constant" ) :
Index: src/approx.c
===
--- src/approx.c(revision 49789)
+++ src/approx.c(working copy)
@@ -128,3 +128,44 @@
if(!ISNA(xout[i]))
xout[i] = approx1(xout[i], x, y, *nxy, &M);
 }
+
+/* Testing done only once - in a separate function */
+void R_approxtest(double *x, double *y, int *nxy,
+ int *method, double *f)
+{
+int i;
+
+switch(*method) {
+ case 1: /* linear */
+   break;
+ case 2: /* constant */
+ if(!R_FINITE(*f) || *f < 0 || *f > 1)
+ error(_("approx(): invalid f value"));
+ break;
+ default:
+ error(_("approx(): invalid interpolation method"));
+ break;
+   }
+/* check x and y for NA values */
+  for(i=0 ; i<*nxy ; i++)
+ if(ISNA(x[i]) || ISNA(y[i]))
+ error(_("approx(): attempted to interpolate NA values"));
+}
+
+/* R Frontend for Linear and Constant Interpolation, no testing */
+
+void R_approxfun(double *x, double *y, int *nxy, double *xout, int *nout,
+ int *method, double *yleft, double *yright, double *f)
+{
+int i;
+appr_meth M = {0.0, 0.0, 0.0, 0.0, 0}; /* -Wall */
+
+M.f2 = *f;
+M.f1 = 1 - *f;
+M.kind = *method;
+M.ylow = *yleft;
+M.yhigh = *yright;
+for(i=0 ; i < *nout; i++)
+ if(!ISNA(xout[i]))
+xout[i] = approx1(xout[i], x, y, *nxy, &M);
+}
Index: src/init.c
===
--- src/init.c  (revision 49789)
+++ src/init.c  (working copy)
@@ -67,6 +67,8 @@
 static R_NativePrimitiveArgType band_den_bin_t[] = {INTSXP, INTSXP, REALSXP, 
REALSXP, INTSXP};
 
 static R_NativePrimitiveArgType R_approx_t[] = {REALSXP, REALSXP, INTSXP, 
REALSXP, INTSXP, INTSXP, REALSXP, REALSXP, REALSXP};
+static R_Nat

[Rd] linking to package directories broken in R >= 2.10 beta

2009-10-17 Thread Thomas Petzoldt

Dear R developers,

some of our packages come with additional programming examples in a 
directory called "/examples" which is created from "/inst/examples".


This directory is linked from the docs (e.g. in inst/doc/index.html):


examples:
Source code of examples


Given, that we have a package "foo" this is resolved to:

file:///C:/Programme/R/R-2.9.2/library/foo/examples/

with R <= 2.9.2. With R 2.10 beta (R-beta_2009-10-16_r50118.tar.gz) and 
R-devel (svn rev. 50118) we get:


http://127.0.0.1:26383/library/foo/examples/

This is fine, but in contrast to older versions (<= 2.9.2) no automatic 
index is created for the linked directory, so we now get:


"URL /library/foo/examples/ was not found"

but linking to *individual files* (e.g. examples/example.R) works as 
expected. We can, of course, add manually maintained index files but I 
would much prefer if a default index would be created for the directory 
if no index.html is found.


I very much enjoy the new help system and would be even more happy if 
that issue could be fixed.


Thomas Petzoldt


PS: A minimal reproducible example (foo_1.0.tar.gz) can be provided by 
mail if required.


--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] linking to package directories broken in R >= 2.10 beta

2009-10-17 Thread Thomas Petzoldt

Duncan Murdoch wrote:

Thomas Petzoldt wrote:


[...]

This is fine, but in contrast to older versions (<= 2.9.2) no 
automatic index is created for the linked directory, so we now get:



"URL /library/foo/examples/ was not found"

but linking to *individual files* (e.g. examples/example.R) works as
expected. We can, of course, add manually maintained index files
but I would much prefer if a default index would be created for the
directory if no index.html is found.



By "index" in R <= 2.9.2, you mean the default directory listing 
produced by the web server, rather than something produced by R, 
right?


Yes, I mean the default directory listing produced by (most) web servers.

The R server does that now if the directory is named "doc", but not 
for an arbitrary path. We are concerned about security: any user on 
your system who can guess your port number can access your help 
system, so we want to be sure that such users can't access private 
files.



Hmm, I see, and I tend to understand that this may be an issue for 
certain multi-user systems. Looking into the svn log (and compiling R) 
it appears that the remaining possibilities were also regarded as a 
security issue and are now locked down too.


Well, I'm not yet completely convinced that this was a good idea.

1) It does not completely solve security issues; what is so different
between library/foo/doc and library/foo/examples?

2) The change will introduce additional work for package authors
that used internal links within their packages. I can, of course,
reorganize everything below doc, e.g. /library/foo/doc/examples ... but
this means that these things are even more hidden.

3) However, according to the changed R-Exts, it was obviously decided
that this was necessary, so *I* will do the required reorganization.

I hope that other package authors accept this change of the rules too.

Nevertheless, thank you very much for the new help system.

Thomas P.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] linking to package directories broken in R >= 2.10 beta

2009-10-18 Thread Thomas Petzoldt

Duncan Murdoch wrote:

[...]

The doc directory is known to be visible.  It might surprise someone if 
arbitrary directories were visible, and readable by any user.



2) The change will introduce additional work for package authors
that used internal links within their packages. I can, of course,
reorganize everything below doc, e.g. /library/foo/doc/examples ... but
this means that these things are even more hidden.


Why would someone know to look in .../examples?  Just update whatever 
hint you gave them to look there, and tell them to look in 
.../doc/examples instead.  I don't think it's likely that most people 
would discover either directory without a hint somewhere.  If they were 
looking for examples, they'd look in the documented places, the Examples 
section of man pages, or in the vignettes.



3) However, according to the changed R-Exts, it was obviously decided
that this was necessary, so *I* will do the required reorganization.


I think it was not so much a decision that this was necessary, as that 
it was prudent.


Duncan Murdoch


[...]

ok, I will agree, but let me add one final thought: what about the 
/demo directory?


Thomas P.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] linking to package directories is NOT broken in R >= 2.10 beta

2009-10-19 Thread Thomas Petzoldt

Prof Brian Ripley wrote:
When you linked to ../examples/ R was not involved, and what you are 
seeing is what your browser did with a file:// url.  Most browsers will 
support a wide range of file types, and list directories: but that is 
not something that was ever (AFAICS) documented to work.


The 'issue' is your expectations when creating your own 
inst/doc/index.html.  The only relative links that are supported by the 
help system are to R package help topics and files, to documentation 
under R.home("doc") and a limited set of files in a package's 'doc' 
directory to support its use for vignettes, including the ability to 
list 'doc' itself (if requested in a particular way).


If links to files under /example worked, it was a bug. Because of 
security concerns over traffic snooping, what you can see through the 
dynamic help system is intentionally very limited.  In fact I suspect 
they worked for you only because


(i) you installed into .Library
(ii) you had a file for which text/plain worked (and that is because 
files that might be in a vignette directory have been checked).
(iii) you fell into a code branch marked '# should not get here' in 
pre-2.10.0 (but absent in R-devel).


The good news is that if you refer to files under the installed 'doc' 
directory this should work -- subdirectory listings work now in R-devel 
and will probably be ported to 2.10.0 before release.




Many thanks for clarification and the good news, i.e. for allowing html 
links to /doc (and also to DESCRIPTION).


Let me add one additional suggestion: Yes, I know that there are certain 
related functions available (with different semantics), but what about 
allowing html links to "/demo" and to some other special files like NEWS 
and LICENSE (as found in MASS) or THANKS (like in Hmisc)?



Thanks for consideration.

Thomas Petzoldt

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] Sort order of vignettes in help index and CRAN

2010-02-01 Thread Thomas Petzoldt

Dear developers,

there seems to be an inconsistency in the sort order of package 
vignettes between (1) the "Overview of user guides and package 
vignettes" in the help index of the package itself:


FME.pdf
FMEdyna.pdf
FMEmcmc.pdf
FMEother.pdf
FMEsteady.pdf

and (2) the sort order of CRAN:

FMEdyna.pdf
FMEmcmc.pdf
FMEother.pdf
FME.pdf<--
FMEsteady.pdf


It looks like CRAN ignores the dot so that the "p" from pdf appears 
between the "o" and "s" of the other two.


It would be nice if this could be made consistent. I would prefer that 
CRAN use the \VignetteIndexEntry, but file name ordering (without the 
extension) is also ok.


Thomas Petzoldt




--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] proto and baseenv()

2010-02-25 Thread Thomas Petzoldt

On 25.02.2010 06:33, Ben wrote:

Wow, thanks for the heads-up.  That is horrible behavior.  But using
baseenv() doesn't seem like the solution either.  I'm new to proto,
but it seems like this is also a big drawback:


z<- 1
proto(baseenv(), expr={a=z})$a

Error in eval(expr, envir, enclos) : object "z" not found




I would say that this behaviour is intentional and not "horrible". proto 
objects simply do the same as ordinary functions in R, which also have 
full access to variables and functions at higher levels:


Try the following:

> y <- proto(a=2)
> y$ls()
[1] "a"


ls() is defined in package base and so would even work if you inherit 
from baseenv(), so why is it surprising that proto objects (by default) 
inherit objects from other packages and from the user workspace?
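The lookup that makes this work can be reproduced with plain environments, independently of the proto package; a minimal sketch of the mechanism:

```r
## A child environment sees bindings of its parents, just like a proto
## object with the default parent sees the user workspace.
parentEnv <- new.env()
assign("z", 1, envir = parentEnv)
child <- new.env(parent = parentEnv)
get("z", envir = child)        # 1, found via the parent chain

## With baseenv() as parent, the chain stops at base, so 'z' is invisible:
isolated <- new.env(parent = baseenv())
exists("z", envir = isolated)  # FALSE
```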



Thomas P.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] S4 initialize or "generating function"

2007-01-30 Thread Thomas Petzoldt
Hello,

apologies if I missed something well known. I'm just revising one of my
own packages and wonder whether it is still common to use "generating
functions" that have the same name as the corresponding S4 class, as
suggested by Chambers (2001), "Classes and Methods in the S Language".
-- or should one consistently use new and initialize to do such things?

Thank you

Thomas P.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] S4 initialize or "generating function"

2007-01-31 Thread Thomas Petzoldt
Dear Seth,

thank you for your suggestions. As I have some (shallow) inheritance I 
found indeed that initialize methods are superior. The reasons are, as 
you wrote, inheritance to subclasses and the possibility to use 
callNextMethod.

My code is now in fact much more compact with initialize in contrast to 
MyClass() constructors ("generating functions").

What I did was to create S4 objects that have their own function slot 
"initfunc", which is called by the initialize method. In this case the 
"cloning" behavior of initialize also makes sense: it resets the S4 
object, possibly with new random start values; see the simplified example:

setClass("bar", representation(foo="numeric", initfunc="function"))
setMethod("initialize", signature(.Object="bar"),
   function(.Object, ...) {
 .Object <- callNextMethod()
 .Object <- .Object@initfunc(.Object)
 invisible(.Object)
   }
)

foobar <- new("bar",  foo = 0,
 initfunc  = function(obj) {
   obj@foo = rnorm(1)
   invisible(obj)
 }
)
foobar@foo
foobar <- initialize(foobar)
foobar@foo

One odd thing I found was that initialize obviously does not allow 
incorporating additional named parameters which are not slots.

In essence I think that one should not use the constructor approach 
(from 2001) anymore, even if the "call is independent of the details of 
the representation".

Thank you

Thomas


Seth Falcon wrote:
 > Thomas Petzoldt <[EMAIL PROTECTED]> writes:
 >
 >> Hello,
 >>
 >> apologies if I missed something well known. I'm just revising an own
 >> package and wonder if it is still common to use "generating
 >> functions" which have the same name as the corresponding S4 class as
 >> suggested by Chambers, 2001. "Classes and Methods in the S Language".
 >>
 >> -- or should one consequently use new and initialize to do such
 >>things?
 >
 > If you have no inheritence between the classes in your system (and are
 > pretty sure you are not going to have any) then I don't think it
 > really matters whether you define MyClass() to create new instances or
 > use new("MyClass",...).
 >
 > There is considerable advantage to having your constructors be real
 > methods when you do have inheritence relations among your classes.
 > This allows you to make use of callNextMethod to reduce code
 > duplication.
 >
 > Defining an "initialize" method and using new seems to be the standard
 > and I suspect is required if you want to plug into the validObject
 > protocol.  The "initialize" setup tries to make some strange
 > assumptions about how objects should be able to be constructed (IMO) 
[*1*].
 > Furthermore, it is a bummer to lose multiple dispatch when defining
 > constructors.
 >
 > + seth
 >
 >
 > [*1*] The default initialize method interprets named arguments as
 > slots which is a reasonable default, but not always sensible.  What I
 > find quite strange is that an un-named argument is interpreted as an
 > instance that should be cloned.
 >
 >
 >


-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    [EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] S4 initialize or "generating function"

2007-01-31 Thread Thomas Petzoldt
Seth Falcon wrote:
 > Thomas Petzoldt <[EMAIL PROTECTED]> writes:
 >> One odd thing I found was that initialize does obviously not allow to
 >> incorporate additional named parameters which are not slots.
 >
 > ?!  Does for me:


Your example works, but one cannot arbitrarily mix slots, other named 
arguments and the default callNextMethod(). The documentation states:

"...Data to include in the new object. Named arguments correspond to 
slots in the class definition."


Here is my example (extended from yours):

setClass("FOO", representation(x="numeric", y="numeric"))

setMethod("initialize", "FOO",
  function(.Object, y, value) {
   callNextMethod()
  .Object@x <- value * 2
  .Object
  })

new("FOO", y=1, value=2)

An object of class "FOO"
Slot "x":
[1] 4

Slot "y":
numeric(0)

This is different from what I originally expected. In such cases one has 
to pass the arguments to callNextMethod(y = 1) explicitly.
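Under that reading of the documentation, forwarding the named argument explicitly restores the expected behaviour; a sketch, with the class renamed to FOO2 so the chunk is self-contained:

```r
library(methods)

setClass("FOO2", representation(x = "numeric", y = "numeric"))

setMethod("initialize", "FOO2",
  function(.Object, y, value) {
    ## forward the slot explicitly so the default method fills it in,
    ## and keep the result returned by callNextMethod()
    .Object <- callNextMethod(.Object, y = y)
    .Object@x <- value * 2
    .Object
  })

obj <- new("FOO2", y = 1, value = 2)
obj@y  # 1 -- now set, unlike with a bare callNextMethod()
obj@x  # 4
```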

 >> In essence I think that one should not use the constructor approach
 >> (from 2001) anymore, even if the "call is independent of the details
 >> of the representation".
 >
 > Sometimes both are useful.  You can have convenience generator
 > functions for users, but have initialize methods that get called
 > internally.  This can also be nice in that the internal code can be
 > lean, while common user-facing code can do lots of error and sanity
 > checking, etc.

I see, and for the moment I let the constructors in the package, but 
they only provide now some class specific defaults and then call the 
(inherited) initialize method via new.

Thomas

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] package check note: no visible global function definition (in functions using Tcl/Tk)

2007-06-11 Thread Thomas Petzoldt
Hello,

when testing packages under R version 2.6.0 Under development 
(unstable) in order to discover future compatibility issues, I recently 
get numerous "possible problem" notes for various packages (my own and 
other contributed ones) containing Tcl/Tk code, e.g.:


   * checking R code for possible problems ... NOTE
   sEdit : editVec : build: no visible global function
   definition for 'tclvalue'
   sEdit : editVec : reset: no visible global function
   definition for 'tclvalue<-'
   sEdit : editVec: no visible global function
   definition for 'tktoplevel'


My question:

- Is this an indication of a serious problem?
- How can one avoid (or if not possible suppress) these messages?

Thanks in advance

Thomas P.

OS: WinXP Prof., German
R 2.6.0, svn-rev. 41910, 2007-06-11



-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    [EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] package check note: no visible global function definition (in functions using Tcl/Tk)

2007-06-11 Thread Thomas Petzoldt
Dear Prof.Ripley, Dear Seth,

thank you both, including tcltk in Depends as suggested by Prof. Ripley 
immediately helped to silence the tcltk NOTEs, but Seth is also right. 
It is in fact not the ultimate solution for the Suggests case, that I 
intentionally used like in Seth's code example.
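As a side note from a later perspective: R (>= 2.15.1) added a declaration mechanism for exactly this Suggests case, utils::globalVariables(), which registers names that the codetools-based check should not flag. A sketch; this is only meaningful inside a package's R sources, and the guard keeps it harmless on older R:

```r
## In the package's R code; the names are declared, not defined:
if (getRversion() >= "2.15.1")
  utils::globalVariables(c("tclvalue", "tclvalue<-", "tktoplevel"))
```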

Thomas


Seth Falcon wrote:
> Prof Brian Ripley <[EMAIL PROTECTED]> writes:
> 
>> It seems that is happens if package tcltk is missing from the Depends: 
>> list in the DESCRIPTION file.  I just tested with Amelia and homals and 
>> that solved the various warnings in both cases.
> 
> Adding tcltk to Depends may not always be the desired solution.  If
> tcltk is already in Suggests, for example, and the intention is to
> optionally provide GUI features, then the code may be correct as-is.
> That is, codetools will issue the NOTEs if you have a function that
> looks like:
> 
>f <- function() {
>  if (require("tcltk")) {
>  someTcltkFunctionHere()
>  } else {
>  otherwiseFunction()
>  }
>}
> 
> There are a number of packages in the BioC repository that provide
> such optional features (not just for tcltk) and it would be nice to
> have a way of declaring the use such that the NOTE is silenced.
> 
> [Note 1: I don't have any ideas at the moment for how this could
> work.]
> 
> [Note 2: Despite the false-positives, I've already caught a handful of
> bugs by reading over these NOTEs and think they provide a lot of value
> to the check process]
> 
> + seth
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] silent option in nested calls to try()

2007-08-27 Thread Thomas Petzoldt
Hello,

is it *really intentional* that the "silent" option of try() applies only 
to the outer call in nested try constructs? I would assume that a 
silent try() should suppress all error messages regardless of where they 
occur, even if they are already handled with other try()'s.

The error message itself should be (and is in both cases) reported by 
the return value of try().

Thanks in advance

Thomas


## Old behavior (tested with R-2.4.1):
 >  try(try(exp(NULL)), silent=TRUE)
 >


## Current behavior (R-2.6.0 unstable, build 42641, WinXP):
 >  try(try(exp(NULL)), silent=TRUE)
Error in exp(NULL) : Non-numeric argument to mathematical function
 >



-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie
01062 Dresden
GERMANY

http://tu-dresden.de/Members/thomas.petzoldt

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] silent option in nested calls to try()

2007-08-27 Thread Thomas Petzoldt
Dear Luke,

thank you very much for your immediate answer. The problem I see is, 
however, that while one can rewrite one's outer code using tryCatch, one 
may not have control over the use of try in a given inner function.
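When the inner try() sits in code one cannot change, a pragmatic workaround is to silence error printing for the duration of the call, since try() consults the "show.error.messages" option before printing. A sketch (tryCatch remains the cleaner tool where one controls the code); the helper name quietly is invented here:

```r
## Silence all try() error reporting for a single call, restoring
## the option afterwards even on error.
quietly <- function(expr) {
  old <- options(show.error.messages = FALSE)
  on.exit(options(old))
  expr
}

res <- quietly(try(try(exp(NULL)), silent = TRUE))
inherits(res, "try-error")  # TRUE -- failure detected, nothing printed
```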

Thomas



Luke Tierney wrote:
> Yes.  If you want finer control use tryCatch.
> 
> Best,
> 
> luke
> 
> On Mon, 27 Aug 2007, Thomas Petzoldt wrote:
> 
>> Hello,
>>
>> is it *really intentional* that the "silent" option of try() does only
>> apply to the outer call in nested try constructs? I would assume that a
>> silent try() should suppress all error messages regardless where they
>> occur, even if they are already handled with other try()'s.
>>
>> The error message itself should be (and is in both cases) reported by
>> the return value of try().
>>
>> Thanks in advance
>>
>> Thomas
>>
>>
>> ## Old behavior (tested with R-2.4.1):
>> >  try(try(exp(NULL)), silent=TRUE)
>> >
>>
>>
>> ## Current behavior (R-2.6.0 unstable, build 42641, WinXP):
>> >  try(try(exp(NULL)), silent=TRUE)
>> Error in exp(NULL) : Non-numeric argument to mathematical function
>> >
>>
>>
>>
>>
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R-2.6.0 package check problems

2007-10-05 Thread Thomas Petzoldt
Robin Hankin wrote:
 > Hello
 >
 >
 > One of my packages, untb_1.3-2, passes R CMD check under
 > MacOSX (and apparently the systems used in the package check
 > summary page on CRAN) but fails with the following message on
 > R-2.6.0.tgz compiled last night on my (home) linux box.  I hasten
 > to add that I have never seen this error before on home-compiled
 > pre-releases of R-2.6.0.
 >
 > Can anyone help me understand what is going on?

Hi Robin,

congratulations on your published article about untb ;-)

One possible explanation is that your examples use random numbers, which 
may differ during the CRAN check. I had this problem with another 
package, where a "rare random number event" led to non-convergence of 
optim during the package check. You may use set.seed() as a first aid 
and then try to stabilize your algorithms.
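A minimal illustration of the first-aid approach (a hypothetical example; any stochastic computation in a package's examples works the same way):

```r
## Make a stochastic example reproducible during R CMD check:
set.seed(1)                  # fix the random stream
dat <- rnorm(100, mean = 5)
fit <- optim(par = 1, fn = function(m) sum((dat - m)^2),
             method = "BFGS")
fit$convergence              # 0: converged with this seed
```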

BTW: untb_1.3-2.tar.gz passed the check just now on my system: R 2.7.0 
Under development (unstable), svn rev 43092 (5. Oct), i386-pc-mingw32


Thomas P.


-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    [EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] experiments with slot functions and possible problems NOTE

2008-01-21 Thread Thomas Petzoldt
Hello,

first of all, thanks to LT for \pkg{codetools}. I agree that it is
indeed very useful for identifying errors and for encouraging
re-thinking of past solutions. My problem:

I want to compare different sets of related sub-functions which should
be used alternatively by the same top-level function. Sets of related
functions should be bound together (as lists) and the workspace should
be as clean as possible.

Finally, these functions are to be called by top-level functions that
work with such sets.

What's the best way to do this?

- clutter the workspace with lots of functions?
OR:
- ignore "notes about possible problems"
OR:
- a third way?

Thanks in advance

Thomas P.



An example:

##=
## 1) One possible "set of functions"
flistA <- list(
   foo = function() {
1:10
   },
   bar = function() {
 log(foo())
   }
)

## .. we may also have alternative sets,
##e.g. flistB, flistC, ... etc

## 2) Now we try to construct closures

## 2a) non-nested
makefun1 <- function(flist) {
   with(flist,
 function() foo()
   )
}

## 2b) nested call
makefun2 <- function(flist) {
   with(flist,
 function() bar()
   )
}

## 2c) or use an alternative way with a special function
##  addtoenv, suggested by Gabor Grothendieck some time ago:
addtoenv <- function(L, p = parent.frame()) {
   for(nm in names(L)) {
 assign(nm, L[[nm]], p)
 environment(p[[nm]]) <- p
   }
   L
}

makefun3 <- function(flist) {
   addtoenv(flist)
   function() bar()
}

## 3) now we create the "top-level" functions
##with one particular "set of functions"
m1 <- makefun1(flistA)
m2 <- makefun2(flistA)
m3 <- makefun3(flistA)

m1()
## this was no problem, trivial

m2()
# Error in bar() : could not find function "foo"

m3()
# works, but even in that case we get problems
# if we do this in a package:

# * checking R code for possible problems ... NOTE
# bar: no visible global function definition for 'foo'

## tested with R version 2.6.1 and
## R 2.7.0 Under development, svn rev. 44061





-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie
01062 Dresden
GERMANY

http://tu-dresden.de/Members/thomas.petzoldt

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] experiments with slot functions and possible problems NOTE

2008-01-21 Thread Thomas Petzoldt
Hello Duncan,

thank you very much for your prompt reply. If I interpret your answer
correctly, there seems to be no alternative but to either:

A) using lots of (possibly private) functions in the package or,
B) define dummies for all functions which are in such lists or,
C) ignore the NOTE, knowing that it is spurious (BTW: there are several
prominent packages on CRAN with unresolved NOTEs).

The problem with A is, that it is my intention to have different
versions of particular functions which should be organized as consistent
sets. On the other hand, version B) is similarly inelegant as it
requires lots of "obsolete code". I already had this idea but found it
awkward because it would look more like Pascal where one has separate
interface and implementation.

Duncan Murdoch wrote:
> On 1/21/2008 8:30 AM, Thomas Petzoldt wrote:
>> Hello,
>>
>> first of all, thanks to LT for \pkg{codetools}. I agree that it is
>> indeed very useful to identify errors and also to encourage re-thinking
>> past solutions. My problem:
>>
>> I want to compare different sets of related sub-functions which should
>> be used alternatively by the same top-level function. Sets of related
>> functions should be bound together (as lists) and the workspace should
>> be as clean as possible.
>>
>> Finally, these functions are to be called by top-level functions that
>> work with such sets.
>>
>> What's the best way to do this?
>>
>> - clutter the workspace with lots of functions?
>> OR:
>> - ignore "notes about possible problems"
>> OR:
>> - a third way?
>>
>> Thanks in advance
>>
>> Thomas P.

>> An example:
>>
>> flistA <- list(
>>foo = function() {
>> 1:10
>>},
>>bar = function() {
>>  log(foo())
>>}
>> )
>>

[... main part of the example deleted, see original posting (TP)]

>> m2()
>> # Error in bar() : could not find function "foo"
> 
> That's because the environment of bar was the evaluation frame in effect 
> at the time it was created, and foo wasn't in that.  bar looks in its 
> environment for non-local bindings.

Yes, of course. It simply shows that one can use non-nested functions
but not interdependent functions without using attach or environment
manipulations.

> 
> Gabor's function edits the environment.

Yes, that's the intention and it works for me, but several people call
it strange ;-)

>> m3()
>> # works, but even in that case we get problems
>> # if we do this in a package:
>>
>> # * checking R code for possible problems ... NOTE
>> # bar: no visible global function definition for 'foo'
> 
> This is a spurious error:  codetools can't follow the strange stuff 
> you're doing.

It is understandable that codetools cannot detect this. What about a
mechanism (e.g. a declaration in the NAMESPACE) where one can state that
one knows about this?

> I'd say the best approach would be to use lots of little functions, and 
> a namespace to hide them.  Then codetools will be happy.  For example,
> 
> Afoo <- function() {
>  1:10
> }
> Abar <-function() {
>   log(Afoo())
> }
> fListA <- list(foo = Afoo, bar = Abar)

Yes, that's version A) in my nomenclature.

> 
> This won't allow global references to foo or bar to escape the watchful 
> eye of codetools; if you want those, you'd do something like
> 
> foo <- function() stop("foo not initialized")
> bar <- function() stop("bar not initialized")

Yes, seems like an approach using dummy functions.

> and later have
> 
> foo <- makefun1(fListA)
> bar <- makefun2(fListA)

... but I don't understand yet why you do it this way. In my example foo
and bar are the "sub-functions" but makefun would return the toplevel
functions.

> Duncan Murdoch

Nevertheless, thanks a lot for your assistance. I'm still a little bit
optimistic that there may be a general alternative, and I will tolerate
C) (the codetools notes) for a while, but (if there is no alternative)
use a combination of solution A) (default functions) and B) (dummies).


Thomas P.

-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie
01062 Dresden
GERMANY

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] experiments with slot functions and possible problems NOTE

2008-01-21 Thread Thomas Petzoldt
Duncan Murdoch wrote:
> On 1/21/2008 9:58 AM, Thomas Petzoldt wrote:
>> Hello Duncan,
>>
>> thank you very much for your prompt reply. When I interpret your answer
>> correctly there seems to be no alternative than either:
>>
>> A) using lots of (possibly private) functions in the package or,
>> B) define dummies for all functions which are in such lists or,
>> C) ignore the NOTE, knowing that it is spurious (BTW: there are several
>> prominent packages on CRAN with unresolved NOTEs).

[...]

> There's another way, which is more R-like, if you really want to avoid
> lots of private functions.  That is to create your lists via a function,
> and define the functions in the lists locally within the creator.  That
> is, something like this:
>
> MakeListA <- function() {
>   foo <- function() {
>   1:10
>   }
>   bar <-function() {
>log(foo())
>   }
>   return(list(foo = foo, bar = bar))
> }
>
> fListA <- MakeListA()
>
> This avoids the explicit environment manipulations.  Because both foo
> and bar are defined locally within MakeListA, they share an environment
> there, and can see each other (and anything else you chose to define
> locally within MakeListA.)

[...]

Cool! What about the following (AFAIK getting environments is "legal" as
opposed to setting them):

MakeListA <- function() {
foo <- function() {
  1:10
}
bar <-function() {
  log(foo())
}
return(as.list(environment()))
}

fListA <- MakeListA()

makefun <- function(flist) {
   with(flist,
 function() bar() + foo()
   )
}
toplevel <- makefun(fListA)
toplevel()

## but it is not possible to naively replace functions afterwards:
fListA$bar <- function() cos(foo())
toplevel <- makefun(fListA)
toplevel()

## Error in bar() : could not find function "foo"


Note that it's the same in the "explicit" and in the environment version.
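For the record, one way to make such a replacement work (a sketch in plain base R, not from the original thread; the name `replacement` is chosen here for illustration) is to give the replacement function the shared environment of an existing list member before installing it:

```r
# Rebuild the function list as in the example above.
MakeListA <- function() {
  foo <- function() 1:10
  bar <- function() log(foo())
  as.list(environment())
}
fListA <- MakeListA()

# Create the replacement, then point it at the shared environment of an
# existing member so that it can see foo().
replacement <- function() cos(foo())
environment(replacement) <- environment(fListA$foo)
fListA$bar <- replacement

makefun <- function(flist) with(flist, function() bar() + foo())
toplevel <- makefun(fListA)
toplevel()  # bar() now finds foo(): cos(1:10) + 1:10
```

Whether setting function environments is good style was part of the debate above; note that this only sets the environment of a function created locally, not of one taken from elsewhere.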

-- Thomas P.



Re: [Rd] experiments with slot functions and possible problems NOTE

2008-01-21 Thread Thomas Petzoldt
Hi Gabor,

nice to see you on this thread.
As you see, I'm back with my old problem.

Gabor Grothendieck wrote:
 > If the intention is to place fList's contents in the global
 > environment
 > then you need to specify that in addtoenv or else it assumes
 > the parent environment.

No, it was the intention to keep the global environment as clean as 
possible and to use local data structures instead.

 >> flistA <- list(foo = function () 1:10, bar = function() log(foo()))
 >> makefun <- function(fList) addtoenv(fList, .GlobalEnv)
 >> makefun(flistA)
 > $foo
 > function() {
 >1:10
 >   }
 >
 > $bar
 > function() {
 > log(foo())
 >   }
 >
 >> foo()
 >  [1]  1  2  3  4  5  6  7  8  9 10
 >> bar()
 >  [1] 0.000 0.6931472 1.0986123 1.3862944 1.6094379 1.7917595 
1.9459101
 >  [8] 2.0794415 2.1972246 2.3025851
 >
 > Note that this takes advantage of the fact that in your example
 > flistA was
 > defined in the global environment in the first place.  Had that not
 > been the
 > case we would have had to reset the environment of bar so that it
 > could find foo.
 >
 > By the way.  What about just attach(flistA) ?

You are right, attach works. I used it in another package instead of 
addtoenv, but it introduced new problems, even when I used it the 
following way:

attach(flist)
on.exit(detach(flist))

[...]

Additionally, I've never seen attach in another package.

Thomas P.



[Rd] optim: why is REPORT not used in SANN?

2008-03-16 Thread Thomas Petzoldt
Hello,

I wonder why the control parameter REPORT is not supported by method 
SANN. Looking into optim.c I found an internal constant:

#define STEPS 100

... and decreasing this to 10 helped me fine-tune the annealing 
parameters in an actual problem.

Is there any reason against passing nREPORT to samin and setting 
something like:

STEPS = nREPORT / tmax
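Until something like this exists, a user-side workaround (a sketch, not part of the original message; `make_reporting_fn` is a name invented here) is to wrap the objective in a counting closure that reports at a chosen interval, independently of optim()'s internal reporting:

```r
# Wrap an objective function so that every `every`-th evaluation is
# reported, regardless of the method's hard-coded reporting interval.
make_reporting_fn <- function(fn, every = 10) {
  count <- 0
  function(par) {
    count <<- count + 1
    val <- fn(par)
    if (count %% every == 0)
      cat(sprintf("eval %6d  value %f\n", count, val))
    val
  }
}

fn <- function(x) sum((x - 3)^2)
res <- optim(par = 0, fn = make_reporting_fn(fn, every = 25),
             method = "SANN", control = list(maxit = 100))
```

This counts function evaluations rather than temperature stages, so the numbers differ from samin's own trace output.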


Thomas P.



-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie[EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] suggested minor patch for optim.R

2008-04-05 Thread Thomas Petzoldt
Ben Bolker wrote:
> optim ignores misspelled control parameters, so that trying
> to set (e.g.) "maxint=1000" in  the control argument silently
> does nothing.  The patch below (watch out for line breaks! also
> posted at http://www.zoo.ufl.edu/bolker/optim_patch.R , and
> http://www.zoo.ufl.edu/bolker/optim_new.R) adds
> three lines to optim.R that issue a warning if any names of
> elements of "control" fail to match the internal object that
> contains the defaults.
> 
>Here is code that shows the behavior:

[... details deleted]

https://stat.ethz.ch/pipermail/r-devel/2008-March/048710.html

Ben,

another issue of optim is that I don't see any reason why the REPORT 
control parameter is evaluated by "BFGS" and "L-BFGS-B" only but not, 
for example, by "SANN", see:

https://stat.ethz.ch/pipermail/r-devel/2008-March/048710.html

Thomas P.



Re: [Rd] optim: why is REPORT not used in SANN?

2008-04-06 Thread Thomas Petzoldt
Martin Maechler wrote:
>>>>>> "TP" == Thomas Petzoldt <[EMAIL PROTECTED]>
>>>>>> on Sun, 16 Mar 2008 13:50:55 +0100 writes:
> 
> TP> Hello, I wonder why the control parameter REPORT is not
> TP> supported by method SANN. Looking into optim.c I found
> TP> an internal constant:
> 
> TP> #define STEPS 100
> 
> TP> ... and decreasing this to 10 helped me fine-tuning the
> TP> annealing parameters in an actual problem.
> 
> TP> Is there any reason why not passing nREPORT to samin and
> TP> setting something like:
> 
> TP> STEPS = nREPORT / tmax
> 
> Sorry to reply late (but then, rather than never ..).
> 
> You ask for reasons... I see/guess :
> 
> - the  SANN  method also was contributed from "outside" 
>   (as ?optim mentions); and the original authors may not have
>   seen a use for such more flexible monitoring.
> 
> - the R core members are probably not using 'samin' very often
> 
> - If there is a need you can write the function you are
>   optimizing in a way that it prints info.
> 
> - Nobody has contributed a well-tested patch against R-devel to
>   both code and documentation
>   which would implement your proposal ___ BACK COMPATIBLY __
>   (i.e. the default for SANN should remain to print every 100th;
>and this differs from the default for BFGS where the default
>   'REPORT' leads to output every 10th eval).
> 
> Regards,
> Martin

Well, I see. The reasons are obviously more or less historical and not 
fundamental. I can, of course, contribute an idea for a modification of 
samin and do_optim in optim.c. However, to convert my personal hack into 
a "well-tested patch" a few additional considerations have to be 
undertaken, especially about back-compatibility:

1) the patch requires to pass the additional argument nREPORT to 
function "samin".

- Is this still back-compatible? Is it likely that other functions (e.g. 
in packages) call samin directly?

- if yes (i.e. a direct call is likely), what is the preferred way to 
ensure compatibility? Rename samin to samin2 and add a new 
"compatibility function" samin that calls samin2?

- the reporting interval of samin is STEPS * tmax evaluations, where
   - tmax is, as documented, the "number of function evaluations at each 
temperature" (default 10) and
   - STEPS is a hard-coded constant: #define STEPS 100
   - this means that, with the defaults, a report appears only every 
1000 function evaluations.

2) if one starts patching SANN, then one might also think about
"Nelder-Mead" and "CG", which have a totally different, but shorter (!), 
reporting schedule.

In contrast, SANN, which is used for difficult systems *and* which 
(see optim.Rd) "depends critically on the settings of the control 
parameters", has a rather long reporting interval.

Thomas P.



Re: [Rd] optim: why is REPORT not used in SANN?

2008-04-07 Thread Thomas Petzoldt

Thomas Petzoldt schrieb:

[... full quote of the previous exchange deleted; see above ...]



o.k., here it is. The patch works by re-using trace for SANN -- as 
suggested by Martin Maechler off-list after my first suggestion. The new 
patch avoids an extended parameter list of samin and therefore the 
necessity to modify the API.


The patched files and a few test examples are also on

http://hhbio.wasser.tu-dresden.de/projects/temp/optim/

Thank you for consideration

Thomas P.
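With a patched build (or a later R where SANN honours control$REPORT), the new reporting interval would be selected like this (a hedged sketch, not from the original message; the objective function is made up for illustration):

```r
# With the patch, SANN reports every REPORT-th iteration when trace is
# positive, instead of the formerly hard-coded every-100th.
fn <- function(x) sum((x - 1)^2)
res <- optim(par = c(10, 10), fn = fn, method = "SANN",
             control = list(maxit = 200, trace = TRUE, REPORT = 50))
```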



Index: src/main/optim.c
===
--- src/main/optim.c(revision 45155)
+++ src/main/optim.c(working copy)
@@ -268,6 +268,7 @@
 else if (strcmp(tn, "SANN") == 0) {
 tmax = asInteger(getListElement(options, "tmax"));
 temp = asReal(getListElement(options, "temp"));
+if (trace) trace = asInteger(getListElement(options, "REPORT"));
 if (tmax == NA_INTEGER) error(_("'tmax' is not an integer"));
 if (!isNull(gr)) {
 if (!isFunction(gr)) error(_("'gr' is not a function"));
@@ -1082,7 +1083,6 @@
 
 
 #define E1 1.7182818  /* exp(1.0)-1.0 */
-#define STEPS 100
 
 void samin(int n, double *pb, double *yb, optimfn fminfn, int maxit,
   int tmax, double ti, int trace, void *ex)
@@ -1100,6 +1100,9 @@
 int k, its, itdoc;
 double t, y, dy, ytry, scale;
 double *p, *dp, *ptry;
+
+if (trace <= 0)
+  error(_("REPORT must be > 0 (method = \"SANN\")"));
 
 if(n == 0) { /* don't even attempt to optimize */
*yb = fminfn(n, pb, ex);
@@ -1138,7 +1141,7 @@
}
its++; k++;
}
-   if ((trace) && ((itdoc % STEPS) == 0))
+   if ((trace) && ((itdoc % trace) == 0))
Rprintf("iter %8d value %f\n", its - 1, *yb);
itdoc++;
 }
@@ -1151,4 +1154,3 @@
 }
 
 #undef E1
-#undef STEPS
Index: src/library/stats/man/optim.Rd
===
--- src/library/stats/man/optim.Rd  (revision 45026)
+++ src/library/stats/man/optim.Rd  (working copy)
@@ -133,9 +133,11 @@
 

Re: [Rd] getNativeSymbolInfo fails with Fortran symbol.

2008-04-16 Thread Thomas Petzoldt
Prof Brian Ripley wrote:
> It's a bug -- unlike is.loaded, getNativeSymbolInfo seems unaware of 
> Fortran names unless registered.
> 
> Will be fixed in 2.7.0.
> 
> On Wed, 9 Apr 2008, [EMAIL PROTECTED] wrote:
> 
>>
>> In the following code routine 'initaquaphy' is defined in Fortran,
>> and dynamically loaded into R.:
>>
>> test.f:
>>
>>
>>  subroutine initaquaphy(odeparms)
>>
>>  external odeparms
>>  double precision pars(19)
>>  common /myparms/pars
>>
>>   call odeparms(19, pars)
>>
>>  return
>>  end
>>
>> $ R CMD SHLIB Aquaphy.f
>> gfortran   -fpic  -g -O2 -c test.f -o test.o
>> gcc -std=gnu99 -shared -L/usr/local/lib -o test.so test.o  -lgfortran -lm
>>
>>
>> and linked into the package dll (or so).  Help for is.loaded() and
>> getNativeSymbolInfo() say not to use symbol.For() to convert
>> to Fortran-specific symbols.  However, on Linux, getNativeSymbolInfo
>> is unable to find 'initaquaphy' in 'test.so', but does find
>> 'initaquaphy_'.  Note that is.loaded() works as advertised.  Furthermore,
>> this code works in Windows, R-2.6.2patched44759.
>>
>> triggerbug.R:
>>
>> system("R CMD SHLIB test.f")
>> dyn.load(paste("test",.Platform$dynlib.ext,sep=""))
>> is.loaded("initaquaphy", PACKAGE="test")
>> getNativeSymbolInfo("initaquaphy_", PACKAGE="test")
>> getNativeSymbolInfo("initaquaphy", PACKAGE="test")
>> cat("All Done")
>>
>> Resulting in:
>>
>>> source("triggerbug.R", echo=TRUE, print.eval=TRUE)
>>> system("R CMD SHLIB test.f")
>> gfortran   -fpic  -g -O2 -c test.f -o test.o
>> gcc -std=gnu99 -shared -L/usr/local/lib -o test.so test.o  -lgfortran -lm
>>
>>> dyn.load(paste("test",.Platform$dynlib.ext,sep=""))
>>> is.loaded("initaquaphy", PACKAGE="test")
>> [1] TRUE
>>
>>> getNativeSymbolInfo("initaquaphy_", PACKAGE="test")
>> $name
>> [1] "initaquaphy_"
>>
>> $address
>> 
>> attr(,"class")
>> [1] "NativeSymbol"
>>
>> $package
>> DLL name: test
>> Filename: /home/setzer/tasks/Programming_Projects/test.so
>> Dynamic lookup: TRUE
>>
>> attr(,"class")
>> [1] "NativeSymbolInfo"
>>
>>> getNativeSymbolInfo("initaquaphy", PACKAGE="test")
>> Error in FUN("initaquaphy"[[1L]], ...) :
>>  no such symbol initaquaphy in package test
>> Have I misunderstood the help page, or is this a bug?
>>
>> --please do not edit the information below--
>>
>> Version:
>> platform = i686-pc-linux-gnu
>> arch = i686
>> os = linux-gnu
>> system = i686, linux-gnu
>> status = beta
>> major = 2
>> minor = 7.0
>> year = 2008
>> month = 04
>> day = 07
>> svn rev = 45159
>> language = R
>> version.string = R version 2.7.0 beta (2008-04-07 r45159)
>>
>> Locale:
>> LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C
>>
>> Search Path:
>> .GlobalEnv, package:deSolve, package:stats, package:graphics, 
>> package:grDevices, package:utils, package:datasets, package:methods, 
>> Autoloads,
>> package:base
>>
>> R. Woodrow Setzer, Ph. D.
>> National Center for Computational Toxicology
>> http://www.epa.gov/comptox
>> US Environmental Protection Agency
>> Mail Drop B205-01/US EPA/RTP, NC 27711
>> Ph: (919) 541-0128Fax: (919) 541-1194
>>
>>
> 

Many thanks to Prof. Brian Ripley

for fixing the above mentioned issue for recent versions
of R. Unfortunately, the problem seems to persist on Linux x86/64.

The example is almost the same as above:

 > getNativeSymbolInfo("iniaqua", PACKAGE = "deSolve")$address
Error in FUN("iniaqua"[[1L]], ...) :
   no such symbol iniaqua in package deSolve
 > getNativeSymbolInfo("iniaqua_", PACKAGE = "deSolve")$address

attr(,"class")
[1] "NativeSymbol"


Thomas Petzoldt


platform "x86_64-unknown-linux-gnu"
arch "x86_64"
os "linux-gnu"
system "x86_64, linux-gnu"
status "Under development (unstable)"
major "2"
minor "8.0"
year "2008"
month "04"
day "15"
`svn rev` "45347"
version.string "R version 2.8.0 Under development (unstable) (2008-04-15 
r45347)"

g++ (GCC) 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)


-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie[EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



[Rd] Fortran underscore problem persists on Linux x86/64 (PR#11206)

2008-04-19 Thread thomas . petzoldt
Full_Name: Thomas Petzoldt
Version: R 2.8.0 devel, svn version 45389
OS: Linux x86/64 Ubuntu 7.1
Submission from: (NULL) (217.235.62.12)


In contrast to all other tested operating systems, calling Fortran functions on
Linux x86/64 requires an appended underscore.

The problem occured with package deSolve
(http://r-forge.r-project.org/projects/desolve/)


See also:

http://tolstoy.newcastle.edu.au/R/e4/devel/08/04/1224.html

Relevant code snippets

In R:

> getNativeSymbolInfo("iniaqua", PACKAGE = "deSolve")$address
Error in FUN("iniaqua"[[1L]], ...) :
   no such symbol iniaqua in package deSolve
 > getNativeSymbolInfo("iniaqua_", PACKAGE = "deSolve")$address

attr(,"class")
[1] "NativeSymbol"


In Aquaphy.f:

 subroutine iniaqua(odeparms)

  external odeparms
  double precision pars(19)
  common /myparms/pars

   call odeparms(19, pars)

  return
  end
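Until the lookup handles Fortran names itself, a portable user-side fallback can be sketched like this (plain R; `getFortranSymbol` is a helper name invented here, not part of any package):

```r
# Try the plain symbol name first; if that fails, retry with the
# trailing underscore that some Fortran compilers append.
getFortranSymbol <- function(name, pkg) {
  tryCatch(getNativeSymbolInfo(name, PACKAGE = pkg),
           error = function(e)
             getNativeSymbolInfo(paste0(name, "_"), PACKAGE = pkg))
}
```

For instance, getFortranSymbol("iniaqua", "deSolve") would then resolve the symbol regardless of whether the platform stores it as iniaqua or iniaqua_.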



Re: [Rd] Fortran underscore problem persists on Linux x86/64 (PR#11206)

2008-04-20 Thread Thomas Petzoldt
Dear Prof. Ripley,

thank you very much for correcting the treatment of 'extra underscore' 
that in fact solved our problem with the Fortran interface on my Ubuntu 
  7.1 x86/64, even with g77 3.4.6. I accidentally installed g77 instead 
of gfortran (4.x), sorry for my limited knowledge about that. Is it 
still necessary to provide detailed information about all involved 
compilers and symbol tables?


Thomas Petzoldt


-- 
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie[EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] odesolve dynload example

2008-09-16 Thread Thomas Petzoldt

Hi Matthew,

thank you for the bug report, Woodrow Setzer just uploaded a minor 
bugfix and I assume he informed you off-list.


One additional note for the records:

We suggest that odesolve users switch over to the new deSolve package. 
It is maintained by the same (and two additional) authors as odesolve. 
It is almost ;-) fully compatible with odesolve and will replace it 
completely. The authors renamed it to deSolve because it can also solve 
DAE and PDE systems, not only ODEs, so the name odesolve was too narrow 
and is now deprecated.


Note that deSolve has a package vignette about writing models in 
compiled code.


Thomas Petzoldt



Redding, Matthew wrote:

Hello R Developers,

This is my first foray into using c-code with R, so please forgive my
foolishness.
I had a look at the archives and did not find anything on this, so
hopefully I am not doubling up.

I have tried to use R CMD to create an object file from the odesolve
dynload example.
I am using Windows and have just installed Rtools, and have the latest
stable version of R (2.7.2).

This is what happened:

C:\Program Files\R\R-2.7.2\library\odesolve\dynload\c>Rcmd SHLIB mymod.c
making mymod.d from mymod.c
windres --preprocessor="gcc -E -xc -DRC_INVOKED" -I
C:/PROGRA~1/R/R-27~1.2/include  -i mymod_res.rc -o mymod_res.o
gcc  -std=gnu99  -shared -s  -o mymod.dll mymod.def mymod.o mymod_res.o
-LC:/PROGRA~1/R/R-27~1.2/bin-lR
Cannot export myderivs: symbol not found
Cannot export myjac: symbol not found
Cannot export mymod: symbol not found
mymod.o: In function `mymod':
/home/setzer/tasks/Programming_Projects/odesolve/odesolve/inst/dynload/c/mymod.c:14: undefined reference to `GLOBAL_OFFSET_TABLE_'
mymod.o: In function `myderivs':
/home/setzer/tasks/Programming_Projects/odesolve/odesolve/inst/dynload/c/mymod.c:21: undefined reference to `GLOBAL_OFFSET_TABLE_'
mymod.o: In function `myjac':
/home/setzer/tasks/Programming_Projects/odesolve/odesolve/inst/dynload/c/mymod.c:30: undefined reference to `GLOBAL_OFFSET_TABLE_'
collect2: ld returned 1 exit status
make: *** [mymod.dll] Error 1

Any ideas what I have not got set up properly? What do I need to do to
get this firing? 
Advice appreciated.


Kind regards, 


Matt Redding
DISCLAIMER**...{{dropped:15}}





--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie[EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



[Rd] why is \alias{anRpackage} not mandatory?

2008-10-06 Thread Thomas Petzoldt

Dear R developers,

if one uses package.skeleton() to create a new package, then a file 
anRpackage.Rd with the following entries is prepared:


\name{anRpackage-package}
\alias{anRpackage-package}
\alias{anRpackage}
\docType{package}


Packages created this way have a definite entry or overview page, so:

?anRpackage

gives new users of a certain package a pointer where to start reading.

This is similar for packages which have the same name as their main 
workhorse function, e.g. zoo or nlme, but there are many packages which 
don't have an \alias{anRpackage}.


"Writing R Extensions", sec. 2.1.4 says:

"Packages may have an overview man page with an \alias pkgname-package, 
e.g. `utils-package' for the utils package, when package?pkgname will 
open that help page. If a topic named pkgname does not exist in another 
Rd file, it is helpful to use this as an additional \alias."


My question: what speaks against making this sentence stronger, and 
why not NOTE a missing package alias during the package check?



Thomas Petzoldt



--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie
01062 Dresden
GERMANY

http://tu-dresden.de/Members/thomas.petzoldt



Re: [Rd] why is \alias{anRpackage} not mandatory?

2008-10-06 Thread Thomas Petzoldt

Duncan Murdoch wrote:

Thomas Petzoldt wrote:

Dear R developers,

if one uses package.skeleton() to create a new package, then a file 
anRpackage.Rd with the following entries is prepared:


\name{anRpackage-package}
\alias{anRpackage-package}
\alias{anRpackage}
\docType{package}


Packages created this way have a definite entry or overview page, so:

?anRpackage

gives new users of a certain package a pointer where to start reading.

This is similar for packages which have the same name as their main 
workhorse function, e.g. zoo or nlme, but there are many packages 
which don't have an \alias{anRpackage}.


"Writing R Extensions", sec. 2.1.4 says:

"Packages may have an overview man page with an \alias 
pkgname-package, e.g. `utils-package' for the utils package, when 
package?pkgname will open that help page. If a topic named pkgname 
does not exist in another Rd file, it is helpful to use this as an 
additional \alias."


My question: what speaks against making this sentence more pronounced 
and why not NOTE-ing a missing package alias in the package check?


  
Not everybody likes the idea of the overview man page, so when I wrote 
that I left it weak.  Some of the disadvantages:


You speak about the disadvantages but there are, of course, obvious 
advantages. Almost all scientific papers start with an abstract, why not 
requesting one for software packages, at least for new ones?


- there are lots of packages without one, so this would create a lot of 
work for people to add them.


No, I don't think that this is too much work. Positively speaking, it's 
one small contribution to bring more light into the exponentially 
growing haystack.


What about starting to advertise the use of \alias{anRpackage}, i.e. a 
short article in R News and subsequently an email to the developers.


- the ones that do exist tend to include outdated information.  People 
update the DESCRIPTION file but forget to update the corresponding 
information in the overview.


This is in fact a problem. Suggestions:

- propose basic style guidelines (in an R-News article)
- allow variables in .Rd files (your idea to allow "Sweave like 
constructs" may be even better). In addition to entries from 
DESCRIPTION, one can think also about importing data from CITATION and 
possibly also from other resources.


- in general there's a lot of dissatisfaction with the Rd format, so 
there's reluctance to invest any more effort in it.


You are right, .Rd has its limitations, but as you say, there is nothing 
better available at the moment. (BTW: I heard rumours at useR! about 
discussions on a meta documentation format? Is there any public 
information about this?)


It would probably be a good idea to generate one automatically if not 
provided by the author, at build or install time:  this would address 
the first point.  


A reasonable idea -- at least if combined with a motivating request to 
package authors to provide an own one.


I've been slowly working on some fixes that address 
the second point.  (The current idea is to use Sweave-like constructs to 
import things from the DESCRIPTION file at install time.)  There's no 
way to address the third point other than providing a better format, and 
I don't see any prospect of that happening.


So if there are no advances in that direction I see no other choice than 
using the existing mechanisms! Recently, I had several contacts with 
package authors who were not even aware about the possibility of 
providing a package information .Rd file.



Duncan Murdoch



Thanks, Thomas P.



Re: [Rd] why is \alias{anRpackage} not mandatory?

2008-10-06 Thread Thomas Petzoldt

Duncan Murdoch wrote:

On 06/10/2008 8:06 AM, Thomas Petzoldt wrote:

Duncan Murdoch wrote:

Thomas Petzoldt wrote:

Dear R developers,

if one uses package.skeleton() to create a new package, then a file 
anRpackage.Rd with the following entries is prepared:


\name{anRpackage-package}
\alias{anRpackage-package}
\alias{anRpackage}
\docType{package}


Packages created this way have a definite entry or overview page, so:

?anRpackage

gives new users of a certain package a pointer where to start reading.

This is similar for packages which have the same name as their main 
workhorse function, e.g. zoo or nlme, but there are many packages 
which don't have an \alias{anRpackage}.


"Writing R Extensions", sec. 2.1.4 says:

"Packages may have an overview man page with an \alias 
pkgname-package, e.g. `utils-package' for the utils package, when 
package?pkgname will open that help page. If a topic named pkgname 
does not exist in another Rd file, it is helpful to use this as an 
additional \alias."


My question: what speaks against making this sentence more 
pronounced and why not NOTE-ing a missing package alias in the 
package check?


  
Not everybody likes the idea of the overview man page, so when I 
wrote that I left it weak.  Some of the disadvantages:


You speak about the disadvantages but there are, of course, obvious 
advantages. Almost all scientific papers start with an abstract, why 
not requesting one for software packages, at least for new ones?


We already require one in the DESCRIPTION file for all packages, which 
you can see with


library(help=packagename)

This is related to my first two points:  people have already done this 
work so they are reluctant to do it again, and duplicate information is 
a bad idea.



I agree, and I also don't like duplicate inconsistent "information", but 
simply try the following:


options(htmlhelp=TRUE)
library(help="base")

The result is now displayed in text format, and new users don't know how 
to proceed. I say new users, because an experienced user knows what to 
do ... and if nothing helps, greps the sources.


I think the R help system is too fragmented:  it's hard to discover all 
the different types of help that are already there (Rd files, 
DESCRIPTION files, vignettes, the manuals, NEWS, CHANGES, ChangeLogs, 
SVN logs, source comments, mailing lists, web pages and publications, 
...).  I think having a ?packagename man page is a good place for a 
single starting point, and I consider packages without one to be poorly 
documented.  But obviously, not everyone agrees.


*I* agree -- completely with that paragraph.

- there are lots of packages without one, so this would create a lot 
of work for people to add them.


No, I don't think that this is too much work. Positively speaking, 
it's one small contribution to bring more light into the exponentially 
growing haystack.


I agree, and I even added these to all the packages under my control: 
but there are hundreds of package authors, and some have different 
priorities than you and me.


O.K., I see, so I suggest adding an additional motivating sentence to:

http://developer.r-project.org/Rds.html

and possibly an automatism that shows (or converts) the output of

library(help="foo")

to a formatted page in the appropriate help format (e.g. html).

What about starting to advertise the use of \alias{anRpackage}, i.e. a 
short article in R News and subsequently an email to the developers.


I would have thought that putting this into NEWS and Writing R 
Extensions was the right way to advertise it.  If people don't read 
those, why would you think they'll read R News?  But more is better, so 
go ahead and submit an article to R News.


People like me may read "Writing R Extensions" several times, then look 
at some of the most prominent packages, and become unsure, as only a few 
use this mechanism.


I don't like robot mailings, so I wouldn't appreciate an email on this. 
 I don't recommend that you send one.


Beware, not at all! But I think it was good to open this thread on 
r-devel  :-)




- the ones that do exist tend to include outdated information.  
People update the DESCRIPTION file but forget to update the 
corresponding information in the overview.


This is in fact a problem. Suggestions:

- propose basic style guidelines (in an R-News article)
- allow variables in .Rd files (your idea to allow "Sweave like 
constructs" may be even better). In addition to entries from 
DESCRIPTION, one can think also about importing data from CITATION and 
possibly also from other resources.


- in general there's a lot of dissatisfaction with the Rd format, so 
there's reluctance to invest any more effort in it.


You are right, .Rd has its limitations, but as you say, there is 
nothing better available at the moment. (BTW: I heard 

Re: [Rd] why is \alias{anRpackage} not mandatory?

2008-10-06 Thread Thomas Petzoldt

Dear Hadley,

thank you very much for your comments.

hadley wickham wrote:

- there are lots of packages without one, so this would create a lot of
work for people to add them.

No, I don't think that this is too much work. Positively speaking, it's one
small contribution to bring more light into the exponentially growing
haystack.


It may not be much work for you, but I find any additional
requirements to the package format to be a real pain.  I have ~10
packages on CRAN and having to go through and add this extra
information all at once is a big hassle.  R releases tend to happen in
the middle of the US academic semester when I have a lot of other
things on my plate.


O.K., but the discussion with Duncan shows:

- the required information is already available (in DESCRIPTION),
- one can think about ways to generate the page automatically for 
existing packages,

- the intro can be short and should link to other pages or PDFs,
- one should avoid doubling and inconsistency.


Additionally, I find that rdoc is the wrong format for lengthy
explanation and exposition - a pdf is much better - and I think that
the packages already have a abstract: the description field in
DESCRIPTION.  


o.k., but the abstract may be (technically) in the wrong format, and it 
does not point to the other relevant parts of the package documentation.



The main problem with vignettes at the moment is that
they must be sweave, a format which I don't really like.  I wish I
could supply my own pdf + R code file produced using whatever tools I
choose.


> Hadley

I like Sweave, and it is also possible to include your own PDFs and R 
files and then to reference them in anRpackage.Rd.


Thomas P.



Re: [Rd] why is \alias{anRpackage} not mandatory?

2008-10-06 Thread Thomas Petzoldt

hadley wickham wrote:

It may not be much work for you, but I find any additional
requirements to the package format to be a real pain.  I have ~10
packages on CRAN and having to go through and add this extra
information all at once is a big hassle.  R releases tend to happen in
the middle of the US academic semester when I have a lot of other
things on my plate.

O.K., but the discussion with Duncan shows:

- the required information is already available (in DESCRIPTION),
- one can think about ways to generate the page automatically for existing
packages,
- the intro can be short and should link to other pages or PDFs,
- one should avoid doubling and inconsistency.


I'm obviously not going to object if it's done automatically, and I
already strive to avoid doubling and inconsistency by producing most
my documentation algorithmically.  I think you are being cavalier by
not caring about the extra work you want package authors to do.


Sorry if my question was misunderstood this way, but I have not 
requested additional work, I simply asked "why is \alias{anRpackage} not 
mandatory?"


The answer was that there are problems with inconsistencies that can be 
technically solved and that it may be too much work for some package 
authors with lots of packages (can also be solved with technical means), 
but that other users and developers would enjoy it to have such a 
starting point.


O.K., I agree that the suggestion of NOTE-ing a missing 
\alias{anRpackage} during package check was a bad idea (currently ;-), 
but that one can think about a combination of a technical means and an 
optional entry, analogously to the CITATION file.





Additionally, I find that rdoc is the wrong format for lengthy
explanation and exposition - a pdf is much better - and I think that
the packages already have an abstract: the description field in
DESCRIPTION.

o.k., but abstract may be (technically) in the wrong format and does not
point to the other relevant parts of the package documentation.


Then I don't think you should call what you want an abstract.


Some sort of abstract, overview or, more precise, an *entry point*.


The main problem with vignettes at the moment is that
they must be sweave, a format which I don't really like.  I wish I
could supply my own pdf + R code file produced using whatever tools I
choose.

I like Sweave, and it is also possible to include your own PDFs and R files
and then to reference them in anRpackage.Rd.


Yes, but they're not vignettes - which means they're not listed under
vignette() and it's yet another place for people to look for
documentation.


You are right, they are not vignettes in the strict sense, but they can 
be listed in the help index of the package, the place where the majority 
of "normal R users" starts to look.



ThPe



Re: [Rd] why is \alias{anRpackage} not mandatory?

2008-10-09 Thread Thomas Petzoldt

Dear R Developers,

first of all, many thanks for the constructive discussion. My question
was related to the use of existing mechanisms that (in my opinion) would
help to make package documentation more user-friendly. I agree that
additional restrictions/requirements on packages that do not not have
explicit objectives on performance or validity have to be avoided.

Thomas Petzoldt



Summary and Wish-list

1 A recommendation to provide a file "foo-package.Rd" and an
\alias{foo} was already given in "Writing R Extensions".

2 In order to ensure consistency between foo-package.Rd, DESCRIPTION and
other sources of information, a mechanism to use variables and/or macros
in .Rd format is desirable.

3 There may be reasons where manual creation and maintenance of
foo-package.Rd is not wanted, e.g. workload or the danger of information
inconsistency. For such cases, an automated mechanism during package
installation may be helpful.

Already existing functions like

library(help="foo")
  or
promptPackage("foo", final=TRUE)

can do the job but may require extensions (hyperlinks).

4 The standard help format on Windows .chm should also find a way to
provide hyperlinks to package vignettes (and other pdfs), either
directly in the package index (as in html) or in foo-package.Rd



[Rd] nlminb: names of parameter vector not passed to objective function

2008-12-03 Thread Thomas Petzoldt

Dear R developers,

I tried to use nlminb instead of optim for a current problem (fitting 
parameters of a differential equation model). The PORT algorithm 
converged much better than any of optim's methods and the identified 
parameters are plausible. However, it took me a while before spotting 
the reason for a technical problem: nlminb, in contrast to optim, 
does not pass names of the start parameters to the objective function.


Please find below a minimum reproducible example. There is, of course, a 
workaround, but in order to make optim and nlminb more compatible I would 
ask whether it would be possible to change this idiosyncratic behavior?


Tested with:

R version 2.8.0 Patched (2008-11-04 r46830) i386-pc-mingw32

and also

R version 2.9.0 Under development (unstable) (2008-12-03 r47039)
i386-pc-mingw32

Thanks a lot

Thomas Petzoldt



set.seed(3577) # make it reproducible

## 1) example taken from  ?nlminb -
x <- rnbinom(100, mu = 10, size = 10)
hdev <- function(par) {
-sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
}
nlminb(c(20, 20), hdev, lower = 0.001, upper = Inf)
## --> works without problems

## 2) same example, but with named vectors -
hdev <- function(par) {
cat(names(par), "\n")  # show what happens
-sum(dnbinom(x, mu = par["mu"], size = par["size"], log = TRUE))
}
start <- c(mu=20, size=20)

optim(start, hdev, lower = 0.001, upper = Inf, method="L-BFGS-B")
## --> works without problems

## 3) THE PROBLEM
nlminb(start, hdev, lower = 0.001, upper = Inf)
## --> $objective is NA because names of "start" are not passed through

## 4) workaround ---
hdev <- function(par, pnames) {
names(par) <- pnames
-sum(dnbinom(x, mu = par["mu"], size = par["size"], log = TRUE))
}

nlminb(start, hdev, pnames = names(start), lower = 0.001, upper = Inf)

## --> works, but is it possible to improve nlminb
## so that the workaround can be avoided ?






--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    [EMAIL PROTECTED]
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] nlminb: names of parameter vector not passed to objective function

2008-12-03 Thread Thomas Petzoldt

Dear Prof. Ripley,

thank you very much for the fast response. I am very grateful for all 
the work that the R Core does and so I try to contribute my humble part 
as a tester.


Prof Brian Ripley wrote:

On Wed, 3 Dec 2008, Thomas Petzoldt wrote:


Dear R developers,

I tried to use nlminb instead of optim for a current problem (fitting 
parameters of a differential equation model). The PORT algorithm 
converged much better than any of optim's methods and the identified 
parameters are plausible. However, it took me a while before spotting 
the reason for a technical problem: nlminb, in contrast to optim, 
does not pass names of the start parameters to the objective function.


Please find below a minimum reproducible example. There is, of course, 
a workaround, but in order to make optim and nlminb more compatible I 
would ask whether it would be possible to change this idiosyncratic 
behavior?


The 'idiosyncratic behavior' is to expect that a new vector of 
parameters will magically inherit names from the start vector.  optim() 
was changed (and documented) because some users asked for this, and if a 
user who wants it for nlminb provides a tested patch, I am sure it will 
be considered.


O.K., then I will make my, surely naive, suggestion to change the file:

src/library/stats/R/nlminb.R

As far as I can see, names are dropped at the beginning in:

 ## Establish the working vectors and check and set options
n <- length(par <- as.double(start))

so it may be sufficient to add something like:

names(par) <- names(start)

anywhere before:

   assign(".par", par, envir = rho)

This change was sufficient to make my example work, and (at first 
glance) I did not find negative side effects. At least:


R CMD check stats

passed without problems. I also had a look into port.c and found nothing 
obvious there, so I, again naively, assume that there is (hopefully) 
nothing to do at the C level. It would be very kind if someone more 
experienced could validate (or falsify) this.


Thank you very much

Thomas Petzoldt



Index: nlminb.R
===
--- nlminb.R(revision 47039)
+++ nlminb.R(working copy)
@@ -48,6 +48,7 @@
 ## Establish the objective function and its environment
 obj <- quote(objective(.par, ...))
 rho <- new.env(parent = environment())
+names(par) <- names(start)
 assign(".par", par, envir = rho)

 ## Create values of other arguments if needed



Re: [Rd] nlminb: names of parameter vector not passed to objective function (fixed)

2008-12-07 Thread Thomas Petzoldt

NEWS, rev. 47094 now says:

o   nlminb() copies names from 'start' to the parameter vector
used (for consistency with optim()).



Dear Prof. Ripley,

thank you very much for doing this.

Thomas Petzoldt



Re: [Rd] New package test results available

2009-02-07 Thread Thomas Petzoldt

Prof Brian Ripley schrieb:

We've added a column at

http://cran.r-project.org/web/checks/check_summary.html

of test results using the Sun Studio compiler: it is intended that these 
will be updated weekly.


The Sun Studio compiler is that used on Solaris: these runs were on the 
Linux version.  All the other platforms are using gcc 4, so this 
provides an opportunity for checking for use of gcc-specific features 
and also standards conformance (the Sun compilers have a long-time 
reputation for close conformance to the language standards).


There are known problems where packages use C++ or JNI interfaces (e.g. 
rgdal and EBImage) as the libraries and JVM were compiled under gcc's 
conventions (even though a Sun JVM is used).  About half the packages 
using rJava segfault, which seems to be a JNI issue.


Some packages use gcc-specific compiler flags:

  LogConcDEAD Matching amap geometry memisc taskPR

but the vast majority of the errors reported are C++ errors.  One class 
that may not be immediately obvious is the use of C headers in C++: you 
are supposed to write e.g.


#include <cmath>

NOT

#include <math.h>

Symptoms of this can be seen for packages

  BayesTree EMCC MCMCfglmm MarkedPointProcess Matching Matrix
  RQuantlib RandomFields Rcpp SoPhy compHclust dpmix igraph minet
  mixer modeest monomvm multic pcaPP rgenoud robfilter segclust
  simecol subselect




The reason can also be including another header (as done in simecol) that 
in turn includes <math.h>.


Do I understand it correctly that this means that including <math.h> is 
wrong in C++?
I read "Writing R Extensions" several times, but was not aware that this 
was a mistake. If I replace <math.h> by <cmath> then it works on my 
systems, but I want to be certain that there are no other side effects.


Thanks in advance for clarification!

Thomas Petzoldt


--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] New package test results available

2009-02-10 Thread Thomas Petzoldt

Thomas Petzoldt wrote:

Prof Brian Ripley schrieb:

We've added a column at

http://cran.r-project.org/web/checks/check_summary.html

of test results using the Sun Studio compiler: it is intended that 
these will be updated weekly.


The Sun Studio compiler is that used on Solaris: these runs were on 
the Linux version.  All the other platforms are using gcc 4, so this 
provides an opportunity for checking for use of gcc-specific features 
and also standards conformance (the Sun compilers have a long-time 
reputation for close conformance to the language standards).


There are known problems where packages use C++ or JNI interfaces 
(e.g. rgdal and EBImage) as the libraries and JVM were compiled under 
gcc's conventions (even though a Sun JVM is used).  About half the 
packages using rJava segfault, which seems to be a JNI issue.


Some packages use gcc-specific compiler flags:

  LogConcDEAD Matching amap geometry memisc taskPR

but the vast majority of the errors reported are C++ errors.  One 
class that may not be immediately obvious is the use of C headers in 
C++: you are supposed to write e.g.


#include <cmath>

NOT

#include <math.h>

Symptoms of this can be seen for packages

  BayesTree EMCC MCMCfglmm MarkedPointProcess Matching Matrix
  RQuantlib RandomFields Rcpp SoPhy compHclust dpmix igraph minet
  mixer modeest monomvm multic pcaPP rgenoud robfilter segclust
  simecol subselect




The reason can also be including another header (as done in simecol) that 
in turn includes <math.h>.


Do I understand it correctly that this means that including <math.h> is 
wrong in C++?
I read "Writing R Extensions" several times, but was not aware that this 
was a mistake. If I replace <math.h> by <cmath> then it works on my 
systems, but I want to be certain that there are no other side effects.


Thanks in advance for clarification!

Thomas Petzoldt




I changed it as requested, and include

#include 
#include 

... but still get the same error:

"simecol.cpp", line 224: Error: Overloading ambiguity between 
"floor(double)" and "std::floor(float)".

1 Error(s) detected.

http://www.r-project.org/nosvn/R.check/r-devel-linux-x86_64-sun/simecol-00install.html

What's wrong here? My code is very short and extremely simple, without 
any new objects (yet), but in fact only a "plain C with some C++" extension.


What am I doing wrong? Would it be necessary for us all to have a Linux 
installation with Sun Studio at hand?


Thanks a lot

Thomas P.

--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



[Rd] R-devel/Linux x64/Sun Studio 12: Problem with Matrix

2009-02-20 Thread Thomas Petzoldt
Dear Developers,

motivated by the new Sun Studio checks I compiled R-devel and several of
our packages with Sun Studio 12 on Fedora x64.

Everything worked fine and R-devel runs, with the exception of package
Matrix where compilation crashes with the following message. The error
occurs during building of the recommended packages and also if Matrix is
compiled separately:

[...]
CC -G -lCstd  -L/opt/sun/sunstudio12/lib/amd64 -o Matrix.so CHMfactor.o
Csparse.o TMatrix_as.o Tsparse.o init.o Mutils.o chm_common.o cs.o
cs_utils.o dense.o dgCMatrix.o dgTMatrix.o dgeMatrix.o dpoMatrix.o
dppMatrix.o dsCMatrix.o dsyMatrix.o dspMatrix.o dtCMatrix.o dtTMatrix.o
dtrMatrix.o dtpMatrix.o factorizations.o ldense.o lgCMatrix.o sparseQR.o
CHOLMOD.a COLAMD.a AMD.a -L/home/user/R/R-devel/lib -lRlapack
-L/home/user/R/R-devel/lib -lRblas
-R/opt/sun/sunstudio12/lib/amd64:/opt/sun/sunstudio12/lib/amd64:/opt/sun/lib/rtlibs/amd64:/opt/sun/lib/rtlibs/amd64
 -L/opt/sun/sunstudio12/rtlibs/amd64 -L/opt/sun/sunstudio12/prod/lib/amd64 
-lfui -lfai -lfsu -lmtsk -lpthread -lm 
/opt/sun/sunstudio12/prod/lib/amd64/libc_supp.a  
/lib64/libpthread.so.0: file not recognized: File format not recognized
make: *** [Matrix.so] Error 1
ERROR: compilation failed for package ‘Matrix’
* Removing ‘/home/user/R/R-devel/library/Matrix’

Can someone help me or give me a pointer to what I'm doing wrong? How can
I get/include the missing shared library?

Many thanks in advance

Thomas Petzoldt


#file: config.site

CC=cc
CFLAGS="-xO5 -xc99 -xlibmil -nofstore"
CPICFLAGS=-Kpic
F77=f95
FFLAGS="-O5 -libmil -nofstore"
FPICFLAGS=-Kpic
CXX=CC
CXXFLAGS="-xO5 -xlibmil -nofstore"
CXXPICFLAGS=-Kpic
FC=f95
FCFLAGS=$FFLAGS
FCPICFLAGS=-Kpic
LDFLAGS=-L/opt/sun/sunstudio12/lib/amd64
SHLIB_LDFLAGS=-shared
SHLIB_CXXLDFLAGS="-G -lCstd"
SHLIB_FCLDFLAGS=-G
SAFE_FFLAGS="-O5 -libmil"

platform x86_64-unknown-linux-gnu
arch x86_64  
os linux-gnu   
system x86_64, linux-gnu   
status Under development (unstable)
major 2   
minor 9.0 
year  2009
month 02  
day   20  
svn rev 47964   
language R
version.string R version 2.9.0 Under development (unstable) (2009-02-20
r47964)



Re: [Rd] R-devel/Linux x64/Sun Studio 12: Problem with Matrix

2009-02-21 Thread Thomas Petzoldt

Prof Brian Ripley wrote:

This seems to be a problem with your OS installation. I have

gannet% file /lib64/libpthread.so.0
/lib64/libpthread.so.0: symbolic link to `libpthread-2.9.so'
gannet% file /lib64/libpthread-2.9.so
/lib64/libpthread-2.9.so: ELF 64-bit LSB shared object, x86-64, version 
1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, 
not stripped


gannet% rpm -q --whatprovides /lib64/libpthread-2.9.so
glibc-2.9-3.x86_64

and of course building (current) Matrix works for me.



Dear Prof. Ripley,

O.K., thanks for your quick answer; it's good to know that it's not an 
obvious error in config.site. Maybe it's because I started from a rather 
minimal Fedora version, so I'll try to fix my installation.


Thank you

Thomas P.


--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] R-devel/Linux x64/Sun Studio 12: Problem with Matrix

2009-02-22 Thread Thomas Petzoldt

Just for the record:

Thomas Petzoldt wrote:

Prof Brian Ripley wrote:

This seems to be a problem with your OS installation. I have


I tested compilation on another Fedora 10 installation:

1) fresh installation from the installation DVD 
(Fedora-10-x86_64-DVD.iso instead of the harddisk installation based on 
the live CD)


2) online patches and updates

3) install of Java i386 packages instead of Java x64 because the sunstudio 
installer did not run with the default Java (was the same with the live CD)


4) downloaded R-devel svn rev 47981; rsync-recommended

5) PATH to sunstudio appended to default PATH in .bashrc;
   config.site as described in R-admin,
   note however that IMHO it should be "suncc"
   instead of "csunc" -- a typo in R-admin.texi ??

6) configure

7) make

=> everything compiled fine with the exception of Matrix which breaks 
with exactly the same error message as my former report.



gannet% file /lib64/libpthread.so.0
/lib64/libpthread.so.0: symbolic link to `libpthread-2.9.so'
gannet% file /lib64/libpthread-2.9.so
/lib64/libpthread-2.9.so: ELF 64-bit LSB shared object, x86-64, 
version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 
2.6.9, not stripped


gannet% rpm -q --whatprovides /lib64/libpthread-2.9.so
glibc-2.9-3.x86_64


I got exactly the same.


and of course building (current) Matrix works for me.


Not yet for me with Sun Studio 12; however everything was fine when 
switching back to the GNU compilers.


As said, just for the record. The installation was intended for checking 
our own packages which now compile well with both compilers.






Dear Prof. Ripley,

O.K., thanks for your quick answer; it's good to know that it's not an 
obvious error in config.site. Maybe it's because I started from a rather 
minimal Fedora version, so I'll try to fix my installation.


Thank you

Thomas P.



Thomas P.



--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



[Rd] CRAN package check on MacOS: sh: line 1: gs: command not found

2009-03-04 Thread Thomas Petzoldt

Dear R developers,

I recently observed a NOTE on several Mac OS X package checks:

sh: line 1: gs: command not found
!!! Error: Closing Ghostscript (exit status: 127)!
/usr/bin/texi2dvi: thumbpdf exited with bad status, quitting.


See for details:

http://www.r-project.org/nosvn/R.check/r-release-macosx-ix86/simecol-00check.html

or

http://www.r-project.org/nosvn/R.check/r-release-macosx-ix86/fxregime-00check.html


Does anybody know what's wrong here?

Thanks a lot

Thomas Petzoldt


--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY



Re: [Rd] question

2009-03-07 Thread Thomas Petzoldt

Patrick Burns wrote:

One idea of program design is that users
should be protected against themselves.

It is my experience that users, especially
novices, tend to over-split items rather than
over-clump items.  The fact that items are
returned by the same function call would
argue to me that there is a connection between
the items.


Patrick Burns
patr...@burns-stat.com
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of "The R Inferno" and "A Guide for the Unwilling S User")


Hi Gabor, Patrick, Ivo and vQ,

I agree with Patrick and Gabor that it is not needed. IMHO it is good 
design that a function (in a mathematical sense) returns ONE object, be 
it a single value, a list or an S3/S4 object. This can be passed to 
another function as a whole or can be split into its parts according to 
different needs. If only single parts are required, then I would suggest 
using accessor functions, preferably written as generics working on the 
returned S3 or S4 objects. I'm strongly against going back to the old S 
behaviour, and I wonder a little bit about this discussion. I like to 
have a clean workspace with only a few objects.


Thomas Petzoldt



[Rd] vignette index not linked into HTML help system for package

2009-03-16 Thread Thomas Petzoldt

Dear R developers,

I observed that the html help page index entry "Read overview or browse 
directory" for package vignettes is missing in recent R-devel.


This happened on two independent computers (WinXP Prof. SP3, German) 
with R-devel compiled from sources svn rev. 48125 resp. 48128
It's the same for my own packages and for more prominent ones as well 
(e.g. grid).


The vignettes and the index.html files exist and vignette() as well as 
browseVignettes() work as expected.


I have not found anything about this in NEWS or "Writing R extensions", 
which says:


"At install time an HTML index for all vignettes is automatically 
created from the \VignetteIndexEntry statements unless a file index.html 
exists in directory inst/doc. This index is linked into the HTML help 
system for each package."



Have I missed something?

Thanks a lot

Thomas Petzoldt



--
Thomas Petzoldt
Technische Universitaet Dresden
Institut fuer Hydrobiologie    thomas.petzo...@tu-dresden.de
01062 Dresden  http://tu-dresden.de/hydrobiologie/
GERMANY
