Re: [Rd] R.app (PR#7926)

2005-06-10 Thread Simon Urbanek
On Jun 10, 2005, at 11:03 AM, [EMAIL PROTECTED] wrote:

> R terminal in aqua interface exits with
>
> 2005-06-10 16:51:03.826 R[1464] *** -[NSCFType characterAtIndex:]:  
> selector not
> recognized
> 2005-06-10 16:51:03.827 R[1464] *** NSTimer discarding exception
> 'NSInvalidArgumentException' (reason '*** -[NSCFType  
> characterAtIndex:]:
> selector not recognized') that raised during firing of timer with  
> target 3cab80
> and selector 'kickstart:'
>
> Reproduceable:
>
> mistype "demo()" like "demo()(" short after promt shows up after  
> launching

This is very likely a previously reported problem, reproducible only on slow machines, which has been fixed in R.app build 1564. Please use the most recent build and tell us whether you can still reproduce the problem.

Thanks,
Simon

__
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R.app editor (PR#7928)

2005-06-10 Thread Simon Urbanek
You didn't use an appropriate subject, didn't provide a reproducible example, didn't read the documentation, and the problems you describe are most probably just user errors. (Hint: ever heard of dev.off()?)

Please stop spamming the bug report system, and read the posting guide!

Thanks,
Simon

On Jun 10, 2005, at 11:10 AM, [EMAIL PROTECTED] wrote:

> Full_Name: Christian Meisenbichler
> Version: 2.1.0a
> OS: 10.3.9
> Submission from: (NULL) (143.50.77.182)
>
>
> device pdf() produces empty files
>
> and device postscript() produces incompleete graphs



Re: [Rd] by should use match.fun

2005-06-12 Thread Simon Urbanek
On Jun 12, 2005, at 3:21 PM, Gabor Grothendieck wrote:

> On 6/12/05, Liaw, Andy <[EMAIL PROTECTED]> wrote:
>
>> I don't get the point.  ?by says:
>>
>
> The point is that all other functions of this sort including apply,  
> sapply,
> tapply, lapply work like that so 'by' ought to as well.
>
> Here is the example (changed to use iris) where I noticed it.   
> Suppose we
> want to create a list of rows:
>
> by(iris, row.names(iris), "(")

Umm... why don't you just use

by(iris, row.names(iris), `(`)

In general I consider passing functions as text unnecessary - the only use I can think of is constructing function names from strings/data, and I'm not sure that is a good idea either (it causes considerable performance overhead) ... just my 2 pennies ...
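A short illustration of the two calling styles (a sketch; the string form is what the thread argues by() should also accept via match.fun()):

```r
## Passing the function object directly works everywhere:
by(iris, row.names(iris), `(`)      # list of one-row data frames

## The apply family runs character arguments through match.fun(),
## which is why the string form works there:
sapply(c(1, 4, 9), "sqrt")          # 1 2 3
```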

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R 2.1.1 slated for June 20

2005-06-14 Thread Simon Urbanek
On Jun 14, 2005, at 6:16 PM, Marc Schwartz wrote:

> Interesting. Did you do anything different on the ./configure line?
>
> $ ls -l  /usr/bin/f95
> lrwxrwxrwx  1 root root 8 Jun 13 21:18 /usr/bin/f95 -> gfortran
>
> I just tried it again (having installed some FC updates) and I  
> still get g77...

g77 is probed before f95, so if you have both, g77 is picked unless you set F77 explicitly. The exact probe sequence is:

g77 f77 xlf frt pgf77 fort77 fl32 af77 f90 xlf90 pgf90 epcf90 f95 fort xlf95 ifc efc pgf95 lf95 gfortran (R 2.1.1 beta)
g77 fort77 f77 xlf frt pgf77 cf77 fl32 af77 f95 fort xlf95 ifort ifc efc pgf95 lf95 gfortran ftn g95 f90 xlf90 pgf90 pghpf epcf90 fc (R-devel)

I guess Peter simply didn't install g77.

Cheers,
Simon



Re: [Rd] 2.1.0a for Macs (PR#7952)

2005-06-16 Thread Simon Urbanek
On Jun 16, 2005, at 5:30 PM, [EMAIL PROTECTED] wrote:

> I installed the program about a week ago and immediately I started  
> getting a warning messages particularly when browsing my files  
> using ls(), search(), gc(),

This was fixed in R.app build 1564 and was mentioned several times on the Mac list. Please update your R.app. If you still have problems with the most recent version (as of today it's build 1596), please let us know.

> More recently, R has begun to crash for no particular reason (no  
> large program running).  Often when I then try to restart R 2.1.0a  
> crashes during start up and in order to get it functioning again I  
> have to reinstall!

Can you please send us the crash report and/or the console output? Without any evidence, reports like yours are pretty much useless (please read the documentation and FAQ!). Also note that R 2.1.1 will be released very soon, so it's important to report bugs against R 2.1.1 beta and not old versions.

> The first time this happened I decided to go back to an earlier  
> version of R that I knew would be stable but following binary  
> compilation no gui is installed on my computer and I can't run the  
> program.

What do you mean by "binary compilation"? If you want to install an older version, all you have to do is get the 2.1.0 image and install it. I doubt this will solve any problems, but it's perfectly legitimate to do so. You can also safely use older versions of R.app if you want - 2.1.0 and 2.1.0a are by definition binary compatible. R itself didn't change substantially between 2.1.0 and 2.1.0a (only a couple of bugs were fixed, most notably package installation). What did change was R.app, and as I already mentioned there are good reasons for using the most recent version.

> If you could please advise on how to deal with these problems  
> (particularly the corruption of my system by 2.1.0a)

You didn't mention any system corruption so far - you'll have to  
explain this a bit ...

Cheers,
Simon



Re: [Rd] R worked once, now will not open. Works in console, but won't graph. (PR#7953)

2005-06-16 Thread Simon Urbanek
Richard,

thank you for the report. From the log it seems to be a problem with your preferences. Please delete the file ~/Library/Preferences/org.R-project.R.plist (e.g. type
rm ~/Library/Preferences/org.R-project.R.plist
in Terminal, or simply delete that file using Finder) and let me know if that fixes the problem (actually, if it's easy for you to do, you could send me the file before you delete it, in case it turns out to be the cause). In addition you may want to download the latest version of R.app from http://www.rosuda.org/R/nightly/ .

Cheers,
Simon



Re: [Rd] Failed make (PR#7978)

2005-06-29 Thread Simon Urbanek
On Jun 29, 2005, at 1:52 PM, [EMAIL PROTECTED] wrote:

> I downloaded R v2.1.1 earlier this morning to compile under Fedora  
> Core 4.
> It compiled without incident, but 'make check' failed. Below is the  
> relevant
> part of its report. Is this a known problem?

My guess is that you're using gfortran instead of g77 and you neither set GFORTRAN_STDIN_UNIT to -1 nor used FLIBS to force static Fortran libraries - please see B.5.1 "Using gfortran" in "R Installation and Administration".
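The settings referred to above look roughly like this (a sketch of the B.5.1 advice, not a verbatim quote; the library path is illustrative and depends on your gfortran installation):

```sh
## before configuring/checking R with gfortran:
export GFORTRAN_STDIN_UNIT=-1   # keep gfortran's runtime off stdin during 'make check'
## or force a static Fortran runtime instead, e.g.:
## export FLIBS=/usr/lib/gcc/i386-redhat-linux/4.0.0/libgfortran.a
```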

Cheers,
Simon



Re: [Rd] How difficult is it to wrap a large C++ library with R?

2005-07-04 Thread Simon Urbanek
On Jul 4, 2005, at 1:01 PM, Bo Peng wrote:

> From what I read from R website, it is easy to wrap individual C/C+ 
> + functions but not C++ classes.

There were a couple of posts about this recently:

https://stat.ethz.ch/pipermail/r-devel/2005-April/subject.html

Look for the posts concerning C++, overloading methods and objects in R. If I remember correctly, "Ali" had a prototype automatic wrapper for large classes (he wasn't happy about its performance, but that's another story ;) - see his 'speeding up the library' posts).

Cheers,
Simon



Re: [Rd] allocation of large matrix failing

2005-07-12 Thread Simon Urbanek
James,

On Jul 12, 2005, at 5:05 PM, James Bullard wrote:

> Hello, this is probably something silly which I am doing, but I  
> cannot understand why this allocation is not happening.

I suspect it's something else - the code you sent us works without problems for me:

 > .C("foo",as.integer(333559))
numHits:333559
before allocation...
allocated oligo list...
entering looop...
[[1]]
[1] 333559

 > gc()
  used (Mb) gc trigger (Mb) max used (Mb)
Ncells 162348  4.4 35  9.4   35  9.4
Vcells  60528  0.51453055 11.1  1562032 12.0

I have also tested it with a 100-times larger vector and it works as well:

 > .C("foo",as.integer(33355900))
numHits:33355900
before allocation...
allocated oligo list...
entering looop...
[[1]]
[1] 33355900

 > gc()
  used (Mb) gc trigger  (Mb)  max used   (Mb)
Ncells 162526  4.4 35   9.4359.4
Vcells  60536  0.5  120333512 918.1 150162111 1145.7

This was on both Debian Linux and OS X, but that shouldn't really  
matter I suppose... (and I don't see why it should fail). If you  
overdo it with the size you can get "Error: cannot allocate vector of  
size xxx", but it won't hang, either.

Cheers,
Simon



Re: [Rd] segfault with grid and null unit (PR#8014)

2005-07-19 Thread Simon Urbanek
On Jul 19, 2005, at 6:39 AM, [EMAIL PROTECTED] wrote:

> Sourcing this code causes the R GUI to crash. I've enclosed a  
> partial backtrace
> from the crash below.

I can confirm that this crashes on both OS X (current R-devel) and Linux (2.1.1). The detailed stack trace (with debug symbols) from OS X follows.

However, I can't find any documentation on "null" units (?unit doesn't mention them), so I wonder whether this is just a missing sanity check or something more profound ...

#0  Rf_isObject (s=0x6801) at ../../../../R-devel/src/main/util.c:616
#1  0x003060e4 in Rf_inherits (s=0x6801, name=0x966ad80 "unit.arithmetic") at ../../../../R-devel/src/main/util.c:624
#2  0x096651b8 in pureNullUnit (unit=0x6801, index=0, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/unit.c:270
#3  0x096651fc in pureNullUnit (unit=0x10ff78e8, index=2, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/unit.c:273
#4  0x09662768 in findRelWidths (layout=0x108d7528, relativeWidths=0x11b0b4f8, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/layout.c:70
#5  0x09663720 in calcViewportLayout (viewport=0x10bfad60, parentWidthCM=12.699, parentHeightCM=12.699, parentContext={xscalemin = 0, xscalemax = 1, yscalemin = 0, yscalemax = 1}, parentgc=0xbfff76f0, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/layout.c:463
#6  0x09669888 in calcViewportTransform (vp=0x10bfad60, parent=0x10c6af38, incremental=3221190720, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/viewport.c:351
#7  0x0965cefc in doSetViewport (vp=0x10bfad60, topLevelVP=157724032, pushing=281456440, dd=0x113eff0) at ../../../../../../R-devel/src/library/grid/src/grid.c:185
#8  0x0965d40c in L_setviewport (vp=0x10bfad60, hasParent=0x10d4da68) at ../../../../../../R-devel/src/library/grid/src/grid.c:302
#9  0x0024a0b0 in do_dotcall (call=0x119008d8, op=0x966ad80, args=0x18681b0, env=0x80808080) at ../../../../R-devel/src/main/dotcode.c:788
#10 0x0024da14 in do_dotcallgr (call=0x119008d8, op=0x18c5794, args=0x11902244, env=0x11900ccc) at ../../../../R-devel/src/main/dotcode.c:1468
#11 0x00264964 in Rf_eval (e=0x119008d8, rho=0x11900ccc) at ../../../../R-devel/src/main/eval.c:405

Cheers,
Simon


> ==
> require(grid)
>
> sometext = "hello there\nthis is a \ntest!"
>
> pushViewport(
> viewport(
> layout=grid.layout(1,3,
> widths=unit.c(
> unit(1,"strwidth",sometext) +
> unit(2,"cm"),
> unit(1,"null")
> )
> )
> )
> )
> ==
>
> Date/Time:  2005-07-19 11:35:30.950 +0100
> OS Version: 10.4.2 (Build 8C46)
> Report Version: 3
>
> Command: R
> Path:/Volumes/George/MyApplications/R.app/Contents/MacOS/R
> Parent:  WindowServer [146]
>
> Version: 1.12 (1622)
>
> PID:10493
> Thread: 0
>
> Exception:  EXC_BAD_ACCESS (0x0001)
> Codes:  KERN_INVALID_ADDRESS (0x0001) at 0x6801
>
> Thread 0 Crashed:
> 0   libR.dylib   0x00303d1c Rf_isObject + 0 (util.c:623)
> 1   grid.so  0x060c6240 pureNullUnit + 40 (unit.c:270)
> 2   grid.so  0x060c6284 pureNullUnit + 108 (unit.c:273)
> 3   grid.so  0x060c3894 findRelWidths + 60 (layout.c:69)
> 4   grid.so  0x060c484c calcViewportLayout + 172 (layout.c:464)
> 5   grid.so  0x060ca888 calcViewportTransform + 1296 (viewport.c:356)
> 6   grid.so  0x060be0a0 doSetViewport + 256 (grid.c:200)
> 7   grid.so  0x060be5ac L_setviewport + 76 (grid.c:311)
> 8   libR.dylib   0x00249318 do_dotcall + 652 (dotcode.c:770)
> 9   libR.dylib   0x0024cc7c do_dotcallgr + 80 (dotcode.c:1450)
> 10  libR.dylib   0x00263b24 Rf_eval + 1536 (eval.c:405)
> 11  libR.dylib   0x00265b1c do_set + 224 (eval.c:1309)
>



Re: [Rd] mac os x crashes with bioconductor microarray code (PR#8013)

2005-07-20 Thread Simon Urbanek
On Jul 19, 2005, at 12:34 AM, [EMAIL PROTECTED] wrote:

> R(1763) malloc: *** vm_allocate(size=346857472) failed (error code=3)
> R(1763) malloc: *** error: can't allocate region

As the error says, you're obviously running out of contiguous wired  
memory (RAM).

> Any ideas? My computer has 2gb of RAM and 100 gb free. I would  
> think that analyzing 42 microarrays would not be too difficult.

You didn't tell us anything about the size of the data - and it seems to be of considerable size, because it fails on allocating ca. 350MB at once. Whether what you do is difficult or not, I don't know, but you obviously don't have enough memory for what you're trying to do. Basically that means you should reconsider your approach. Once you provide more information you may want to ask Bioconductor users about this issue, but it's definitely not a bug in R.

Cheers,
Simon



Re: [Rd] Testing for English locale

2005-08-01 Thread Simon Urbanek
On Aug 1, 2005, at 10:20 AM, John Fox wrote:

> Is there a platform-independent way to test whether R is running in  
> an English locale?

I suppose the following should work:

Sys.getlocale("LC_CTYPE") == "C" || length(grep("^en", Sys.getlocale("LC_CTYPE"), TRUE)) > 0

Basically, unix platforms will have "C" or "en_..."; Windows has "English...". You could make the check more strict depending on the platform if desired ...
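Wrapped up as a helper (the function name is mine, not from the thread; the extra branch follows the "English..." remark above):

```r
is.english.locale <- function() {
  loc <- Sys.getlocale("LC_CTYPE")
  loc == "C" ||                         # plain C locale on unix
    length(grep("^en", loc)) > 0 ||     # en_US, en_GB, ... on unix
    length(grep("^English", loc)) > 0   # "English_United States..." on Windows
}
```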

Cheers,
Simon



Re: [Rd] call fortran in R

2005-08-04 Thread Simon Urbanek
On Aug 4, 2005, at 1:38 PM, Sébastien Durand wrote:

> Ok,  I am presently updating my system.
>
> How do you set
>   setenv gcc /usr/local/bin/gfortran.

That won't help even if you do it in bash - this is wrong! (F77=gfortran is what may help if you want to re-compile R with gcc4.)

If you are using the CRAN binary, you cannot use gfortran! gcc 3 and 4 are not compatible.

With the stock CRAN binary and the supplied g77 (Tiger on a G5), a simple (silly) example:

gammu:urbanek$ cat fts.f
      subroutine ffoo(a)
      double precision a(1)

      a(1) = 1.0d0
      return
      end

gammu:urbanek$ R CMD SHLIB fts.f
g77 -fno-common -g -O2 -c fts.f -o fts.o
gcc-3.3 -bundle -flat_namespace -undefined suppress -L/usr/local/lib -o fts.so fts.o -L/usr/local/lib/gcc/powerpc-apple-darwin6.8/3.4.2 -lg2c -lSystem -lcc_dynamic -framework R

gammu:urbanek$ nm fts.so |grep T
0fec T _ffoo_

R> dyn.load("fts.so")
R> .Fortran("ffoo",as.double(10))
[[1]]
[1] 1

R> is.loaded(symbol.For("ffoo"))
[1] TRUE
R> symbol.For("ffoo")
[1] "ffoo_"

So ... there must be something fundamentally wrong with your setup, otherwise I see no reason why you should have any problems. Maybe you should send us the code plus an exact transcript of your attempt...

Cheers,
Simon



Re: [Rd] extra parentheses cause death of kernel (PR#8094)

2005-08-24 Thread Simon Urbanek
Mickey,

On Aug 24, 2005, at 2:31 PM, [EMAIL PROTECTED] wrote:

> I try to type quickly, and sometimes I make mistakes that cost me more
> than I think they should...  It appears that any time I quickly type a
> sequence such as:
>
> quartz()
>
> I am rewarded with the following text in red:

You're using an outdated version - please update to 2.1.1; that problem (and many others) has been fixed in the meantime (c.f. the posting guide about bugs). Before reporting bugs in R.app you should also try the nightly build binary.

As was mentioned several times on the lists, it is highly recommended to update from the version you're using, because it had several issues that were fixed shortly afterwards.

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Free-form to fixed-form Fortran

2005-08-26 Thread Simon Urbanek
Aleši,

On Aug 26, 2005, at 10:36 AM, Aleš Žiberna wrote:

> I have writen some subrutines in Free-form Fortran. I would like to  
> includ
> them in a package, which I would like to build on WinXP. I have all
> suggested tools/programs for bulding R packages on Windows (except  
> latex).
>
> What is the best way of using these subrutines? Does sombody mybe  
> know any
> translation tools for converting Free-form to fixed-form Fortran?

If your only concern is to build it, you may try using g77 with -ffree-form. This should work with GNU Fortran. However, that limits the availability of your package to GNU compilers only.

Cheers,
Šimon



Re: [Rd] RServe initialization problem

2005-08-29 Thread Simon Urbanek
Joel,

On Aug 29, 2005, at 5:57 AM, joelarrais wrote:

> I want to use the R functionalities in my Java program. I found the  
> Rserve
> that appear to suite my requirements but I'm facing some configuration
> problems.
>
> I' following the web page tutorial
> (http://stats.math.uni-augsburg.de/Rserve/doc.shtml) but I face the  
> above
> problem:
>
> C:\Program Files\R\rw2010\bin>R CMD RSERVE
>
> Can't open perl script "C:\PROGRA~1\R\rw2010/bin/RSERVE": No such  
> file or
> directory

You are using Windows, not unix, so you should just run Rserve.exe - please see the documentation, which specifically distinguishes Windows from unix. You may also consider reading the release notes for the Windows version of Rserve, which discuss the differences.

Cheers,
Simon



Re: [Rd] FW: RServe initialization problem

2005-08-29 Thread Simon Urbanek
On Aug 29, 2005, at 6:04 AM, joelarrais wrote:

> I want to use the R functionalities in my Java program. I found the  
> Rserve
> that appear to suite my requirements but I'm facing some configuration
> problems.
> I' following the web page tutorial
> (http://stats.math.uni-augsburg.de/Rserve/doc.shtml) but I face the  
> above
> problem:
>
> C:\Program Files\R\rw2010\bin>R CMD RSERVE
> Can't open perl script "C:\PROGRA~1\R\rw2010/bin/RSERVE": No such  
> file or
> directory

I'm not quite sure why this e-mail was also sent to R-devel - please see the posting guide!

Also, a closer look at the documentation you mention would reveal that it strictly distinguishes between unix and Windows systems - on the latter you have to run the Rserve.exe application instead of using R CMD xxx.

Cheers,
Simon



Re: [Rd] A question on R memory management in .Fortran() calls under Windows

2005-09-09 Thread Simon Urbanek
Simone,

On Sep 9, 2005, at 1:04 PM, Simone Giannerini wrote:

> Dear R community,
>
> I have a question on how R manages memory allocation in .Fortran()  
> calls under Windows.
> In brief, apparently, it is not possible to allocate large matrices  
> inside a Fortran subroutine

I suspect that this is a problem with your compiler, not R, because it works without problems for me:

 > dyn.load("foo.so")
 > M=10
 > N=10
 > X=matrix(1,M,N)
 > .Fortran("foo",X,as.integer(M),as.integer(N),S=as.double(0))$S
[1] 100
 > .Fortran("foobis",as.integer(M),as.integer(N),S=as.double(0))$S
[1] 100
 > M=3000
 > N=100
 > X=matrix(1,M,N)
 > .Fortran("foo",X,as.integer(M),as.integer(N),S=as.double(0))$S
[1] 3e+05
 > .Fortran("foobis",as.integer(M),as.integer(N),S=as.double(0))$S
[1] 3e+05
 > M=1
 > N=1
 > X=matrix(1,M,N)
 > .Fortran("foo",X,as.integer(M),as.integer(N),S=as.double(0))$S
[1] 1e+08
 > .Fortran("foobis",as.integer(M),as.integer(N),S=as.double(0))$S
[1] 1e+08

Tested on
PC1: Win XP SP2, AMD64 3000+, 1GB RAM, gfortran 4.1.0 (20050902)
PC2: OS X, G5 1.8, 1GB RAM, gfortran 4.0.1

Cheers,
Simon



Re: [Rd] A question on R memory management in .Fortran() calls under Windows

2005-09-12 Thread Simon Urbanek
Simone,

On Sep 12, 2005, at 4:30 AM, Simone Giannerini wrote:

> yes, CVF allocates automatic objects on the stack and apparently  
> there is no way of changing it.

Yes, that's bad news.

> By the way, increasing the stack of the fortran process when  
> linking does not solve the problem

In general the stack size is also governed by system limits, so you may need to increase those as well - but still, that won't really solve your problem.

>> I'd say your only reasonable workarounds are to tell your compiler to
>> use the heap for the local matrix allocation (if that's possible),  
>> or do
>> your allocations in R.
>>
>
> I might follow the second way, in any case, I am considering  
> switching to Linux, I have also  considered changing compiler under  
> Win,  any suggestions on the choice would be welcomed.

As Duncan mentioned, g77 is your friend if you can convert your code to f77. If you don't have that option, you're partially on your own. GNU Fortran 95 (gfortran) may be an option, as it exists for both unix and Windows (although not as part of MinGW), but R currently doesn't provide a .f90 target, so you'll need to add your own small Makevars.
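A minimal src/Makevars along those lines might look like this (a sketch, assuming gfortran is on the PATH; flags are illustrative and R of that era had no built-in .f90 rule):

```make
## src/Makevars -- explicit rule for free-form Fortran sources
## (add/adjust -fPIC and optimization flags as your platform requires)
%.o: %.f90
	gfortran -fPIC -O2 -c $< -o $@
```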

Cheers,
Simon



Re: [Rd] NUMERIC_POINTER question

2005-09-13 Thread Simon Urbanek
Eryk,

On Sep 13, 2005, at 2:26 PM, nwew wrote:

> printf("%f\n",NUMERIC_POINTER(mat)[1]);
> [...]
> However it prints
> 0.
> if [EMAIL PROTECTED] are integers ( [EMAIL PROTECTED]<-matrix(1:12,3,4) ).
>
> Can anyone explain it to me why?
> I thought that NUMERIC_POINTER makes it clear that i expect  
> datatype numeric.
> (Why otherwise the distinction with INTEGER_POINTER)

You answered your own question - NUMERIC_POINTER expects that the SEXP you pass to it is numeric (double). When you use it, it's your responsibility to make sure that the SEXP is numeric and not integer or anything else. You probably want to use AS_NUMERIC to ensure that. [btw: NUMERIC_POINTER() is a compatibility macro for REAL(), and AS_NUMERIC(x) for coerceVector(x, REALSXP)].

Also you should be aware that C uses 0-based indices, so NUMERIC_POINTER(mat)[1] accesses the 2nd element of the vector.
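In code, the coercion advice above looks roughly like this (a sketch against the R C API; the function name is illustrative and error handling is omitted):

```c
#include <Rinternals.h>
#include <Rdefines.h>

SEXP second_element(SEXP mat)
{
    /* coerce to double first -- handles matrix(1:12,3,4) and friends */
    PROTECT(mat = AS_NUMERIC(mat));          /* == coerceVector(mat, REALSXP) */
    double val = NUMERIC_POINTER(mat)[1];    /* 0-based: this is element 2 */
    SEXP ans = PROTECT(ScalarReal(val));
    UNPROTECT(2);
    return ans;
}
```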

Cheers,
Simon



Re: [Rd] Project suggestion: Interface between R and graphic devices

2005-10-09 Thread Simon Urbanek
Knut,

On Oct 9, 2005, at 1:19 PM, Knut Krueger wrote:

>> If you prefer SciViews, JGR, R Commander, etc., then you should  
>> expect
>> some delay before R releases and compatible GUI releases. Indeed, one
>> must say that R Commander has always been released on time with the
>> latest R version, as fart as I know, and JGR is very close to.
>>
>
> and this is the point that I thought maybe there could be some   
> improvement with an universal API for GUIs


As Brian pointed out, there is such an API, and as with all APIs it is allowed to improve between releases. All projects allow this, including the unix UI projects you mentioned. In most cases updating the GUI for a new R version consists of re-compiling the GUI, that's all. Again, this is always necessary, independent of the API, for various reasons (some of them platform-dependent, such as the location of the R dynamic library changing with a new R version).

The API changes usually add new functionality and hence new possibilities. As Philippe was saying, the delays are then not because of API changes, but because the GUI authors want to take advantage of that new functionality. (In the last 6 releases of R I remember only two restructuring changes in the API, both taking less than 5 lines of code to adapt to - modulo graphics devices, of course.)

I didn't check all the GUIs, but for JGR I can say that you can get it immediately at the time of an R release if you compile it from sources. From my point of view, building and testing installers and binaries takes far more time than fixing the code for API changes. If you are willing to build and test binaries during the R beta phase, you're encouraged to do so - that would make the GUI available right on time for the R release.

Cheers,
Simon



Re: [Rd] Control R from another program (written in Delphi)

2005-10-17 Thread Simon Urbanek
Rainer,

On Oct 17, 2005, at 1:13 PM, Rainer M. Krug wrote:

> At the moment I am using the (D)COM interface but as I would like  
> to run R on a Linux Cluster, thits is not an option any more.
> What is the easiest way of copntrolluing R over the network? I  
> thought  about sockets, but I am a little bit stuck.

We have been using snow+rpvm very successfully. There are tons of other approaches and tools as well (mosix, Rmpi, snowFT, ...). I would start with snow, because it is very flexible and even offers a socket-only solution when necessary.
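A minimal sketch of the snow socket mode mentioned above (the host list is illustrative; snow must be installed on each node):

```r
library(snow)
## start two worker R processes over plain sockets - no PVM/MPI needed
cl <- makeSOCKcluster(c("localhost", "localhost"))
clusterApply(cl, 1:4, function(i) i^2)   # spread work across the nodes
stopCluster(cl)
```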

Cheers,
Simon



Re: [Rd] Socks under R

2005-10-19 Thread Simon Urbanek
Rainer,

On Oct 19, 2005, at 3:29 PM, Rainer M. Krug wrote:

> when I use
>
> con1 <- socketConnection(...)
>
> in R and want to send text from another application written in  
> Delphi to  R, do I just have to send the text or do I have to  
> implement more control characters and so on?

Sockets are just reliable bi-directional communication channels (in  
the default mode), so their use is entirely up to you (both on R side  
and other application's side).

> Is
>
> con1 <- socketConnection(port=6011, server=TRUE)
> writeLines("plot(rnorm(100))", con1)
>
> just sending the text in "plot(rnorm(100))" to the socket or is it  
> doing more (R specific protocoll for socks comminication)?

It is basically equivalent to using "send" on the socket API level [i.e. the above effectively does send(s, "plot(rnorm(100))\n", 17, 0)], so it's up to the other side to receive it properly. There is no "R-specific protocol" - socket connections in R use regular TCP (neither red nor white "socks" ;)), so the literature on socket programming is your friend (e.g. using readLines(con1) for incoming traffic in your example would be a bad idea).
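On the R side a minimal receiving setup is just line-oriented reading (a sketch with illustrative names; evaluating text received from a socket is only sensible on a trusted network):

```r
## R as the server: block until the Delphi client connects,
## then evaluate one command per received line.
con1 <- socketConnection(port = 6011, server = TRUE, blocking = TRUE)
repeat {
  cmd <- readLines(con1, n = 1)
  if (length(cmd) == 0) break          # client closed the connection
  eval(parse(text = cmd))
}
close(con1)
```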

Cheers,
Simon



Re: [Rd] typo in browse.pkgs

2005-11-03 Thread Simon Urbanek
Günther,

On Nov 3, 2005, at 10:24 AM, G. Sawitzki wrote:

> See below.
>
>   gs.
>
> Error in browse.pkgs("CRAN", "binary") : couldn't find function
> "avaliable.packages"

You seem to have an old version of R.app, because the R GUI released with R 2.2.0 has a work-around for that problem. Please update your R.app either from the R release or from the nightly builds page:
http://research.att.com/~urbanek/R/

As Brian was saying, the error was fixed in R immediately after the release - strangely enough, no one reported it during the alpha and beta cycle although both the GUI and R binaries were available for download :(.

Cheers,
Simon



Re: [Rd] Alpha and Beta testing of R versions

2005-11-04 Thread Simon Urbanek
On Nov 4, 2005, at 6:58 AM, Prof Brian Ripley wrote:

> Martin's point is generally very valid, but in the case of the  
> 2.2.0 release remarkably few of the bugs found since release were  
> new in 2.2.0.
> One thing we have learnt is that none of the testers seem to look  
> at HTML help (which accounts for 2 of the 4 2.2.0-only bugs I  
> counted).
>
> What we need most is persistent help in testing each release,  
> especially on unusual platforms.  How do we `incentivize' that?

I suspect that in the particular case of OS X the problem was probably visibility - it was the first time ever that nightly OS X binaries were available during the alpha/beta phase (afaict), but I'm not sure how many people knew about it. I think I posted about it on R-SIG-Mac during some discussion, but maybe I should have announced it more prominently somewhere. I'm not even sure whether there was a link from the main page on CRAN. I would think that OS X users are more likely to rely on binaries, so the above is more relevant there than on other platforms.

>> - being listed as helpful person in R's 'THANKS' file
>>  {but that may not entice those who are already listed},
>>  or even in the NEWS of the new relase
>>  or on the "Hall of fame of R beta testers"

The latter sounds good to me, although I'm not sure how many of our  
users are striving for fame ;).

Cheers,
Simon



Re: [Rd] Memory allocation (PR#8304)

2005-11-13 Thread Simon Urbanek
Hans,

this is not a bug! You're simply running out of memory, as the message tells you (allocating ca. 570MB? That's a lot...). You should consider re-phrasing the problem (preferably) or getting more memory and/or using a 64-bit version of R where applicable.

Cheers,
Simon

On Nov 13, 2005, at 3:47 PM, [EMAIL PROTECTED] wrote:

> Full_Name: Hans Kestler
> Version: 2.2.0
> OS: 10.4.3
> Submission from: (NULL) (84.156.184.101)
>
>
>> sam1.out<-sam(raw1[,2:23],raw1.cl,B=0,rand=124)
>
> We're doing 319770 complete permutations
>
> Error: cannot allocate vector of size 575586 Kb
> R(572,0xa000ed68) malloc: *** vm_allocate(size=589402112) failed  
> (error code=3)
> R(572,0xa000ed68) malloc: *** error: can't allocate region
> R(572,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug
> R(572,0xa000ed68) malloc: *** vm_allocate(size=589402112) failed  
> (error code=3)
> R(572,0xa000ed68) malloc: *** error: can't allocate region
> R(572,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug
>



Re: [Rd] Help finding some code in the R source code

2005-11-13 Thread Simon Urbanek
Greg,

On Nov 13, 2005, at 7:03 PM, Greg Evans wrote:

> I'm trying to write some Python code to check if a string text  
> contains a complete R statement.  I'm hoping someone will be able  
> to point me to the right place in the R source code, so I can use  
> it as a starting point.

What happens internally is that R runs the expression through the
parser. When the parser returns PARSE_INCOMPLETE, you know that the
expression is not complete and thus more input is needed. If you are
directly interfacing R from Python, you can easily do exactly the
same thing.
However, if you want a solution in pure Python, it may be more
difficult, as it would amount to re-writing the R parser in
Python... (or some 'light' version of it..).
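The same check can be demonstrated from R itself: `parse()` signals an "unexpected end of input" error for incomplete statements. The helper below is a hypothetical sketch of that logic at the R level, not the PARSE_INCOMPLETE path in the C API:

```r
# Returns TRUE if `text` parses as a complete statement, FALSE if more
# input is needed, and re-signals any other (genuine) syntax error.
is_complete <- function(text) {
  tryCatch({
    parse(text = text)
    TRUE
  }, error = function(e) {
    if (grepl("unexpected end of input", conditionMessage(e)))
      FALSE
    else
      stop(e)
  })
}

is_complete("demo()")   # TRUE
is_complete("demo(")    # FALSE: the parser wants more input
```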

Cheers,
Simon



Re: [Rd] external pointers

2005-12-11 Thread Simon Urbanek
Mark,

On Dec 12, 2005, at 7:48 AM, <[EMAIL PROTECTED]> wrote:

> (i)The first '.C' call uses Delphi code to allocate (using Delphi's  
> own memory manager) and set up a persistent object that R doesn't  
> know about. The Delphi code then returns an "opaque" integer-valued  
> handle to R, which is the address of the object in the Delphi DLL's  
> world.

That's a bad idea for a couple of reasons, the main one being that an
integer is not guaranteed to be able to hold a pointer - it won't
work on any 64-bit platform. The second drawback is that you have no way
to link the life of the R object to your Delphi object, because there
is no way the gc will tell you that the object is gone. This will lead to
memory leaks. [Been there, done that ;)] Both issues are solved by
external pointers.

> (iii) There is a final cleanup '.C' call which deallocates the  
> persistent object. I sometimes also automate the destruction in the  
> cleanup code of the DLL, just in case the R user forgets to cleanup.

How do you make sure that no one uses the now-invalid integer value?
There can be many copies of your proxy object around and they all
point to nirvana ...
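The gc-linked cleanup that external pointers provide can be illustrated at the R level with `reg.finalizer()` (external pointers register their finalizers the same way on the C side). This is an illustrative sketch only - the Delphi comment marks where a real binding would free its foreign object:

```r
cleaned <- FALSE

h <- new.env()                      # stand-in for a proxy object
reg.finalizer(h, function(e) {
  # a real binding would free the foreign (e.g. Delphi) object here
  cleaned <<- TRUE
})

rm(h)            # drop the last reference ...
invisible(gc())  # ... and let the collector run the finalizer
cleaned          # TRUE
```

The key point is that the cleanup is driven by the collector, so there is no stale handle left for user code to misuse.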

Cheers,
Simon



Re: [Rd] AppleScript commands don't execute until mouse over console window (PR#8405)

2005-12-16 Thread Simon Urbanek
Thanks, Jonathan! I have now committed a slight variation of your  
patch in the current Mac-GUI.

Cheers,
Simon

On Dec 16, 2005, at 7:30 AM, [EMAIL PROTECTED] wrote:

> I am sending commands to R via AppleScript (specifically, from  
> SubEthaEdit to instruct it to source the file I'm currently  
> editing). R doesn't respond to the AppleScript until I move the  
> mouse over R's console window.
>
> The following patch to the Mac GUI (against current svn trunk)  
> resolves the problem, by posting a dummy event to wake up the event  
> queue after the command has been stuffed into the input buffer.



Re: [Rd] Build error on Mac OS X

2005-12-21 Thread Simon Urbanek
Hervé,

On Dec 21, 2005, at 6:12 PM, Herve Pages wrote:

> I don't get that problem with R-devel daily snapshots from before  
> 2005-12-14
> and I get it with (almost) all snaphots between 2005-12-14 and today.

Strange - I have only failure on 2005/12/17 - all others built fine  
(same system: 7.9.0). Did you try the SVN checkout? I can't test the  
current tar-ball on the Panther machine, because it's running the  
nightly builds right now...

(FWIW those configure parameters are both superfluous on OS X as of R  
2.2.0)

Cheers,
Simon



Re: [Rd] cairo anyone?

2005-12-23 Thread Simon Urbanek
Byron,

On Dec 22, 2005, at 8:07 PM, Byron Ellis wrote:

> Has anyone taken a shot at a Cairo graphics device yet?

*opens a drawer* You can try this:
http://www.rosuda.org/R/Cairo_0.1-1.tar.gz

I'm using it for generating bitmap files (PNG), that's why only the
image back-end is used. It should be easy to add other formats like
PDF or maybe even other surfaces like Win32, Quartz or XLib, because
the back-end part is modular, but I didn't bother (yet?).

Cheers,
Simon



Re: [Rd] Rserve setSEXP command

2005-12-26 Thread Simon Urbanek
Hi Chris,

On Dec 26, 2005, at 5:57 AM, Chris Burke wrote:

> Can any valid SEXP expression be given in the setSEXP command?
>
> I tried to send a numeric matrix by including the dim attribute  
> with the data, but get an error code 68 (some parameters are  
> invalid). Is the dim attribute not supported by setSEXP? If so,  
> does this mean a matrix should be sent as a list, then a dim  
> command sent in a second step?

It's a combination of a bug and a missing feature. The bug is that the
attribute of an expression is not decoded at all. The missing feature
is that (dotted-pair) lists are not supported in decode, so you can't
pass an attribute anyway, because attributes are stored in dotted-pair
lists. So, for now, yes, you have to assign the attributes in a separate
step - I'll need to fix that ... I'll keep you posted.
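On the R side, the two-step workaround amounts to sending the data as a plain vector and attaching the dim attribute afterwards. Sketched locally (the actual Rserve client calls are omitted):

```r
m <- as.numeric(1:6)   # data sent as a flat vector in step one
dim(m) <- c(2, 3)      # dim attribute assigned in a second step
is.matrix(m)           # TRUE: now a 2x3 numeric matrix
```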

Thanks for spotting this!

Cheers,
Simon



Re: [Rd] Q: R 2.2.1: Memory Management Issues?

2006-01-05 Thread Simon Urbanek
Karen,

On Jan 5, 2006, at 5:18 PM, <[EMAIL PROTECTED]>  
<[EMAIL PROTECTED]> wrote:

> I am trying to run a R script which makes use of the MCLUST package.
> The script can successfully read in the approximately 17000 data  
> points ok, but then throws an error:
> 
> Error:  cannot allocate vector of size 1115070Kb

This is 1.1GB of RAM to allocate for a single vector(!). As you
stated yourself, the total upper limit is 2GB, so you could not even fit
two of those in memory anyway - there is not much you can do with it even
if it is allocated.

> summary(EMclust(y),y)

I suspect that memory is your least problem. Did you even try to run  
EMclust on a small subsample? I suspect that if you did, you would  
figure out that what you are trying to do is not likely to terminate  
within days...

> (1) I had initially thought that Windows 2000 should be able to  
> allocate up to about 2 GB memory.  So, why is there a problem to  
> allocate a little over 1GB on a defragmented disk with over 15 GB  
> free?  (Is this a pagefile size issue?)

Because that is not the only 1GB vector that is allocated. Your
"15GB/defragmented" figure is irrelevant - if anything, look at how much
virtual memory is set up in your system's preferences.

> (2) Do you think the origin of the problem is
> (a) the R environment, or
> (b) the function in the MCLUST package using an in-memory  
> instead of an on-disk approach?

Well, a toy example of 17000x2 needs 2.3GB and it's unlikely to
terminate anytime soon, so I'd rather call it shooting with the wrong
gun. Maybe you should consider a different approach to your problem -
possibly ask on the BioConductor list, because people there have more
experience with large data, and this is not really a technical
question about R, but rather one about how to apply statistical methods.
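The 2.3GB figure is consistent with the pairwise structure of model-based clustering: an n-by-n double matrix for n = 17000 already approaches the 32-bit address-space limit. This is a hedged back-of-the-envelope check, not the exact allocation mclust performs:

```r
n <- 17000
n^2 * 8 / 2^30   # bytes in an n-by-n double matrix, in GiB: ~2.15
```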

> (3)
> (a) If the problem originates in the R environment, would  
> switching to the Linux version of R solve the problem?

Any reasonable unix will do - technically (preferably a 64-bit
version, but in your case even 32-bit would do). Again, I don't
think memory is your only problem here, though.

Cheers,
Simon



Re: [Rd] Q: R 2.2.1: Memory Management Issues?

2006-01-05 Thread Simon Urbanek
On Jan 5, 2006, at 7:33 PM, <[EMAIL PROTECTED]>  
<[EMAIL PROTECTED]> wrote:

> The empirically derived limit on my machine (under R 1.9.1) was  
> approximately 7500 data points.
> I have been able to successfully run the script that uses package  
> MCLUST on several hundred smaller data sets.
>
> I even had written a work-around for the case of greater than 9600  
> data points.  My work-around first orders the
> points by their value then takes a sample (e.g. every other point  
> or 1 point every n points) in order to bring the number under  
> 9600.  No problems with the computations were observed, but you are  
> correct that a deconvolution on that larger dataset of 9600 takes  
> almost 30 minutes.  However, for our purposes, we do not have many  
> datasets over 9600 so the time is not a major constraint.
>
> Unfortunately, my management does not like using a work-around and  
> really wants to operate on the larger data sets.
> I was told to find a way to make it operate on the larger data sets  
> or avoid using R and find another solution.

Well, sure, if your only concern is the memory, then moving to unix
will give you several hundred more data points you can use. I would
recommend a 64-bit unix, preferably, because then there is
practically no software limit on the size of virtual memory.
Nevertheless, there is still a limit of ca. 4GB for a single vector,
so that should give you around 32500 rows that mclust can handle as-is
(I don't want to see the runtime, though ;)). For anything else
you'll really have to think about another approach..
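The ~32500-row figure is consistent with a symmetric pairwise matrix stored as roughly n^2/2 doubles inside one 4GiB vector. The storage assumption below is mine, made only to show where a number of that magnitude plausibly comes from:

```r
bytes <- 4 * 2^30        # ~4 GiB ceiling for a single vector
sqrt(2 * bytes / 8)      # rows if storage is ~n^2/2 doubles: 32768
```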

Cheers,
Simon



Re: [Rd] minor build problem

2006-01-07 Thread Simon Urbanek
Cyrus,

thanks for the report.

On Jan 7, 2006, at 1:41 PM, Cyrus Harmon wrote:

> I'm trying to build from the latest SVN sources on Mac OS X 10.4.3
> and I seem to be having a problem making the documentation.
>
> When I do make install, i get the following:
>
> ([EMAIL PROTECTED]):~/src/R/r-devel/build-f95$ make install
> make[1]: Nothing to be done for `front-matter'.
> SVN-REVISION is unchanged
> make[1]: Nothing to be done for `install'.
> make[1]: Nothing to be done for `install'.
> installing doc ...
> /sw/bin/install: cannot stat `R.1': No such file or directory
> make[1]: *** [install-man] Error 1
> make: *** [install] Error 1

Yes, if you run "make; make; make install" it goes away. It's a known
problem caused by a recent change in the R start script, and the fix
should be committed shortly ...

Cheers,
Simon




Re: [Rd] prod(numeric(0)) surprise

2006-01-09 Thread Simon Urbanek
On Jan 9, 2006, at 3:35 PM, Kjetil Halvorsen wrote:

> But this thread seems to have pointed to some inconsistencies:
>
>> cumprod( numeric(0) )
> numeric(0)
>> cumsum( numeric(0) )
> numeric(0)
>
> shouldn't this give the same as prod() and sum() in this case?

No - as Thomas explained very nicely, they are different kinds of
functions. The cumXXX functions map length-n vectors to length-n
vectors, whereas prod/sum map length-n vectors to length-1 results.
So R is in fact very consistent, and Thomas described the rules exactly.
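The distinction is easy to check at the prompt: the n-to-1 reductions return their identity element on empty input, while the n-to-n scans preserve length:

```r
prod(numeric(0))     # 1: the multiplicative identity
sum(numeric(0))      # 0: the additive identity
cumprod(numeric(0))  # numeric(0): length is preserved
cumsum(numeric(0))   # numeric(0)
```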

Cheers,
Simon



Re: [Rd] Interfacing with user in R

2006-01-10 Thread Simon Urbanek
On Jan 10, 2006, at 4:34 PM, <[EMAIL PROTECTED]>  
<[EMAIL PROTECTED]> wrote:

> I am new in R programming (my question may sound trivial to you):  
> is there any way to ask the user to enter a string within an R  
> process, say a filename, make R to recognise it and open the given  
> file?

Sure, for reading you can use readLines. I have no idea what you mean
by "recognise", but let's say you wanted to use read.table; it
would be
read.table(readLines(n=1))

Of course, in R there is an even more convenient method for that ;)
read.table(file.choose())

Cheers,
Simon



Re: [Rd] Provide both shlib and standard versions of R?

2006-01-15 Thread Simon Urbanek

On Jan 15, 2006, at 11:21 PM, Bo Peng wrote:

> To operate R from python via a Python package rpy, R has to be  
> compiled with --enable-R-shlib.  This is troublesome since none of  
> the binary distributions (except for windows?) is built with this  
> option


That is not true; almost all binaries come with R as a shared library -
it is in fact the default on Mac OS X and Windows. Most Linux
distributions provide a shared-library binary as well.

>  so rpy users have to build R from source. This can be quite a  
> challenge, especially on platforms like macOSX.
>

I guess you didn't even try it, because on OS X it *is* the default!

Simon



Re: [Rd] saving PDF with multiple diagrams results in crash (PR#8569)

2006-02-06 Thread Simon Urbanek

On Feb 6, 2006, at 1:11 PM, [EMAIL PROTECTED]  
wrote:

> Full_Name: Alexander Holzbach
> Version: 2.2.0
> OS: Mac OS X 10.3.9
> Submission from: (NULL) (129.13.186.1)
>
>
> when i build an area with multiple diagrams (par(mfrow=c(1,3)) )  
> and try to save
> this to a pdf via "save as.." or by setting pdf("filename") R crashes
> reproducibly.

Firstly, please update at least to R 2.2.1 before reporting bugs
(preferably to R-patched). Secondly, can you please send us exactly
the code you use (and/or exactly the steps you take)? I'm not able to
reproduce it in R 2.2.1 from your brief description.

Thanks,
Simon



Re: [Rd] invalid graphics state using dev.print

2006-02-06 Thread Simon Urbanek
Paul,

On Feb 6, 2006, at 5:24 PM, Paul Roebuck wrote:

> Tried on R-Sig-Mac with no responses, but I need some kind of answer.
> [...]
> Does the following work on your system?

Interesting - no, it doesn't work here either. For png and pdf I use Quartz +
quartz.save (it produces much nicer results), so I didn't really
notice, but you're right. At first I thought those graphics-state issues
were specific to the Quartz device, but you have proven that they're not.
It's in fact not even Mac-specific - I have just reproduced it on a
Linux box - which is why I'm moving this to R-devel.

Here is a small reproducible example:
x11()
plot(rnorm(10))
dev.print(png)

I'll try to have a look at it later today, but I can't promise anything.
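In the meantime, a commonly suggested workaround is to replay the current plot with dev.copy() instead of dev.print(). A sketch (using pdf targets throughout so it also runs headless - the report above used x11/png):

```r
pdf(tempfile(fileext = ".pdf"))   # stand-in for the interactive device
plot(rnorm(10))

out <- tempfile(fileext = ".pdf")
dev.copy(pdf, file = out)         # replay the plot on a fresh device
dev.off()                         # close the copy target ...
dev.off()                         # ... and the original device
file.exists(out)                  # TRUE
```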

Cheers,
Simon



Re: [Rd] bugzilla issues

2018-01-26 Thread Simon Urbanek
Thanks, fixed.
Simon


> On Jan 26, 2018, at 3:40 AM, Uwe Ligges  
> wrote:
> 
> Simon,
> 
> can you take a look, please?
> 
> Best,
> Uwe
> 
> 
> 
> On 26.01.2018 01:41, Ben Bolker wrote:
>>  tl;dr is the R bug tracker down or am I being an idiot? Help please ...
>> 
>>  I decided I would follow up on
>> https://stat.ethz.ch/pipermail/r-devel/2018-January/075410.html
>> (reporting/suggesting a patch for an issue in stats::mantelhaen.test()
>> with large data sets)
>>   Reading the instructions at https://www.r-project.org/bugs.html
>> suggests that if I'm sure I have found something worthy of posting to
>> the R bug tracker, I should go to https://bugs.r-project.org/bugzilla3
>> This results in
>> 
>> Software error:
>> Can't locate Email/Sender/Simple.pm in @INC (you may need to install the
>> Email::Sender::Simple module) (@INC contains: .
>> lib/x86_64-linux-gnu-thread-multi lib /etc/perl
>> /usr/local/lib/x86_64-linux-gnu/perl/5.24.1 /usr/local/share/perl/5.24.1
>> /usr/lib/x86_64-linux-gnu/perl5/5.24 /usr/share/perl5
>> /usr/lib/x86_64-linux-gnu/perl/5.24 /usr/share/perl/5.24
>> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at
>> Bugzilla/Mailer.pm line 26.
>> BEGIN failed--compilation aborted at Bugzilla/Mailer.pm line 26.
>> Compilation failed in require at Bugzilla/Auth.pm line 22.
>> BEGIN failed--compilation aborted at Bugzilla/Auth.pm line 22.
>> Compilation failed in require at Bugzilla.pm line 23.
>> BEGIN failed--compilation aborted at Bugzilla.pm line 23.
>> Compilation failed in require at /var/lib/bugzilla4/index.cgi line 15.
>> BEGIN failed--compilation aborted at /var/lib/bugzilla4/index.cgi line 15.
>> For help, please send mail to the webmaster (ad...@rforge.net), giving
>> this error message and the time and date of the error.
>> ===
>> e-mailing ad...@rforge.net bounces with
>> This is the mail system at host hagal.urbanek.info.
>> I'm sorry to have to inform you that your message could not
>> be delivered to one or more recipients. It's attached below.
>> For further assistance, please send mail to postmaster.
>> If you do so, please include this problem report. You can
>> delete your own text from the attached returned message.
>>The mail system
>> : User unknown in virtual alias table
>> =
> 



Re: [Rd] CRAN indices out of whack (for at least macOS)

2018-01-31 Thread Simon Urbanek
Dirk,

yes, thanks - the edge server that serves the Mac binaries to CRAN had run out
of disk space (due to the size of CRAN itself), so the sync was incomplete.
It is fixed now - you can verify by using the macOS master server as mirror:
https://r.research.att.com/ - and it will propagate through the other mirrors as
usual.

Thanks,
Simon




> On Jan 31, 2018, at 1:34 PM, Dirk Eddelbuettel  wrote:
> 
> 
> Bumping this as we now have two more issue tickets filed and a fresh SO
> question.
> 
> Is anybody looking at this? Simon?
> 
> Dirk
> 
> On 30 January 2018 at 15:19, Dirk Eddelbuettel wrote:
> | 
> | I have received three distinct (non-)bug reports where someone claimed a
> | recent package of mine was broken ... simply because the macOS binary was 
> not
> | there.
> | 
> | Is there something wrong with the cronjob providing the indices? Why is it
> | pointing people to binaries that do not exist?
> | 
> | Concretely, file
> | 
> |   https://cloud.r-project.org/bin/macosx/el-capitan/contrib/3.4/PACKAGES
> | 
> | contains
> | 
> |   Package: digest
> |   Version: 0.6.15
> |   Title: Create Compact Hash Digests of R Objects
> |   Depends: R (>= 2.4.1)
> |   Suggests: knitr, rmarkdown
> |   Built: R 3.4.3; x86_64-apple-darwin15.6.0; 2018-01-29 05:21:06 UTC; unix
> |   Archs: digest.so.dSYM
> | 
> | yet the _same directory_ only has:
> | 
> |   digest_0.6.14.tgz 15-Jan-2018 21:36   157K
> | 
> | I presume this is a temporary accident.
> | 
> | We are all spoiled by you all providing such a wonderfully robust and
> | well-oiled service---so again big THANKS for that--but today something is 
> out
> | of order.
> | 
> | Dirk
> | 
> | -- 
> | http://dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | 
> 
> -- 
> http://dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 



Re: [Rd] readLines function with R >= 3.5.0

2018-05-29 Thread Simon Urbanek
The MySQL DB on the server died - not sure why. Restarted and it should be ok.


> On May 29, 2018, at 9:17 AM, Martin Maechler  
> wrote:
> 
>> Ralf Stubner 
>>on Tue, 29 May 2018 11:21:28 +0200 writes:
> 
>> On 28.05.2018 16:38, Martin Maechler wrote:
>>> Then, I now do think this needs to be dealt with as a bug
>>> (but I'm not delving into fixing it!)
> 
>> Ok. Can somebody with write privileges in bugzilla add the
>> bug report? I can also do this myself, if somebody with
>> the required privileges can create a user for me.
> 
>> Greetings Ralf
> 
> << PS: I get an error message from
> https://bugs.r-project.org/bugzilla3/ .
> 
> Yes, it is currently "down", i.e., in a wrong state.
> I had alerted the owner of the server a few hours ago, but as
> that is in California, it may need another few hours
> before one of the  R Core members can add an account for you on
> R bugzilla.
> 
> Best, Martin Maechler
> 
>> -- 
>> Ralf Stubner Senior Software Engineer / Trainer
> 
>> daqana GmbH Dortustraße 48 14467 Potsdam
> 
>> T: +49 331 23 61 93 11 F: +49 331 23 61 93 90 M: +49 162
>> 20 91 196 Mail: ralf.stub...@daqana.com
> 
>> Sitz: Potsdam Register: AG Potsdam HRB 27966 P Ust.-IdNr.:
>> DE300072622 Geschäftsführer: Prof. Dr. Dr. Karl-Kuno Kunze
> 



Re: [Rd] Bugzilla down?

2019-02-25 Thread Simon Urbanek
I do. The server ran out of disk earlier today and it seems that it killed 
bugzilla somehow. I'll have a look.
Thanks,
Simon


> On Feb 25, 2019, at 2:07 PM, Gabriel Becker  wrote:
> 
> Hi Martin (who I believe manages bz?) et al.,
> 
> I'm getting 503 Service Unavailable from bugzilla currently (
> https://bugs.r-project.org/bugzilla/ and direct links to specific bugs,
> both). Is this a known issue?
> 
> Thanks,
> ~G
> 
> 
> 



Re: [Rd] Bugzilla down?

2019-02-25 Thread Simon Urbanek
Ok, fixed.
Simon



> On Feb 25, 2019, at 2:45 PM, Simon Urbanek  
> wrote:
> 
> I do. The server ran out of disk earlier today and it seems that it killed 
> bugzilla somehow. I'll have a look.
> Thanks,
> Simon
> 
> 
>> On Feb 25, 2019, at 2:07 PM, Gabriel Becker  wrote:
>> 
>> Hi Martin (who I believe manages bz?) et al.,
>> 
>> I'm getting 503 Service Unavailable from bugzilla currently (
>> https://bugs.r-project.org/bugzilla/ and direct links to specific bugs,
>> both). Is this a known issue?
>> 
>> Thanks,
>> ~G
>> 
>> 
>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Exit status of Rscript

2019-02-28 Thread Simon Urbanek
> system2("Rscript", c("-e", shQuote("stop('foo')"))) == 0
Error: foo
Execution halted
[1] FALSE
> sessionInfo()
R version 3.5.2 (2018-12-20)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: OS X El Capitan 10.11.6

> system2("Rscript", c("-e", shQuote("stop('foo')"))) == 0
Error: foo
Execution halted
[1] FALSE
> sessionInfo()
R Under development (unstable) (2019-02-27 r76167)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: OS X El Capitan 10.11.6

You may also want to check that you run what you think you do in the shell:

$ Rscript -e 'print(R.version.string); stop("foo")'; echo $?
[1] "R Under development (unstable) (2019-02-27 r76167)"
Error: foo
Execution halted
1

$ Rscript -e 'print(R.version.string); stop("foo")'; echo $?
[1] "R version 3.5.2 (2018-12-20)"
Error: foo
Execution halted
1

$ Rscript -e 'print(R.version.string); stop("foo")'; echo $?
[1] "R version 3.4.4 Patched (2018-03-19 r75535)"
Error: foo
Execution halted
1


> On Feb 28, 2019, at 7:23 AM, Michel Lang  wrote:
> 
> Current R release (3.5.2) and devel return a 0 exit status on error,
> while prior versions returned a non-zero exit status. On Linux and
> MacOs, the following line returns TRUE for R-3.5.2 and R-devel, and
> FALSE for R-3.5.1 and R-3.5.0:
> 
> system2("Rscript", c("-e", shQuote("stop('foo')"))) == 0
> 
> I didn't find this in the NEWS, so I believe this is a bug.
> 
> Best,
> Michel
> 
>



Re: [Rd] Exit status of Rscript when setting options(error=utils::recover)

2019-03-18 Thread Simon Urbanek
As Tomas pointed out, it may be helpful to read the R documentation. The error
option expects a function, so I suppose you intended something like
options(error=function() {recover(); q(status=1)})
which, in a non-interactive session, corresponds to calling dump.frames()
and then quitting with status 1.
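For the CI use-case described below (a full backtrace on the console and a non-zero exit, with no dump file), one possibility is a handler built on traceback(). This is a hedged sketch - the exact output format is whatever traceback() produces, nothing more:

```r
# An error handler for non-interactive scripts: the quoted code is what
# would go into .Rprofile or the top of the script being run.
handler <- "options(error = function() { traceback(2); q(status = 1, save = 'no') })"

status <- system2("Rscript",
                  c("-e", shQuote(paste0(handler,
                                         "; f <- function() stop('boom'); f()"))),
                  stdout = FALSE, stderr = FALSE)
status   # 1: the handler printed a call stack and exited non-zero
```

traceback(2) skips the handler's own frames, so the printed stack starts at the failing call.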

Cheers,
Simon


> On Mar 17, 2019, at 23:44, comic fans  wrote:
> 
> Thanks for explanation,  so recover in non-interactive session exit
> with 0 is expected behavior .
> dump.frames said that it always write to file (workspace , or specified file).
> I have a R script run as a auto build stage, so I want to print detail
> backtrace to console
> (with source file, line number)  for quickly debug, without saving any dump.
> I tried
> 
> options(error= quote({utils::recover;q(status=1)}))
> 
> it do exit with 1 when script has error, but it only shows a stripped
> call trace like
> 
> Calls: a ... a -> a -> a -> a -> a -> a -> a -> a -> a -> a -> apply
> 
> instead of
> ...
> 99: rec.R#5: a(v, depth - 1)
> 100: rec.R#5: a(v, depth - 1)
> 101: rec.R#5: a(v, depth - 1)
> 102: rec.R#5: a(v, depth - 1)
> 103: rec.R#5: a(v, depth - 1)
> 
> How can I resolve this ? Thanks for advise
> 
> 
> On Fri, Mar 15, 2019 at 10:10 PM Tomas Kalibera
>  wrote:
>> 
>> 
>> Please refer to the documentation (?stop, ?recover, ?dump.frames). In
>> non-interactive use, recover() works as dump.frames(). dump.frames() is
>> documented not to quit R, and the examples show how to quit the R
>> session with a given status automatically after dump.frames(). So in
>> line with the documentation, R continues after the error, it reaches the
>> end of the input, and returns 0.
>> 
>> When you run the example with the NULL default error handler (not
>> setting the error option), the exit status is 1 as documented in ?stop.
>> 
>> To avoid surprise wrt to the exit status or where execution continues,
>> it is best not to set default error handlers in non-interactive use (or
>> set them so that they exit the session with a given exit status).
>> 
>> Tomas
>> 
>> On 3/10/19 4:15 AM, comic fans wrote:
>>> Hello, I've noticed that Rscript didn't exit with error code if I set
>>> options error = utils::recover in .Rprofile . for example
>>> 
>>> Rscript -e "asdf"
>>> 
>>> Error: object 'asdf' not found
>>> No suitable frames for recover()
>>> 
>>> echo $?
>>> 0
>>> 
>>> if didn't set options in .Rprofile, Rscript exit with error code 1, is
>>> this expected behavior ?
>>> 
>> 
>> 
> 
> 



Re: [Rd] Use of C++ in Packages

2019-03-29 Thread Simon Urbanek
Jim,

I think the main point of Tomas' post was to alert R users to the fact that
there are very serious issues that you have to understand when interfacing R
from C++. Using C++ code from R is fine; in many cases you only want to access
R data, use some library, or compute in C++ and return results. Such use-cases
are completely fine in C++, as they don't trigger the issues mentioned, and
it should be made clear that this was not what Tomas' blog was about.

I agree with Tomas that it is safer to advise against using C++ to call the R
API, since C++ may give a false impression that you don't need to know what
you're doing. Note that it is possible to avoid longjmps by using
R_ExecWithCleanup(), which can catch any longjmps from the called function. So
if you know what you're doing, you can make things work. I think the issue here
is not necessarily a lack of tools; it is a lack of knowledge - which is why I
think Tomas' post is so important.

Cheers,
Simon


> On Mar 29, 2019, at 11:19 AM, Jim Hester  wrote:
> 
> First, thank you to Tomas for writing his recent post[0] on the R
> developer blog. It raised important issues in interfacing R's C API
> and C++ code.
> 
> However I do _not_ think the conclusion reached in the post is helpful
>> don’t use C++ to interface with R
> 
> There are now more than 1,600 packages on CRAN using C++, the time is
> long past when that type of warning is going to be useful to the R
> community.
> 
> These same issues will also occur with any newer language (such as
> Rust or Julia[1]) which uses RAII to manage resources and tries to
> interface with R. It doesn't seem a productive way forward for R to
> say it can't interface with these languages without first doing
> expensive copies into an intermediate heap.
> 
> The advice to avoid C++ is also antithetical to John Chambers vision
> of first S and R as a interface language (from Extending R [2])
> 
>> The *interface* principle has always been central to R and to S
> before. An interface to subroutines was _the_ way to extend the first
> version of S. Subroutine interfaces have continued to be central to R.
> 
> The book also has extensive sections on both C++ (via Rcpp) and Julia,
> so clearly John thinks these are legitimate ways to extend R.
> 
> So if 'don't use C++' is not realistic and the current R API does not
> allow safe use of C++ exceptions what are the alternatives?
> 
> One thing we could do is look how this is handled in other languages
> written in C which also use longjmp for errors.
> 
> Lua is one example, they provide an alternative interface;
> lua_pcall[3] and lua_cpcall[4] which wrap a normal lua call and return
> an error code rather long jumping. These interfaces can then be safely
> wrapped by RAII - exception based languages.
> 
> This alternative error code interface is not just useful for C++, but
> also for resource cleanup in C, it is currently non-trivial to handle
> cleanup in all the possible cases a longjmp can occur (interrupts,
> warnings, custom conditions, timeouts any allocation etc.) even with R
> finalizers.
> 
> It is past time for R to consider a non-jumpy C interface, so it can
> continue to be used as an effective interface to programming routines
> in the years to come.
> 
> [0]: 
> https://developer.r-project.org/Blog/public/2019/03/28/use-of-c---in-packages/
> [1]: https://github.com/JuliaLang/julia/issues/28606
> [2]: https://doi.org/10.1201/9781315381305
> [3]: http://www.lua.org/manual/5.1/manual.html#lua_pcall
> [4]: http://www.lua.org/manual/5.1/manual.html#lua_cpcall
> 
> 



Re: [Rd] Use of C++ in Packages

2019-03-29 Thread Simon Urbanek
Kevin,


> On Mar 29, 2019, at 17:01, Kevin Ushey  wrote:
> 
> I think it's also worth saying that some of these issues affect C code
> as well; e.g. this is not safe:
> 
>FILE* f = fopen(...);
>Rf_eval(...);
>fclose(f);
> 

I fully agree, but developers using C are well aware of the necessity of
handling the lifespan of objects explicitly, so at least there are no surprises.


> whereas the C++ equivalent would likely handle closing of the file in the 
> destructor. In other words, I think many users just may not be cognizant of 
> the fact that most R APIs can longjmp, and what that implies for cleanup of 
> allocated resources. R_alloc() may help solve the issue specifically for 
> memory allocations, but for any library interface that has a 'open' and 
> 'close' step, the same sort of issue will arise.
> 

Well, I hope that anyone writing native code in a package is well aware of that
and will use an external pointer with a finalizer to clean up native objects in
any 3rd-party library that are created during the call.
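At the R level, the same guarantee - cleanup that survives a non-local exit - is what on.exit() provides. A minimal sketch of the pattern (the file-reading function is a made-up example, not code from the thread):

```r
read_first_line <- function(path) {
  con <- file(path, "r")
  on.exit(close(con))     # runs even if an error unwinds this call
  readLines(con, n = 1)
}

tmp <- tempfile()
writeLines(c("alpha", "beta"), tmp)
read_first_line(tmp)      # "alpha", and the connection is closed
```

This is the C-side discipline (register cleanup before calling anything that can jump) expressed with R's own tools.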


> What I believe we should do, and what Rcpp has made steps towards, is make it 
> possible to interact with some subset of the R API safely from C++ contexts. 
> This has always been possible with e.g. R_ToplevelExec() and 
> R_ExecWithCleanup(), and now things are even better with R_UnwindProtect(). 
> In theory, as a prototype, an R package could provide a 'safe' C++ interface 
> to the R API using R_UnwindProtect() and friends as appropriate, and client 
> packages could import and link to that package to gain access to the 
> interface. Code generators (as Rcpp Attributes does) can handle some of the 
> pain in these interfaces, so that users are mostly insulated from the nitty 
> gritty details.
> 

I agree that we should strive to provide tools that make it safer, but note
that it still requires the participation of the users - they have to use such
facilities or else they hit the same problem. So we can only fix this for the
future, but let's start now.


> I agree that the content of Tomas's post is very helpful, especially since I 
> expect many R programmers who dip their toes into the C++ world are not aware 
> of the caveats of talking to R from C++. However, I don't think it's helpful 
> to recommend "don't use C++"; rather, I believe the question should be, "what 
> can we do to make it possible to easily and safely interact with R from 
> C++?". Because, as I understand it, all of the problems raised are solvable: 
> either through a well-defined C++ interface, or through better education.
> 

I think the recommendation would be different if such tools existed, but they 
don't. It was based on the current reality which is not so rosy.  Apparently 
the post had its effect of mobilizing C++ proponents to do something about it, 
which is great, because if this leads to some solution, the recommendation in 
the future may change to "use C++ using tools XYZ".


> I'll add my own opinion: writing correct C code is an incredibly difficult 
> task. C++, while obviously not perfect, makes things substantially easier 
> with tools like RAII, the STL, smart pointers, and so on. And I strongly 
> believe that C++ (with Rcpp) is still a better choice than C for new users 
> who want to interface with R from compiled code.
> 

My take is that Rcpp makes the interface *look* easier, but you still have to 
understand more about the R API than you think. Hence it is much easier to write 
buggy code. Personally, that's why I don't like it (apart from the code bloat), 
because things are hidden that will get you into trouble, whereas using the C 
API is at least very clear - you have to understand what it's doing when you 
use it. That said, I'm obviously biased since I know a lot about R internals ;) 
so this doesn't necessarily generalize.


> tl;dr: I (and I think most others) just wish the summary had a more positive 
> outlook for the future of C++ with R.
> 

Well, unless someone actually takes the initiative there is no reason to 
believe in a bright future of C++. As we have seen with the lack of adoption of 
CXXR (which I thought was an incredible achievement), not enough people seem to 
really care about C++. If that is not true, then let's come out of hiding, get 
together and address it (it seems that this thread is a good start).

Cheers,
Simon



> Best,
> Kevin
> 
> On Fri, Mar 29, 2019 at 10:16 AM Simon Urbanek
>  wrote:
>> 
>> Jim,
>> 
>> I think the main point of Tomas' post was to alert R users to the fact that 
>> there are very serious issues that you have to understand when interfacing R 
>> from C++. Using C++ code from R is fine, in many cases you

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2019-04-12 Thread Simon Urbanek
I fully agree with Kevin. Front-ends can always use pthread_atfork() to close 
descriptors and suspend threads in children.

Anyone who thinks you can use PSOCK clusters has obviously not used mclapply() 
in real applications - trying to save the workspace and restore it in 20 new 
processes is not only incredibly wasteful (no shared memory whatsoever) but 
slow. If you want to use PSOCK just do it (I never do - you might as well just 
use a full cluster instead), multicore is for the cases where you want to 
parallelize something quickly and it works really well for that purpose.

I'd like to separate the issues here - the fact that RStudio has issues is 
really not R's fault - there is no technical reason why it shouldn't be able to 
handle it correctly. That is not to say that there are no cases where fork() is
dangerous, but in most cases it's not and the benefits outweigh
the risk.

That said, I do acknowledge the idea of having an ability to prevent forking if 
desired - I think that's a good idea, in particular if there is a standard that 
packages can also adhere to it (yes, there are also packages that use fork() 
explicitly). I just think that the motivation is wrong (i.e., I don't think it 
would be wise for RStudio to prevent parallelization by default).

Also I'd like to point out that the main problem came about when packages 
started using parallel implicitly - the good citizens out there expose it as a 
parameter to the user, but not all packages do it which means you can hit 
forked code without knowing it. If you use mclapply() in user code, you 
typically know what you're doing, but if a package author does it for you, it's 
a different story.

Cheers,
Simon


> On Apr 12, 2019, at 21:50, Kevin Ushey  wrote:
> 
> I think it's worth saying that mclapply() works as documented: it
> relies on forking, and so doesn't work well in environments where it's
> unsafe to fork. This is spelled out explicitly in the documentation of
> ?mclapply:
> 
> It is strongly discouraged to use these functions in GUI or embedded
> environments, because it leads to several processes sharing the same
> GUI which will likely cause chaos (and possibly crashes). Child
> processes should never use on-screen graphics devices.
> 
> I believe the expectation is that users who need more control over the
> kind of cluster that's used for parallel computations would instead
> create the cluster themselves with e.g. `makeCluster()` and then use
> `clusterApply()` / `parLapply()` or other APIs as appropriate.
> 
> In environments where forking works, `mclapply()` is nice because you
> don't need to think -- the process is forked, and anything available
> in your main session is automatically available in the child
> processes. This is a nice convenience for when you know it's safe to
> fork R (and know what you're doing is safe to do within a forked
> process). When it's not safe, it's better to prefer the other APIs
> available for computation on a cluster.
> 
> Forking can be unsafe and dangerous, but it's also convenient and
> sometimes that convenience can outweigh the other concerns.
> 
> Finally, I want to add: the onus should be on the front-end to work
> well with R, and not the other way around. I don't think it's fair to
> impose extra work / an extra maintenance burden on the R Core team for
> something that's already clearly documented ...
> 
> Best,
> Kevin
> 
> 
> On Fri, Apr 12, 2019 at 6:04 PM Travers Ching  wrote:
>> 
>> Hi Inaki,
>> 
>>> "Performant"... in terms of what. If the cost of copying the data
>>> predominates over the computation time, maybe you didn't need
>>> parallelization in the first place.
>> 
>> Performant in terms of speed.  There's no copying in that example
>> using `mclapply` and so it is significantly faster than other
>> alternatives.
>> 
>> It is a very simple and contrived example, but there are lots of
>> applications that depend on processing of large data and benefit from
>> multithreading.  For example, if I read in large sequencing data with
>> `Rsamtools` and want to check sequences for a set of motifs.
>> 
>>> I don't see why mclapply could not be rewritten using PSOCK clusters.
>> 
>> Because it would be much slower.
>> 
>>> To implement copy-on-write, Linux overcommits virtual memory, and this
>>> is what causes scripts to break unexpectedly: everything works fine,
>>> until you change a small unimportant bit and... boom, out of memory.
>>> And in general, running forks in any GUI would cause things everywhere
>>> to break.
>> 
>>> I'm not sure how did you setup that, but it does complete. Or do you
>>> mean that you ran out of memory? Then try replacing "x" with, e.g.,
>>> "x+1" in your mclapply example and see what happens (hint: save your
>>> work first).
>> 
>> Yes, I meant that it ran out of memory on my desktop.  I understand
>> the limits, and it is not perfect because of the GUI issue you
>> mention, but I don't see a better alternative in terms of speed.
>> 
>> Regards,

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2019-04-13 Thread Simon Urbanek
Sure, but that is a completely bogus argument because in that case it would fail 
even more spectacularly with any other method like PSOCK because you would 
*have to* allocate n times as much memory so unlike mclapply it is guaranteed 
to fail. With mclapply it is simply much more efficient as it will share memory 
as long as possible. It is rather obvious that any new objects you create can 
no longer be shared as they now exist separately in each process.

Cheers,
Simon



> On Apr 13, 2019, at 06:05, Iñaki Ucar  wrote:
> 
> On Sat, 13 Apr 2019 at 03:51, Kevin Ushey  wrote:
>> 
>> I think it's worth saying that mclapply() works as documented
> 
> Mostly, yes. But it says nothing about fork's copy-on-write and memory
> overcommitment, and that this means that it may work nicely or fail
> spectacularly depending on whether, e.g., you operate on a long
> vector.
> 
> -- 
> Iñaki Úcar
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2019-04-13 Thread Simon Urbanek



> On Apr 13, 2019, at 16:56, Iñaki Ucar  wrote:
> 
> On Sat, 13 Apr 2019 at 18:41, Simon Urbanek  
> wrote:
>> 
>> Sure, but that is a completely bogus argument because in that case it would 
>> fail even more spectacularly with any other method like PSOCK because you 
>> would *have to* allocate n times as much memory so unlike mclapply it is 
>> guaranteed to fail. With mclapply it is simply much more efficient as it 
>> will share memory as long as possible. It is rather obvious that any new 
>> objects you create can no longer be shared as they now exist separately in 
>> each process.
> 
> The point was that PSOCK fails and succeeds *consistently*,
> independently of what you do with the input in the function provided.
> I think that's a good property.
> 

So does parallel. It is consistent. If you do things that use too much memory 
you will consistently fail. That's a pretty universal rule, there is nothing 
probabilistic about it. It makes no difference if it's PSOCK, multicore, or 
anything else.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Background R session on Unix and SIGINT

2019-04-30 Thread Simon Urbanek
Interrupts are not synchronous in R - the signal only flags the request for 
interruption. Nothing actually happens until R_CheckUserInterrupt() is called 
at an interruptible point. In your case your code is apparently not calling 
R_CheckUserInterrupt() until later as a side-effect of the next evaluation.

Cheers,
Simon


> On Apr 30, 2019, at 3:44 PM, Gábor Csárdi  wrote:
> 
> Hi All,
> 
> I realize that this is not a really nice reprex, but anyone has an
> idea why a background R session would "remember" an interrupt (SIGINT)
> on Unix?
> 
> rs <- callr::r_session$new()
> rs$interrupt() # just sends a SIGINT
> #> [1] TRUE
> 
> rs$run(function() 1+1)
> #> Error: interrupt
> 
> rs$run(function() 1+1)
> #> [1] 2
> 
> It seems that the main loop somehow stores the SIGINT it receives
> while it is waiting on stdin, and then it triggers it when some input
> comes in. Maybe. Just speculating.
> 
> Thanks,
> Gabor
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Background R session on Unix and SIGINT

2019-04-30 Thread Simon Urbanek
Can you give an example without callr? The key is how the process is started and 
what it is doing, which is entirely opaque in callr.

Windows doesn't have signals, so the process there is entirely different. Most 
of the WIN32 processing is event-based.

Cheers,
Simon


> On Apr 30, 2019, at 4:17 PM, Gábor Csárdi  wrote:
> 
> Yeah, I get that they are async.
> 
> What happens is that the background process is not doing anything when
> the process gets a SIGINT. I.e. the background process is just
> listening on its standard input.
> 
> AFAICT for an interactive process such a SIGINT is just swallowed,
> with a newline outputted to the terminal.
> 
> But apparently, for this background process, it is not swallowed, and
> it is triggered later. FWIW it does not happen on Windows, not very
> surprisingly.
> 
> Gabor
> 
> On Tue, Apr 30, 2019 at 9:13 PM Simon Urbanek
>  wrote:
>> 
>> Interrupts are not synchronous in R - the signal only flags the request for 
>> interruption. Nothing actually happens until R_CheckUserInterrupt() is 
>> called at an interruptible point. In your case your code is apparently not 
>> calling R_CheckUserInterrupt() until later as a side-effect of the next 
>> evaluation.
>> 
>> Cheers,
>> Simon
>> 
>> 
>>> On Apr 30, 2019, at 3:44 PM, Gábor Csárdi  wrote:
>>> 
>>> Hi All,
>>> 
>>> I realize that this is not a really nice reprex, but anyone has an
>>> idea why a background R session would "remember" an interrupt (SIGINT)
>>> on Unix?
>>> 
>>> rs <- callr::r_session$new()
>>> rs$interrupt() # just sends a SIGINT
>>> #> [1] TRUE
>>> 
>>> rs$run(function() 1+1)
>>> #> Error: interrupt
>>> 
>>> rs$run(function() 1+1)
>>> #> [1] 2
>>> 
>>> It seems that the main loop somehow stores the SIGINT it receives
>>> while it is waiting on stdin, and then it triggers it when some input
>>> comes in. Maybe. Just speculating.
>>> 
>>> Thanks,
>>> Gabor
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] [External] Re: Background R session on Unix and SIGINT

2019-05-01 Thread Simon Urbanek
Gabor,

I think you're talking about two independent things. You can interrupt the 
computation, no question about that. It's just that if you send an interrupt 
while you're *not* doing any computations, it will be signaled but not raised 
until the interrupts are checked since there is no one to check it. This goes 
back to my original response - the interactive REPL calls 
R_CheckUserInterrupt(), but the straight stdin-processing doesn't (since it's 
expected to be a script, not an interactive prompt). If you just want to clear 
interrupts before next processing you can either just run 
R_CheckUserInterrupt() explicitly, or on R side do anything that does that, 
e.g. to take your example "tryCatch(Sys.sleep(0), interrupt = function(e) e)" 
will clear it.

Cheers,
Simon


> On Apr 30, 2019, at 7:03 PM, Gábor Csárdi  wrote:
> 
> Unfortunately --interactive also makes the session interactive(),
> which is bad for me, as it is a background session.
> 
> In general, I don't want the interactive behavior, but was wondering
> if I could send as SIGINT to try to interrupt the computation of the
> background process, and if that does not work, then I would send a
> SIGKILL and start up another process. It all works nicely, except for
> this glitch, but I think I can work around it.
> 
> Thanks,
> Gabor
> 
> On Tue, Apr 30, 2019 at 10:55 PM Tierney, Luke  wrote:
>> 
>> A Simon pointed out the interrupt is recorded but not processed until
>> a safe point.
>> 
>> When reading from a fifo or pipe R runs non-interactive, which means
>> is sits in a read() system call and the interrupt isn't seen until
>> sometime during evaluation when a safe checkpoint is reached.
>> 
>> When reading from a terminal R will use select() to wait for input and
>> periodically wake and check for interrupts. In that case the interrupt
>> will probably be seen sooner.
>> 
>> If the interactive behavior is what you want you can add --interactive
>> to the arguments used to start R.
>> 
>> Best,
>> 
>> luke
>> 
>> On Tue, 30 Apr 2019, Gábor Csárdi wrote:
>> 
>>> OK, I managed to create an example without callr, but it is still
>>> somewhat cumbersome. Anyway, here it is.
>>> 
>>> Terminal 1:
>>> mkfifo fif
>>> R --no-readline --slave --no-save --no-restore < fif
>>> 
>>> Terminal 2:
>>> cat > fif
>>> Sys.getpid()
>>> 
>>> This will make Terminal 1 print the pid of the R process, so we can
>>> send a SIGINT:
>>> 
>>> Terminal 3:
>>> kill -INT pid
>>> 
>>> The R process is of course still running happily.
>>> 
>>> Terminal 2 again:
>>> tryCatch(Sys.sleep(10), interrupt = function(e) e)
>>> 
>>> and then Terminal 1 prints the interrupt condition:
>>> 
>>> 
>>> This is macOS and 3.5.3, although I don't think it matters much.
>>> 
>>> Thanks much!
>>> G.
>>> 
>>> On Tue, Apr 30, 2019 at 9:50 PM Simon Urbanek
>>>  wrote:
>>>> 
>>>> Can you give an example without callr? The key is how the process is 
>>>> started and what it is doing, which is entirely opaque in callr.
>>>> 
>>>> Windows doesn't have signals, so the process there is entirely different. 
>>>> Most of the WIN32 processing is event-based.
>>>> 
>>>> Cheers,
>>>> Simon
>>>> 
>>>> 
>>>>> On Apr 30, 2019, at 4:17 PM, Gábor Csárdi  wrote:
>>>>> 
>>>>> Yeah, I get that they are async.
>>>>> 
>>>>> What happens is that the background process is not doing anything when
>>>>> the process gets a SIGINT. I.e. the background process is just
>>>>> listening on its standard input.
>>>>> 
>>>>> AFAICT for an interactive process such a SIGINT is just swallowed,
>>>>> with a newline outputted to the terminal.
>>>>> 
>>>>> But apparently, for this background process, it is not swallowed, and
>>>>> it is triggered later. FWIW it does not happen on Windows, not very
>>>>> surprisingly.
>>>>> 
>>>>> Gabor
>>>>> 
>>>>> On Tue, Apr 30, 2019 at 9:13 PM Simon Urbanek
>>>>>  wrote:
>>>>>> 
>>>>>> Interrupts are not synchronous in R - the signal only flags the request 
>>>>>> for interruption. Nothing actually happens until R_CheckUserInt

Re: [Rd] Pcre install

2019-05-14 Thread Simon Urbanek
sudo apt-get install libpcre3-dev

please read the docs [R-admin: A.1 Essential programs and libraries] - you may 
be missing more dependencies


> On May 14, 2019, at 9:44 AM, yueli  wrote:
> 
> Hello,
> 
> 
> 
> I downloaded R-3.6.0.tar.gz from https://cran.r-project.org/src/base/R-3/.
> 
> 
> I tried to install R-3.6.0.tar.gz in Ubuntu system.
> 
> 
> Thanks in advance for any help!
> 
> 
> yue
> 
> 
> 
> 
> checking for pcre.h... yes
> checking pcre/pcre.h usability... no
> checking pcre/pcre.h presence... no
> checking for pcre/pcre.h... no
> checking if PCRE version >= 8.20, < 10.0 and has UTF-8 support... no
> checking whether PCRE support suffices... configure: error: pcre >= 8.20 
> library and headers are required
> 
> 
> 
> 
> 
> 
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] most robust way to call R API functions from a secondary thread

2019-05-20 Thread Simon Urbanek
Stepan,

Andreas gave a lot more thought to the points you question in your reply. His 
question was how you can avoid what you were proposing and have proper 
threading under safe conditions. Having dealt with this before, I think 
Andreas' write up is pretty much the most complete analysis I have seen. I'd 
wait for Luke to chime in as the ultimate authority if he gets to it.

The "classic" approach which you mention is to collect and allocate everything, 
then execute parallel code and then return. What Andreas is proposing is 
obviously much more efficient: you only synchronize on R API calls which are 
likely a small fraction of the entire time while you keep all threads alive. 
His question was how to do that safely. (BTW: I really like the touch of 
counting frames that toplevel exec can use ;) - it may make sense to deal with 
that edge-case in R if we can ...).

Cheers,
Simon




> On May 20, 2019, at 5:45 AM, Stepan  wrote:
> 
> Hi Andreas,
> 
> note that with the introduction of ALTREP, as far as I understand, calls as 
> "simple" as DATAPTR can execute arbitrary code (R or native). Even without 
> ALTREP, if you execute user-provided R code via Rf_eval and such on some 
> custom thread, you may end up executing native code of some package, which 
> may assume it is executed only from the R main thread.
> 
> Could you (1) decompose your problem in a way that in some initial phase you 
> pull all the necessary data from R, then start the parallel computation, and 
> then again in the R main thread "submit" the results back to the R world?
> 
> If you wanted something really robust, you can (2) "send" the requests for R 
> API usage to the R main thread and pause the worker thread until it receives 
> the results back. This looks similar to what the "later" package does. Maybe 
> you can even use that package for your purposes?
> 
> Do you want to parallelize your code to achieve better performance? Even with 
> your proposed solution, you need synchronization and chances are that 
> excessive synchronization will severely affect the expected performance 
> benefits of parallelization. If you do not need to synchronize that much, 
> then the question is if you can do with (1) or (2).
> 
> Best regards,
> Stepan
> 
> On 19/05/2019 11:31, Andreas Kersting wrote:
>> Hi,
>> As the subject suggests, I am looking for the most robust way to call an 
>> (arbitrary) function from the R API from another but the main POSIX thread 
>> in a package's code.
>> I know that, "[c]alling any of the R API from threaded code is ‘for experts 
>> only’ and strongly discouraged. Many functions in the R API modify internal 
>> R data structures and might corrupt these data structures if called 
>> simultaneously from multiple threads. Most R API functions can signal 
>> errors, which must only happen on the R main thread." 
>> (https://cran.r-project.org/doc/manuals/r-release/R-exts.html#OpenMP-support)
>> Let me start with my understanding of the related issues and possible 
>> solutions:
>> 1) R API functions are generally not thread-safe and hence one must ensure, 
>> e.g. by using mutexes, that no two threads use the R API simultaneously
>> 2) R uses longjmps on error and interrupts as well as for condition handling 
>> and it is undefined behaviour to do a longjmp from one thread to another; 
>> interrupts can be suspended before creating the threads by setting 
>> R_interrupts_suspended = TRUE; by wrapping the calls to functions from the R 
>> API with R_ToplevelExec(), longjmps across thread boundaries can be avoided; 
>> the only reason for R_ToplevelExec() itself to fail with an R-style error 
>> (longjmp) is a pointer protection stack overflow
>> 3) R_CheckStack() might be executed (indirectly), which will (probably) 
>> signal a stack overflow because it only works correctly when called from the 
>> main thread (see 
>> https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Threading-issues);
>>  in particular, any function that does allocations, e.g. via allocVector3() 
>> might end up calling it via GC -> finalizer -> ... -> eval; the only way 
>> around this problem which I could find is to adjust R_CStackLimit, which is 
>> outside of the official API; it can be set to -1 to disable the check or be 
>> changed to a value appropriate for the current thread
>> 4) R sets signal handlers for several signals and some of them make use of 
>> the R API; hence, issues 1) - 3) apply; signal masks can be used to block 
>>

Re: [Rd] Race condition on parallel package's mcexit and rmChild

2019-05-20 Thread Simon Urbanek
Because that's the communication protocol between the parent and child. There 
is a difference between unsolicited exit and empty result exit.

Cheers,
Simon


> On May 20, 2019, at 11:22 AM, Sun Yijiang  wrote:
> 
> Have read the latest code, but I still don't understand why mc_exit
> needs to write zero on exit.  If a child closes its pipe, parent will
> know that on next select.
> 
> Best,
> Yijiang
> 
> Tomas Kalibera  于2019年5月20日周一 下午10:52写道:
>> 
>> This issue has already been addressed in 76462 (R-devel) and also ported
>> to R-patched. In fact rmChild() is used in mccollect(wait=FALSE).
>> 
>> Best
>> Tomas
>> 
>> On 5/19/19 11:39 AM, Sun Yijiang wrote:
>>> I've been hacking with parallel package for some time and built a
>>> parallel processing framework with it.  However, although very rarely,
>>> I did notice "ignoring SIGPIPE signal" error every now and then.
>>> After a deep dig into the source code, I think I found something worth
>>> noticing.
>>> 
>>> In short, wring to pipe in the C function mc_exit(SEXP sRes) may cause
>>> a SIGPIPE.  Code from src/library/parallel/src/fork.c:
>>> 
>>> SEXP NORET mc_exit(SEXP sRes)
>>> {
>>> int res = asInteger(sRes);
>>> ... ...
>>> if (master_fd != -1) { /* send 0 to signify that we're leaving */
>>> size_t len = 0;
>>> /* assign result for Fedora security settings */
>>> ssize_t n = write(master_fd, &len, sizeof(len));
>>> ... ...
>>> }
>>> 
>>> So a pipe write is made in mc_exit, and here's how this function is
>>> used in src/library/parallel/R/unix/mcfork.R:
>>> 
>>> mcexit <- function(exit.code = 0L, send = NULL)
>>> {
>>> if (!is.null(send)) try(sendMaster(send), silent = TRUE)
>>> .Call(C_mc_exit, as.integer(exit.code))
>>> }
>>> 
>>> Between sendMaster() and mc_exit() calls, which are made in the child
>>> process, the master process may call readChild() followed by
>>> rmChild().  rmChild closes the pipe on the master side, and if it's
>>> called before child calls mc_exit, a SIGPIPE will be raised when child
>>> tries to write to the pipe in mc_exit.
>>> 
>>> rmChild is defined but not used in parallel package, so this problem
>>> won't surface in most cases.  However, it is a useful API and may be
>>> used by users like me for advanced control over child processes.  I
>>> hope we can discuss a solution on it.
>>> 
>>> In fact, I don't see why we need to write to the pipe on child exit
>>> and how it has anything to do with "Fedora security settings" as
>>> suggested in the comments.  Removing it, IMHO, would be a good and
>>> clean way to solve this problem.
>>> 
>>> Regards,
>>> Yijiang
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
>> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] HTTPS warning on developer.r-project.org

2019-05-22 Thread Simon Urbanek
More to the point: the custom search function is currently broken anyway - it 
just gives me 404.

Should we just get rid of it? If people want to use Google they can just say

site:developer.r-project.org foo





> On May 22, 2019, at 1:08 AM, Paul Menzel  wrote:
> 
> [Please CC me on replies, as I am not subscribed.]
> 
> Dear R folks,
> 
> 
> Accessing the *R Developer Page* [1], the browser (Firefox) shows an HTTPS 
> warning.
> 
> The reason is the embedded Google logo.
> 
>> Gemischte (unsichere) Anzeige-Inhalte von 
>> "http://www.google.com/logos/Logo_40wht.gif"; werden auf einer sicheren Seite 
>> geladen
> Could you change that to an HTTPS link please?
> 
> ```
> $ curl -I https://www.google.com/logos/Logo_40wht.gif
> HTTP/2 200
> accept-ranges: bytes
> content-type: image/gif
> content-length: 3845
> date: Wed, 22 May 2019 05:07:35 GMT
> expires: Wed, 22 May 2019 05:07:35 GMT
> cache-control: private, max-age=31536000
> last-modified: Thu, 08 Dec 2016 01:00:57 GMT
> x-content-type-options: nosniff
> server: sffe
> x-xss-protection: 0
> alt-svc: quic=":443"; ma=2592000; v="46,44,43,39"
> 
> ```
> 
> 
> Kind regards,
> 
> Paul
> 
> 
> [1]: https://developer.r-project.org/
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Possible bug when finding shared libraries during staged installation

2019-05-24 Thread Simon Urbanek
I'll have a look at the code - I don't think I actually reviewed all those 
macOS modifications - I wasn't even aware that they were added to the code.


> On May 24, 2019, at 08:52, Martin Maechler  wrote:
> 
>> Kara Woo 
>>on Thu, 23 May 2019 14:24:26 -0700 writes:
> 
>> Hi all,
>> With the new staged installation, it seems that R CMD INSTALL sometimes
>> fails on macOS due to these lines [1] when sapply() returns a list. The
>> x13binary package has an example [2], reproducible with the following steps:
> 
>> $ git clone g...@github.com:x13org/x13binary.git && cd x13binary
>> $ git checkout 663ad7122
>> $ R CMD INSTALL .
> 
>> (We've also run into it in an internal package, but it's easier to
>> reproduce with x13binary)
> 
>> In this case the file command returns multiple results for one of the
>> dynamic libraries, so are_shared looks like this:
> 
>>> are_shared
>> $`/Users/Kara/projects/forks/x13binary/inst//lib/libgcc_s.1.dylib`
>> [1] TRUE TRUE TRUE
> 
>> $`/Users/Kara/projects/forks/x13binary/inst//lib/libgfortran.3.dylib`
>> [1] TRUE
> 
>> $`/Users/Kara/projects/forks/x13binary/inst//lib/libquadmath.0.dylib`
>> [1] TRUE
> 
> Thank you, Kara.
> 
> Just for curiosity, what does
> 
> file /Users/Kara/projects/forks/x13binary/inst//lib/libgcc_s.1.dylib
> 
> produce on your Mac?
> 
>> slibs[are_shared] then fails with invalid subscript type 'list'.
> 
> yes, "of course".
> 
>> I believe this may be a bug and I have included a patch that uses any() and
>> vapply() to ensure that only one value is returned for each library and the
>> result is an atomic vector. This is my first time submitting a bug report
>> or patch here; I'm happy to make any changes if needed.
> 
> Your patch was not attached with MIME type   text/plain  and so
> was filtered out by the mailing list software.
> OTOH, I could relatively easily guess how to fix the bug,
> notably when seeing the above "file ...dylib" result.
> 
> What we *meant* to say in  https://www.r-project.org/bugs.html 
> is that in such a situation
> 1) you send your finding / suspicion / diagnosis
>   to the R-devel mailing list,  in order to get confirmation etc
>   if what you see is a bug;
> 2) then ideally, you'd do a formal bug report at
>   https://bugs.r-project.org/
>   (for which you need to get an "account" there to be created
>once only by a bugzilla admin, typically an R core member).
> 
> In this case, that (2) may not be necessary, but you may want
> that anyway (and let some of us know).
> 
>> Thanks for considering,
>> Kara
> 
> Thank *you* indeed for the report,
> Martin
> 
>> [1]
>> https://github.com/wch/r-source/blob/3fe2bb01e9ec1b268803a437c308742775c2442d/src/library/tools/R/install.R#L594-L597
>> [2] https://github.com/x13org/x13binary/issues/46
> 
>> R version 3.6.0 Patched (2019-05-22 r76579)
>> Platform: x86_64-apple-darwin15.6.0 (64-bit)
>> Running under: macOS Mojave 10.14.4
> 
> --
> Martin Maechler
> ETH Zurich  and  R Core Team
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] use of buffers in sprintf and snprintf

2019-05-30 Thread Simon Urbanek
No, that will make it even worse since you'll be declaring a lot more memory 
than you actually have.

The real problem is that you're ignoring the truncation, so you probably want 
to use something like

if (snprintf(tempname, sizeof(tempname), "%s.%d", of1name, j) >= 
sizeof(tempname)) Rf_error("file name is too long");

BTW: most OSes have path limits that are no lower than 256, so you 
should allow at least as much.

Cheers,
Simon




> On May 29, 2019, at 11:49 AM, jing hua zhao  wrote:
> 
> Dear R-developers,
> 
> I am struggling with packaging, with sprintf() and snprintf() giving the following 
> WARNINGS from gcc 9.x,
> 
>  hap_c.c:380:46: warning: '%d' directive output may be truncated writing 
> between 1 and 10 bytes into a region of size between 0 and 127 
> [-Wformat-truncation=]
>  hap_c.c:392:46: warning: '%d' directive output may be truncated writing 
> between 1 and 10 bytes into a region of size between 0 and 127 
> [-Wformat-truncation=]
> 
> Essentially, I have
> 
> #define MAX_FILENAME_LEN 128
> char of1name[MAX_FILENAME_LEN],of2name[MAX_FILENAME_LEN], 
> tempname[MAX_FILENAME_LEN];
> 
> ...
> 
> snprintf(tempname,sizeof(tempname),"%s.%d", of1name, j);
> 
> It looks I could get around with
> 
> 
> #define MAX_FILENAME_LEN 128
> 
> #define MAX_FILENAME_LEN2 256
> 
> char of1name[MAX_FILENAME_LEN],of2name[MAX_FILENAME_LEN], 
> tempname[MAX_FILENAME_LEN2];
> 
> ...
> snprintf(tempname,2*sizeof(tempname)+1,"%s.%d", of1name, j)
> 
> It looks like a bit of a waste of resources to me.
> 
> 
> Any idea will be greatly appreciated,
> 
> 
> 
> Jing Hua
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Offer zip builds

2019-06-07 Thread Simon Urbanek
Just to add to that point - it is expected that the registry is appropriately 
updated so the correct version of R can be located. Just unpacking a ZIP won't 
work in general since tools using R have no reliable way to find it.

Cheers,
Simon


> On Jun 6, 2019, at 6:33 AM, Jeroen Ooms  wrote:
> 
> On Tue, Jun 4, 2019 at 5:40 PM Steven Penny  wrote:
>> 
>> Theres nothing nefarious here. It would allow people to use the R environment
>> without running an installer. If someone is a new user they may want to try
>> R out, and installers can be invasive as they commonly:
>> 
>> - copy files to install dir
>> - copy files to profile dir
>> - set registry entries
>> - set environment variables
>> - set start menu entries
>> 
>> and historically uninstallers have a bad record of reverting these changes.
>> We should not put this burden upon new users or have them resort to a virtual
>> machine to avoid the items above. Having a ZIP file allows new users to run the
>> R environment, then if they like it perhaps they can run the installer going
>> forward.
> 
> This is a valid suggestion, but probably impossible to do reliably.
> Most installers (the R one is completely open source btw) perform
> those steps for a reason. It is great if software can be installed
> simply by extracting a zip file somewhere, but if this is what you
> desire, you're using the wrong operating system.
> 
> We only offer official installation options that work 100% reliably
> and I don't think this can be accomplished with a zip file. For
> example a zip file won't be able to set the installation location in
> the registry, and hence other software such as RStudio won't be able
> to find the R installation. Also a zip installation might mix up
> package libraries from different R versions (which is bad), or users
> might expect they can upgrade R by overwriting their installation with
> a new zip (also bad). Hence I'm afraid offering such alternative
> installation options would open a new can of worms with bug reports
> from Windows users with broken installations, or packages that don't
> work as expected.
> 
> As for alternatives, 'rportable' and 'innoextract' have already been
> mentioned if you really just want to dump the files from the
> installer, if that works for you. Another popular option to install
> (any) Windows software without manually running installers is using
> chocolatey, for example:
> 
>  choco install miktex
>  choco install r.project
> 
> This will still indirectly use official installers, but the installers
> have been verified as "safe" by external folks and the installation is
> completely automated. Perhaps that's another compromise you could live
> with.
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R C API resize matrix

2019-06-17 Thread Simon Urbanek
A matrix is just a vector with a dim attribute. Assuming it is not referenced 
by anyone, you can set any values to the dim attribute. As for the vector, you 
can use SET_LENGTH() to shorten it - but I'm not sure how official it is - it 
was originally designed to work, but there were abuses of TRUELENGTH so not 
sure where we stand now (shortened vectors used to fool the garbage collector 
as far as object sizes go). I wouldn't do it unless you're dealing with really 
huge matrices.

Cheers,
Simon


> On Jun 14, 2019, at 5:31 PM, Morgan Morgan  wrote:
> 
> Hi,
> 
> Is there a way to resize a matrix defined as follows:
> 
> SEXP a = PROTECT(allocMatrix(INTSXP, 10, 2));
> int *pa  = INTEGER(a)
> 
> To row = 5 and col = 1 or do I have to allocate a second matrix "b" with
> pointer *pb and do a "for" loop to transfer the value of a to b?
> 
> Thank you
> Best regards
> Morgan
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] possible bug in R's configure check for C++11 features

2019-09-03 Thread Simon Urbanek
Kasper,

I haven’t checked in depth, so just to clarify: you *are* setting CXX11=g++ so 
it is doing what you asked it to. Since the settings are inherited upwards, 
this implies that you are setting both CXX14 and CXX17 to g++. So I’m not quite 
sure I understand your concern.
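A hypothetical fix following Simon's point (flags and prefix paths trimmed for brevity): set the cross-compiler once via CXX and let CXX11/CXX14/CXX17 inherit it, rather than overriding CXX11 with a bare "g++" that may not exist on the PATH.

```sh
# Sketch only - not the full configure call from the report.
../R-src/configure \
    CC="x86_64-conda_cos6-linux-gnu-gcc" \
    CXX="x86_64-conda_cos6-linux-gnu-g++" \
    CXX11STD="-std=c++11"
    # no CXX11= override: CXX11, CXX14 and CXX17 default to $CXX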

Cheers,
Simon



> On Sep 3, 2019, at 9:02 PM, Kasper Daniel Hansen 
>  wrote:
> 
> I am trying to compile R under a new setup, and frankly, I have had a lot
> of problems, but I think the stuff below points to a possible bug in R's
> (custom) configure checks for C++11/14/17, but not for C++98.
> 
> This is a report about R from the R-3-6 branch, with a svn checkout from
> today, revision r77135.
> 
> In my case the compiler name is x86_64-conda_cos6-linux-gnu-g++, not g++. I
> denote this in my configure call, using the CC variable. A snippet of the
> full configure is
> 
> ../${SRCDIR}/configure SHELL='/bin/bash' \
>   --prefix="${CONDA_PREFIX}/R/${R_VERSION}" \
>   CC="x86_64-conda_cos6-linux-gnu-gcc" \
>   CXX="x86_64-conda_cos6-linux-gnu-g++" \
>   F77="x86_64-conda_cos6-linux-gnu-gfortran" \
>   FC="$F77" \
>   CFLAGS="-Wall -mtune=amdfam10 -g -O2 -I${CONDA_PREFIX}/include"\
>   CXXFLAGS="-Wall -mtune=amdfam10 -g -O2 -I${CONDA_PREFIX}/include" \
>   F77FLAGS="-Wall -g -O2 -mtune=amdfam10 -I${CONDA_PREFIX}/include" \
>   CXX11="g++" \
>   CXX11STD="-std=c++11" \
>   CXX11FLAGS="-Wall -mtune=amdfam10 -g -O2 -I${CONDA_PREFIX}/include" \
>   CXX11PICFLAGS="-fPIC" \
> 
> Where $CONDA_PREFIX is given in my script.
> 
> The output in config.log is given below. Note that in the test for c++98,
> it uses the "right" CC, but in the test for c++11 it uses g++. This looks
> wrong to me:
> 
> configure:28111: checking whether x86_64-conda_cos6-linux-gnu-g++  supports
> C++98 features with -std=gnu++98
> configure:28130: x86_64-conda_cos6-linux-gnu-g++  -std=gnu++98 -c -Wall
> -mtune=amdfam10 -g -O2
> -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
> -fpic -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 conftest.cpp >&5
> configure:28130: $? = 0
> configure:28139: result: yes
> configure:28315: checking whether g++ -std=c++11 supports C++11 features
> configure:28607: g++ -std=c++11 -c -Wall -mtune=amdfam10 -g -O2
> -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
> -fPIC -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 conftest.cpp >&5
> ../R-3.6-src/configure: line 2355: g++: command not found
> configure:28607: $? = 127
> configure: failed program was:
> 
> I have similar issues (wrong CC used when compiling the test program) with
> the test for c++14, whereas the test for c++17 has empty space where the CC
> variable should be?
> 
> I can fix this issue by adding a soft link in my PATH from g++ to my
> compiler of choice. In this case configure finishes and reports that I have
> full C++17 capabilities. Weirdly, in the output, note that the C++ compiler
> is "wrong" again, despite my configure call:
> 
>  Source directory:../R-3.6-src
>  Installation directory:
> /jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/R/3.6
> 
>  C compiler:  x86_64-conda_cos6-linux-gnu-gcc  -Wall
> -mtune=amdfam10 -g -O2
> -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  Fortran fixed-form compiler:
> /jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/bin/x86_64-conda_cos6-linux-gnu-gfortran
> -fno-optimize-sibling-calls -fopenmp -march=nocona -mtune=haswell
> -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2
> -ffunction-sections -pipe
> 
>  Default C++ compiler:g++ -std=c++11   -Wall -mtune=amdfam10 -g
> -O2 -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  C++98 compiler:  x86_64-conda_cos6-linux-gnu-g++ -std=gnu++98
> -Wall -mtune=amdfam10 -g -O2
> -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  C++11 compiler:  g++ -std=c++11   -Wall -mtune=amdfam10 -g
> -O2 -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  C++14 compiler:  g++ -std=gnu++14   -Wall -mtune=amdfam10 -g
> -O2 -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  C++17 compiler:  g++ -std=gnu++17  -Wall -mtune=amdfam10 -g
> -O2 -I/jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/include
>  Fortran free-form compiler:
> /jhpce/shared/jhpce/core/conda/miniconda3-4.6.14/envs/svnR-3.6/bin/x86_64-conda_cos6-linux-gnu-gfortran
> -fno-optimize-sibling-calls
>  Obj-C compiler:
> 
> 
> 
> -- 
> Best,
> Kasper
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] [External] R C api for 'inherits' S3 and S4 objects

2019-11-01 Thread Simon Urbanek
Note that your desire is by definition impossible - as your example also shows, 
checking for S4 inheritance involves evaluation and thus allocation, which 
cannot be avoided because of the dynamic design of S4 inheritance.

Cheers,
Simon


> On Nov 1, 2019, at 9:23 AM, Jan Gorecki  wrote:
> 
> Thank you Luke.
> That is why I don't use Rf_inherits but INHERITS which does not
> allocate, provided in the email body.
> I cannot do similarly for S4 classes, thus asking for some API for that.
> 
> On Fri, Nov 1, 2019 at 5:56 PM Tierney, Luke  wrote:
>> 
>> On Fri, 1 Nov 2019, Jan Gorecki wrote:
>> 
>>> Dear R developers,
>>> 
>>> Motivated by discussion about checking inheritance of S3 and S4
>>> objects (in head matrix/array topic) I would light to shed some light
>>> on a minor gap about that matter in R C API.
>>> Currently we are able to check inheritance for S3 class objects from C
>>> in a robust way (no allocation, thread safe). This is unfortunately
>> 
>> Your premise is not correct. Rf_inherits will not GC but it can
>> allocate and is not thread safe.
>> 
>> Best,
>> 
>> luke
>> 
>>> not possible for S4 classes. I would kindly request new function in R
>>> C api so it can be achieved for S4 classes with no risk of allocation.
>>> For reference mentioned functions below. Thank you.
>>> Jan Gorecki
>>> 
>>> // S3 inheritance
>>> bool INHERITS(SEXP x, SEXP char_) {
>>> SEXP klass;
>>> if (isString(klass = getAttrib(x, R_ClassSymbol))) {
>>>   for (int i=0; i<LENGTH(klass); i++) {
>>> if (STRING_ELT(klass, i) == char_) return true;
>>>   }
>>> }
>>> return false;
>>> }
>>> // S4 inheritance
>>> bool Rinherits(SEXP x, SEXP char_) {
>>> SEXP vec = PROTECT(ScalarString(char_));
>>> SEXP call = PROTECT(lang3(sym_inherits, x, vec));
>>> bool ans = LOGICAL(eval(call, R_GlobalEnv))[0]==1;
>>> UNPROTECT(2);
>>> return ans;
>>> }
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
>> 
>> --
>> Luke Tierney
>> Ralph E. Wareham Professor of Mathematical Sciences
>> University of Iowa  Phone: 319-335-3386
>> Department of Statistics andFax:   319-335-3017
>>Actuarial Science
>> 241 Schaeffer Hall  email:   luke-tier...@uiowa.edu
>> Iowa City, IA 52242 WWW:  http://www.stat.uiowa.edu
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] running R with users home dirs on a shared filesystems

2019-12-13 Thread Simon Urbanek
User home is not used by R directly, so it is really up to whatever 
package/code may be using user home. In our setup we have all machines using 
NFS mounted homes for years. From experience the only thing to watch for are 
packages that use their own cache directories in $HOME instead of tempdir() - 
it is technically against CRAN policies but we have seen it in the wild.

Cheers,
Simon



> On Dec 13, 2019, at 1:36 PM, lejeczek via R-devel  
> wrote:
> 
> Hi guys,
> 
> I want to ask devel for who knows better - having multiple
> nodes serving users home dirs off the same shared network
> filesystem : are there any precautions or must-dos &
> must-donts in order to assure healthy and efficient parallel
> Rs running simultaneously - and I don't mean obvious stuff,
> I'm rather asking about R's internals & environment.
> 
> simple example: three nodes mount a NFS share and users on
> all three nodes run R simultaneously.
> 
> many thanks, L.
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Inconsistent behavior for the C AP's R_ParseVector() ?

2019-12-14 Thread Simon Urbanek
Laurent,

the main point here is that ParseVector() just like any other R API has to be 
called in a correct context since it can raise errors, so the issue was that 
your C code was not setting up R correctly (my guess would be you're not 
creating the initial context necessary in embedded R). There are many different 
errors; yours is just one of many that can occur - any R API call that does 
allocation (and parsing obviously does) can cause errors. Note that this is 
true for pretty much all R API functions.

Cheers,
Simon



> On Dec 14, 2019, at 11:25 AM, Laurent Gautier  wrote:
> 
> Le lun. 9 déc. 2019 à 09:57, Tomas Kalibera  a
> écrit :
> 
>> On 12/9/19 2:54 PM, Laurent Gautier wrote:
>> 
>> 
>> 
>> Le lun. 9 déc. 2019 à 05:43, Tomas Kalibera  a
>> écrit :
>> 
>>> On 12/7/19 10:32 PM, Laurent Gautier wrote:
>>> 
>>> Thanks for the quick response Tomas.
>>> 
>>> The same error is indeed happening when trying to have a zero-length
>>> variable name in an environment. The surprising bit is then "why is this
>>> happening during parsing" (that is why are variables assigned to an
>>> environment) ?
>>> 
>>> The emitted R error (in the R console) is not a parse (syntax) error, but
>>> an error emitted during parsing when the parser tries to intern a name -
>>> look it up in a symbol table. Empty string is not allowed as a symbol name,
>>> and hence the error. In the call "list(''=1)" , the empty name is what
>>> could eventually become a name of a local variable inside list(), even
>>> though not yet during parsing.
>>> 
>> 
>> Thanks Tomas.
>> 
>> I guess this has do with R expressions being lazily evaluated, and names
>> of arguments in a call are also part of the expression. Now the puzzling
>> part is why is that at all part of the parsing: I would have expected
>> R_ParseVector() to be restricted to parsing... Now it feels like
>> R_ParseVector() is performing parsing, and a first level of evalution for
>> expressions that "should never work" (the empty name).
>> 
>> Think of it as an exception in say Python. Some failures during parsing
>> result in an exception (called error in R and implemented using a long
>> jump). Any time you are calling into R you can get an error; out of memory
>> is also signalled as R error.
>> 
> 
> 
> The surprising bit for me was that I had expected the function to solely
> perform parsing. I did not expect an exception (and a jmp smashing the stack)
> when the function concerned is in the C-API, is parsing a string, and is
> using a parameter (pointer) to store whether parsing was a failure or a
> success.
> 
> Since you are making a comparison with Python, the distinction I am making
> between parsing and evaluation seem to apply there. For example:
> 
> ```
> >>> import parser
> >>> parser.expr('1+')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<string>", line 1
>     1+
>      ^
> SyntaxError: unexpected EOF while parsing
> >>> p = parser.expr('list(""=1)')
> >>> p
> <parser.st object at 0x...>
> >>> eval(p)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: eval() arg 1 must be a string, bytes or code object
> >>> list(""=1)
>   File "<stdin>", line 1
> SyntaxError: keyword can't be an expression
> ```
> 
> 
>> There is probably some error in how the external code is handling R
>>> errors  (Fatal error: unable to initialize the JIT, stack smashing, etc)
>>> and possibly also how R is initialized before calling ParseVector. Probably
>>> you would get the same problem when running say "stop('myerror')". Please
>>> note R errors are implemented as long-jumps, so care has to be taken when
>>> calling into R, Writing R Extensions has more details (and section 8
>>> specifically about embedding R). This is unlike parse (syntax) errors
>>> signaled via return value to ParseVector()
>>> 
>> 
>> The issue is that the segfault (because of stack smashing, therefore
>> because of what also suspected to be an incontrolled jump) is happening
>> within the execution of R_ParseVector(). I would think that an issue with
>> the initialization of R is less likely because the project is otherwise
>> used a fair bit and is well covered by automated continuous tests.
>> 
>> After looking more into R's gram.c I suspect that an execution context is
>> required for R_ParseVector() to know to properly work (know where to jump
>> in case of error) when the parsing code decides to fail outside what it
>> thinks is a syntax error. If the case, this would make R_ParseVector()
>> function well when called from say, a C-extension to an R package, but fail
>> the way I am seeing it fail when called from an embedded R.
>> 
>> Yes, contexts are used internally to handle errors. For external use
>> please see Writing R Extensions, section 6.12.
>> 
> 
> I have wrapped my call to R_ParseVector() in a R_tryCatchError(), and this
> is seems to help me overcome the issue. Thanks for the pointer.
> 
> Best,
> 
> 
> Laurent
> 
> 
>> Best
>> Tomas
>> 
>> 
>> Best,
>> 
>> Laurent
>> 
>>> Best,
>>> Tomas
>

Re: [Rd] Inconsistent behavior for the C AP's R_ParseVector() ?

2019-12-14 Thread Simon Urbanek
Laurent,


> On Dec 14, 2019, at 5:29 PM, Laurent Gautier  wrote:
> 
> Hi Simon,
> 
> Widespread errors would have caught my earlier as the way that code is
> using only one initialization of the embedded R, is used quite a bit, and
> is covered by quite a few unit tests. This is the only situation I am aware
> of in which an error occurs.
> 

It may or may not be "widespread" - almost all R API functions can raise errors 
(e.g., unable to allocate). You'll only find out once they do and that's too 
late ;).


> What is a "correct context", or initial context, the code should from ?
> Searching for "context" in the R-exts manual does not return much.
> 

It depends on which embedded API you use - see R-exts section 8.1: the two options are 
run_Rmainloop() and R_ReplDLLinit(), which both set up the top-level context with 
SETJMP. If you don't use either then you have to use one of the advanced R APIs 
that do it, such as R_ToplevelExec() or R_UnwindProtect(), otherwise the point 
to abort to on error doesn't exist. Embedding R is much more complex than many 
think ...

Cheers,
Simon



> Best,
> 
> Laurent
> 
> 
> Le sam. 14 déc. 2019 à 12:20, Simon Urbanek  a
> écrit :
> 
>> Laurent,
>> 
>> the main point here is that ParseVector() just like any other R API has to
>> be called in a correct context since it can raise errors so the issue was
>> that your C code was not setting up R correctly (my guess would be
>> you're not creating the initial context necessary in embedded R). There are
>> many different errors; yours is just one of many that can occur - any R API
>> call that does allocation (and parsing obviously does) can cause errors.
>> Note that this is true for pretty much all R API functions.
>> 
>> Cheers,
>> Simon
>> 
>> 
>> 
>>> On Dec 14, 2019, at 11:25 AM, Laurent Gautier 
>> wrote:
>>> 
>>> Le lun. 9 déc. 2019 à 09:57, Tomas Kalibera  a
>>> écrit :
>>> 
>>>> On 12/9/19 2:54 PM, Laurent Gautier wrote:
>>>> 
>>>> 
>>>> 
>>>> Le lun. 9 déc. 2019 à 05:43, Tomas Kalibera 
>> a
>>>> écrit :
>>>> 
>>>>> On 12/7/19 10:32 PM, Laurent Gautier wrote:
>>>>> 
>>>>> Thanks for the quick response Tomas.
>>>>> 
>>>>> The same error is indeed happening when trying to have a zero-length
>>>>> variable name in an environment. The surprising bit is then "why is
>> this
>>>>> happening during parsing" (that is why are variables assigned to an
>>>>> environment) ?
>>>>> 
>>>>> The emitted R error (in the R console) is not a parse (syntax) error,
>> but
>>>>> an error emitted during parsing when the parser tries to intern a name
>> -
>>>>> look it up in a symbol table. Empty string is not allowed as a symbol
>> name,
>>>>> and hence the error. In the call "list(''=1)" , the empty name is what
>>>>> could eventually become a name of a local variable inside list(), even
>>>>> though not yet during parsing.
>>>>> 
>>>> 
>>>> Thanks Tomas.
>>>> 
>>>> I guess this has do with R expressions being lazily evaluated, and names
>>>> of arguments in a call are also part of the expression. Now the puzzling
>>>> part is why is that at all part of the parsing: I would have expected
>>>> R_ParseVector() to be restricted to parsing... Now it feels like
>>>> R_ParseVector() is performing parsing, and a first level of evalution
>> for
>>>> expressions that "should never work" (the empty name).
>>>> 
>>>> Think of it as an exception in say Python. Some failures during parsing
>>>> result in an exception (called error in R and implemented using a long
>>>> jump). Any time you are calling into R you can get an error; out of
>> memory
>>>> is also signalled as R error.
>>>> 
>>> 
>>> 
>>> The surprising bit for me was that I had expected the function to solely
>>> perform parsing. I did not expect an exception (and a jmp smashing the stack)
>>> when the function concerned is in the C-API, is parsing a string, and is
>>> using a parameter (pointer) to store whether parsing was a failure or a
>>> success.
>>> 
>>> Since you are making a comparison with Python, the distinction I am
>> making
>>> between parsing and evaluation seem to apply there.

Re: [Rd] Downsized R configuration for flat deployment

2019-12-22 Thread Simon Urbanek
DM,

can you clarify what exactly you are concerned about? The standard 
build of R on Linux is relocatable without any special flags needed, since R 
doesn't use absolute paths but manages the library paths to its components itself, so 
it's very flexible (e.g., I use this feature to run R jobs on Hadoop clusters 
that don't have R installed as the directory where the job components are 
unpacked is dynamic). You can always disable whatever features you don't need. 
In your question I don't understand what you mean by "flat" - on Linux all 
binaries are flat by default so please clarify what you mean. It's easy to 
create a completely dynamic R deployment - and easy to install packages - both 
binary and from sources.

Cheers,
Simon


> On Dec 21, 2019, at 8:39 AM, dme...@gmail.com wrote:
> 
> Dear folks,
> 
> I'm testing a downsized R build - in features and obviously sizes -
> for a "modern" flat deployment (eg. like python virtualenv, just to
> name one).
> 
> Questions:
> 
> 1) Is flat style possible?
> 2) With this setup, R and packages can be installed/updated?
> 3) The directory can be easy renamed or moved?
> 
> ---
> 
> I didn't find any official git repo, so I downloaded latest stable
> R release (3.6.2), exploring different setups of configure options. 
> 
> # tested in debian 10
> # $ sudo apt-get build-dep r-base
> 
> $ mkdir ~/r-virtual
> $ cd R-3.6.2/
> $ ./configure --prefix=/home/dmedri/r-virtual/ \
>--exec-prefix=/home/dmedri/r-virtual/ \
>--without-recommended-packages \
>--without-cairo \
>--without-libtiff \
>--without-jpeglib \
>--without-x \
>--without-aqua \
>--without-tcltk \
>--disable-rpath
> 
> $ make && make install
> 
> This minimal R could run inside a container, no desktop, just the CLI,
> with just one graphical format in output (png). Java support was
> left in. I didn't find options to disable HTML docs. Recommended
> packages were left out, for a 2nd stage.
> 
> $ cd r-virtual/
> $ ./bin/R
> 
> The environment was there, "yes" to question 1).
> 
> I can install/update packages, obviously with required includes
> installed in the system, not fetched online -- a limit, looking at
> python pip, partially by design/choice. BTW answer to question 2)
> is "yes" for packages, "no" for the core.
> 
> Without the "--disable-rpath" option, abs paths break everything
> with the easy case of "renaming r-virtual directory". IMHO this option
> should be a general default, and another "yes" to question 3).
> 
> While I'm still investigating, tips are welcome... 
> 
> HTH
> 
> Best Regards,
> 
> DM
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2020-01-10 Thread Simon Urbanek
If I understand the thread correctly this is an RStudio issue and I would 
suggest that the developers consider using pthread_atfork() so RStudio can 
handle forking as they deem fit (bail out with an error or make RStudio work).  
Note that in principle the functionality requested here can be easily 
implemented in a package so R doesn’t need to be modified.

Cheers,
Simon

Sent from my iPhone

>> On Jan 10, 2020, at 04:34, Tomas Kalibera  wrote:
>> 
>> On 1/10/20 7:33 AM, Henrik Bengtsson wrote:
>> I'd like to pick up this thread started on 2019-04-11
>> (https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html).
>> Modulo all the other suggestions in this thread, would my proposal of
>> being able to disable forked processing via an option or an
>> environment variable make sense?
> 
> I don't think R should be doing that. There are caveats with using fork, and 
> they are mentioned in the documentation of the parallel package, so people 
> can easily avoid functions that use it, and this all has been discussed here 
> recently.
> 
> If it is the case, we can expand the documentation in parallel package, add a 
> warning against the use of forking with RStudio, but for that I it would be 
> good to know at least why it is not working. From the github issue I have the 
> impression that it is not really known why, whether it could be fixed, and if 
> so, where. The same github issue reflects also that some people want to use 
> forking for performance reasons, and even with RStudio, at least on Linux. 
> Perhaps it could be fixed? Perhaps it is just some race condition somewhere?
> 
> Tomas
> 
>> I've prototyped a working patch that
>> works like:
>>> options(fork.allowed = FALSE)
>>> unlist(parallel::mclapply(1:2, FUN = function(x) Sys.getpid()))
>> [1] 14058 14058
>>> parallel::mcmapply(1:2, FUN = function(x) Sys.getpid())
>> [1] 14058 14058
>>> parallel::pvec(1:2, FUN = function(x) Sys.getpid() + x/10)
>> [1] 14058.1 14058.2
>>> f <- parallel::mcparallel(Sys.getpid())
>> Error in allowFork(assert = TRUE) :
>>  Forked processing is not allowed per option ‘fork.allowed’ or
>> environment variable ‘R_FORK_ALLOWED’
>>> cl <- parallel::makeForkCluster(1L)
>> Error in allowFork(assert = TRUE) :
>>  Forked processing is not allowed per option ‘fork.allowed’ or
>> environment variable ‘R_FORK_ALLOWED’
>> The patch is:
>> Index: src/library/parallel/R/unix/forkCluster.R
>> ===
>> --- src/library/parallel/R/unix/forkCluster.R (revision 77648)
>> +++ src/library/parallel/R/unix/forkCluster.R (working copy)
>> @@ -30,6 +30,7 @@
>> newForkNode <- function(..., options = defaultClusterOptions, rank)
>> {
>> +allowFork(assert = TRUE)
>> options <- addClusterOptions(options, list(...))
>> outfile <- getClusterOption("outfile", options)
>> port <- getClusterOption("port", options)
>> Index: src/library/parallel/R/unix/mclapply.R
>> ===
>> --- src/library/parallel/R/unix/mclapply.R (revision 77648)
>> +++ src/library/parallel/R/unix/mclapply.R (working copy)
>> @@ -28,7 +28,7 @@
>> stop("'mc.cores' must be >= 1")
>> .check_ncores(cores)
>> -if (isChild() && !isTRUE(mc.allow.recursive))
>> +if (!allowFork() || (isChild() && !isTRUE(mc.allow.recursive)))
>> return(lapply(X = X, FUN = FUN, ...))
>> ## Follow lapply
>> Index: src/library/parallel/R/unix/mcparallel.R
>> ===
>> --- src/library/parallel/R/unix/mcparallel.R (revision 77648)
>> +++ src/library/parallel/R/unix/mcparallel.R (working copy)
>> @@ -20,6 +20,7 @@
>> mcparallel <- function(expr, name, mc.set.seed = TRUE, silent =
>> FALSE, mc.affinity = NULL, mc.interactive = FALSE, detached = FALSE)
>> {
>> +allowFork(assert = TRUE)
>> f <- mcfork(detached)
>> env <- parent.frame()
>> if (isTRUE(mc.set.seed)) mc.advance.stream()
>> Index: src/library/parallel/R/unix/pvec.R
>> ===
>> --- src/library/parallel/R/unix/pvec.R (revision 77648)
>> +++ src/library/parallel/R/unix/pvec.R (working copy)
>> @@ -25,7 +25,7 @@
>> cores <- as.integer(mc.cores)
>> if(cores < 1L) stop("'mc.cores' must be >= 1")
>> -if(cores == 1L) return(FUN(v, ...))
>> +if(cores == 1L || !allowFork()) return(FUN(v, ...))
>> .check_ncores(cores)
>> if(mc.set.seed) mc.reset.stream()
>> with a new file src/library/parallel/R/unix/allowFork.R:
>> allowFork <- function(assert = FALSE) {
>>value <- Sys.getenv("R_FORK_ALLOWED")
>>if (nzchar(value)) {
>>value <- switch(value,
>>   "1"=, "TRUE"=, "true"=, "True"=, "yes"=, "Yes"= TRUE,
>>   "0"=, "FALSE"=,"false"=,"False"=, "no"=, "No" = FALSE,
>>stop(gettextf("invalid environment variable value: %s==%s",
>>   "R_FORK_ALLOWED", value)))
>> value <- as.logical(value)

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2020-01-10 Thread Simon Urbanek
Henrik,

the example from the post works just fine in CRAN R for me - the post was about 
homebrew build so it's conceivably a bug in their libraries. That's exactly why 
I was proposing a more general solution where you can simply define a function 
in user-space that will issue a warning or stop on fork, it doesn't have to be 
part of core R, there are other packages that use fork() as well, so what I 
proposed is much safer than hacking the parallel package.

Cheers,
Simon
 


> On Jan 10, 2020, at 10:58 AM, Henrik Bengtsson  
> wrote:
> 
> The RStudio GUI was just one example.  AFAIK, and please correct me if
> I'm wrong, another example is where multi-threaded code is used in
> forked processing and that's sometimes unstable.  Yes another, which
> might be multi-thread related or not, is
> https://stat.ethz.ch/pipermail/r-devel/2018-September/076845.html:
> 
> res <- parallel::mclapply(urls, function(url) {
>  download.file(url, basename(url))
> })
> 
> That was reported to fail on macOS with the default method="libcurl"
> but not for method="curl" or method="wget".
> 
> Further documentation is needed and would help but I don't believe
> it's sufficient to solve everyday problems.  The argument for
> introducing an option/env var to disable forking is to give the end
> user a quick workaround for newly introduced bugs.  Neither the
> developer nor the end user has full control of the R package stack,
> which is always in flux.  For instance, above mclapply() code might
> have been in a package on CRAN and then all of a sudden
> method="libcurl" became the new default in base R.  The above
> mclapply() code is now buggy on macOS, and not necessarily caught by
> CRAN checks.  The package developer might not notice this because they
> are on Linux or Windows.  It can take a very long time before this
> problem is even noticed and even further before it is tracked down and
> fixed.   Similarly, as more and more code turns to native code and it
> becomes easier and easier to implement multi-threading, more and more
> of these bugs across package dependencies risk sneaking in the
> backdoor wherever forked processing is in place.
> 
> For the end user, but also higher-up upstream package developers, the
> quickest workaround would be to disable forking.  If you're conservative,
> you could even disable it for all of your R processing.  Being able to
> quickly disable forking will also provide a mechanism for quickly
> testing the hypothesis that forking is the underlying problem, i.e.
> "Please retry with options(fork.allowed = FALSE)" will become handy
> for troubleshooting.
> 
> /Henrik
> 
> On Fri, Jan 10, 2020 at 5:31 AM Simon Urbanek
>  wrote:
>> 
>> If I understand the thread correctly this is an RStudio issue and I would 
>> suggest that the developers consider using pthread_atfork() so RStudio can 
>> handle forking as they deem fit (bail out with an error or make RStudio 
>> work).  Note that in principle the functionality requested here can be 
>> easily implemented in a package so R doesn’t need to be modified.
>> 
>> Cheers,
>> Simon
>> 
>> Sent from my iPhone
>> 
>>>> On Jan 10, 2020, at 04:34, Tomas Kalibera  wrote:
>>>> 
>>>> On 1/10/20 7:33 AM, Henrik Bengtsson wrote:
>>>> I'd like to pick up this thread started on 2019-04-11
>>>> (https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html).
>>>> Modulo all the other suggestions in this thread, would my proposal of
>>>> being able to disable forked processing via an option or an
>>>> environment variable make sense?
>>> 
>>> I don't think R should be doing that. There are caveats with using fork, 
>>> and they are mentioned in the documentation of the parallel package, so 
>>> people can easily avoid functions that use it, and this all has been 
>>> discussed here recently.
>>> 
>>> If it is the case, we can expand the documentation in parallel package, add 
>>> a warning against the use of forking with RStudio, but for that I it would 
>>> be good to know at least why it is not working. From the github issue I 
>>> have the impression that it is not really known why, whether it could be 
>>> fixed, and if so, where. The same github issue reflects also that some 
>>> people want to use forking for performance reasons, and even with RStudio, 
>>> at least on Linux. Perhaps it could be fixed? Perhaps it is just some race 
>>> condition somewhere?
>>> 
>>> Tomas
>>> 
>>>> 

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2020-01-10 Thread Simon Urbanek
Henrik,

the whole point and only purpose of mc* functions is to fork. That's what the 
multicore package was about, so if you don't want to fork, don't use mc* 
functions - they don't have any other purpose. I really fail to see the point - 
if you use mc* functions you're very explicitly asking for forking - so your 
argument is like saying that print() should have an option to not print 
anything - it just makes no sense. If you have code that is fork-incompatible, 
you clearly cannot use it in mcparallel - that's why there is a very explicit 
warning in the documentation. As I said, if you have some software that embeds 
R and has issues with forks, then that software should use pthread_atfork() 
to control the behavior.

Cheers,
Simon



> On Jan 10, 2020, at 3:34 PM, Henrik Bengtsson  
> wrote:
> 
> On Fri, Jan 10, 2020 at 11:23 AM Simon Urbanek
>  wrote:
>> 
>> Henrik,
>> 
>> the example from the post works just fine in CRAN R for me - the post was 
>> about homebrew build so it's conceivably a bug in their libraries.
> 
> Thanks for ruling that example out.
> 
>> That's exactly why I was proposing a more general solution where you can 
>> simply define a function in user-space that will issue a warning or stop on 
>> fork, it doesn't have to be part of core R, there are other packages that 
>> use fork() as well, so what I proposed is much safer than hacking the 
>> parallel package.
> 
> I think this is worth pursuing and will help improve and stabilize
> things.  But issuing a warning or stop on fork will not let end
> users run the pipeline, or am I missing something?
> 
> I'm trying to argue that this is still a real problem that users and
> developers run into on a regular basis.  Since parallel::mclapply() is
> such a common and readily available solution it is also a low hanging
> fruit to make it possible to have those forking functions fall back to
> sequential processing.  The only(*) way to achieve this fall back
> right now is to run the same pipeline on MS Windows - I just think it
> would be very useful to have the same fallback option available on
> Unix and macOS.  Having this in base R could also serve as standard
> for other parallel/forking packages/implementations who also wish to
> have a fallback to sequential processing.
> 
> ==> What would the disadvantages be to provide a mechanism/setting for
> disabling forking in the parallel::mc*** API? <==
> 
> (*) One can also somewhat disable forking in 'parallel' by using
> 'cgroups' to limit the process to a single core (see also
> https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17641).  That will
> handle code that uses mc.cores = parallel::detectCores(), which there
> is a lot of.  I guess it will cause a run-time error ('mc.cores' must
> be >= 1) for code that uses the second most commonly used mc.cores =
> parallel::detectCores() - 1, which is unfortunately also very common.
> I find the use of hardcoded detectCores() unfortunate but that is a
> slightly different topic.  OTOH, if there were a standardized option
> in R for disabling all types of parallel processing by forcing a
> single core, one could imagine other parallel APIs to implement
> fallbacks to sequential processing as well. (I'm aware that not all
> use cases of async processing are about parallelization, so it might
> not apply everywhere).
> 
> Cheers,
> 
> Henrik
> 
>> 
>> Cheers,
>> Simon
>> 
>> 
>> 
>>> On Jan 10, 2020, at 10:58 AM, Henrik Bengtsson  
>>> wrote:
>>> 
>>> The RStudio GUI was just one example.  AFAIK, and please correct me if
>>> I'm wrong, another example is where multi-threaded code is used in
>>> forked processing and that's sometimes unstable.  Yet another, which
>>> might be multi-thread related or not, is
>>> https://stat.ethz.ch/pipermail/r-devel/2018-September/076845.html:
>>> 
>>> res <- parallel::mclapply(urls, function(url) {
>>> download.file(url, basename(url))
>>> })
>>> 
>>> That was reported to fail on macOS with the default method="libcurl"
>>> but not for method="curl" or method="wget".
>>> 
>>> Further documentation is needed and would help but I don't believe
>>> it's sufficient to solve everyday problems.  The argument for
>>> introducing an option/env var to disable forking is to give the end
>>> user a quick workaround for newly introduced bugs.  Neither the
>>> developer nor the end user has full control of the R package stack,
>>> whic

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2020-01-10 Thread Simon Urbanek



> On Jan 10, 2020, at 3:10 PM, Gábor Csárdi  wrote:
> 
> On Fri, Jan 10, 2020 at 7:23 PM Simon Urbanek
>  wrote:
>> 
>> Henrik,
>> 
>> the example from the post works just fine in CRAN R for me - the post was 
>> about homebrew build so it's conceivably a bug in their libraries.
> 
> I think it works now, because Apple switched to a different SSL
> library for libcurl. It usually crashes or fails on older macOS
> versions, with the CRAN build of R as well.
> 

That is not true - Apple has not changed the SSL back-end for many years. The 
issue in that post is presumably in the homebrew version of SSL.

Cheers,
Simon


> It is not a bug in any library, it is just that macOS does not support
> fork() without an immediate exec().
> 
> In general, any code that calls the macOS system libraries might
> crash. (Except for CoreFoundation, which seems to be fine, but AFAIR
> there is no guarantee for that, either.)
> 
> You get crashes in the terminal as well, without multithreading. E.g.
> the keyring package links against the Security library on macOS, so you
> get:
> 
> ❯ R --vanilla -q
>> .libPaths("~/R/3.6")
>> keyring::key_list()[1:2,]
>service  username
> 1CommCenter kEntitlementsUniqueIDCacheKey
> 2   ids   identity-rsa-public-key
>> parallel::mclapply(1:10, function(i) keyring::key_list()[1:2,])
> 
> *** caught segfault ***
> address 0x110, cause 'memory not mapped'
> 
> *** caught segfault ***
> address 0x110, cause 'memory not mapped'
> 
> AFAICT only Apple can do anything about this, and they won't.
> 
> Gabor
> 
>> That's exactly why I was proposing a more general solution where you can 
>> simply define a function in user-space that will issue a warning or stop on 
>> fork, it doesn't have to be part of core R, there are other packages that 
>> use fork() as well, so what I proposed is much safer than hacking the 
>> parallel package.
>> 
>> Cheers,
>> Simon
>> 
>> 
>> 
>>> On Jan 10, 2020, at 10:58 AM, Henrik Bengtsson  
>>> wrote:
>>> 
>>> The RStudio GUI was just one example.  AFAIK, and please correct me if
>>> I'm wrong, another example is where multi-threaded code is used in
>>> forked processing and that's sometimes unstable.  Yet another, which
>>> might be multi-thread related or not, is
>>> https://stat.ethz.ch/pipermail/r-devel/2018-September/076845.html:
>>> 
>>> res <- parallel::mclapply(urls, function(url) {
>>> download.file(url, basename(url))
>>> })
>>> 
>>> That was reported to fail on macOS with the default method="libcurl"
>>> but not for method="curl" or method="wget".
>>> 
>>> Further documentation is needed and would help but I don't believe
>>> it's sufficient to solve everyday problems.  The argument for
>>> introducing an option/env var to disable forking is to give the end
>>> user a quick workaround for newly introduced bugs.  Neither the
>>> developer nor the end user has full control of the R package stack,
>>> which is always in flux.  For instance, above mclapply() code might
>>> have been in a package on CRAN and then all of a sudden
>>> method="libcurl" became the new default in base R.  The above
>>> mclapply() code is now buggy on macOS, and not necessarily caught by
>>> CRAN checks.  The package developer might not notice this because they
>>> are on Linux or Windows.  It can take a very long time before this
>>> problem is even noticed and even further before it is tracked down and
>>> fixed.   Similarly, as more and more code turns to native code and it
>>> becomes easier and easier to implement multi-threading, more and more
>>> of these bugs across package dependencies risk sneaking in the
>>> backdoor wherever forked processing is in place.
>>> 
>>> For the end user, but also higher-up upstream package developers, the
>>> quickest workaround would be to disable forking.  If you're conservative,
>>> you could even disable it for all of your R processing.  Being able to
>>> quickly disable forking will also provide a mechanism for quickly
>>> testing the hypothesis that forking is the underlying problem, i.e.
>>> "Please retry with options(fork.allowed = FALSE)" will become handy
>>> for troubleshooting.
>>> 
>>> /Henrik
&

Re: [Rd] SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()

2020-01-10 Thread Simon Urbanek



> On Jan 10, 2020, at 3:10 PM, Gábor Csárdi  wrote:
> 
> On Fri, Jan 10, 2020 at 7:23 PM Simon Urbanek
>  wrote:
>> 
>> Henrik,
>> 
>> the example from the post works just fine in CRAN R for me - the post was 
>> about homebrew build so it's conceivably a bug in their libraries.
> 
> I think it works now, because Apple switched to a different SSL
> library for libcurl. It usually crashes or fails on older macOS
> versions, with the CRAN build of R as well.
> 
> It is not a bug in any library, it is just that macOS does not support
> fork() without an immediate exec().
> 
> In general, any code that calls the macOS system libraries might
> crash. (Except for CoreFoundation, which seems to be fine, but AFAIR
> there is no guarantee for that, either.)
> 

That is not true, either. macOS itself is fork-safe (it is POSIX-certified 
after all), but libraries may or may not be. The rules are pretty clear - fork() 
shares open descriptors and only inherits the main thread (see the POSIX 
documentation for pthread_atfork() - it illustrates the issues nicely). So as a 
user of APIs it may be your responsibility to make sure things are handled 
properly - again, that's what pthread_atfork() is for. Most libraries don't 
allow duplicated fds or have rules about thread safety, so it is your 
responsibility in the package to abide by those rules if you want it to 
function after forking. Some libraries don't allow forking at all, e.g., JVMs 
cannot be forked (because they are too complex to make them fork-safe). In 
general, you cannot assume that (non-R) code is fork-safe unless it has been 
designed to be. That's why mcparallel() should only be used for pure R code 
(and even that is with I/O limitations) and C code that is explicitly 
fork-safe. As I said, using mc* functions explicitly says that you are ok with 
forking, so if you run code that is not fork-safe it is clearly a user 
error.

That's exactly why we have the long "Warning" section in the documentation. If 
you have suggestions for its improvements, please feel free to supply patches.

Cheers,
Simon


> You get crashes in the terminal as well, without multithreading. E.g.
> the keyring package links against the Security library on macOS, so you
> get:
> 
> ❯ R --vanilla -q
>> .libPaths("~/R/3.6")
>> keyring::key_list()[1:2,]
>service  username
> 1CommCenter kEntitlementsUniqueIDCacheKey
> 2   ids   identity-rsa-public-key
>> parallel::mclapply(1:10, function(i) keyring::key_list()[1:2,])
> 
> *** caught segfault ***
> address 0x110, cause 'memory not mapped'
> 
> *** caught segfault ***
> address 0x110, cause 'memory not mapped'
> 
> AFAICT only Apple can do anything about this, and they won't.
> 
> Gabor
> 
>> That's exactly why I was proposing a more general solution where you can 
>> simply define a function in user-space that will issue a warning or stop on 
>> fork, it doesn't have to be part of core R, there are other packages that 
>> use fork() as well, so what I proposed is much safer than hacking the 
>> parallel package.
>> 
>> Cheers,
>> Simon
>> 
>> 
>> 
>>> On Jan 10, 2020, at 10:58 AM, Henrik Bengtsson  
>>> wrote:
>>> 
>>> The RStudio GUI was just one example.  AFAIK, and please correct me if
>>> I'm wrong, another example is where multi-threaded code is used in
>>> forked processing and that's sometimes unstable.  Yet another, which
>>> might be multi-thread related or not, is
>>> https://stat.ethz.ch/pipermail/r-devel/2018-September/076845.html:
>>> 
>>> res <- parallel::mclapply(urls, function(url) {
>>> download.file(url, basename(url))
>>> })
>>> 
>>> That was reported to fail on macOS with the default method="libcurl"
>>> but not for method="curl" or method="wget".
>>> 
>>> Further documentation is needed and would help but I don't believe
>>> it's sufficient to solve everyday problems.  The argument for
>>> introducing an option/env var to disable forking is to give the end
>>> user a quick workaround for newly introduced bugs.  Neither the
>>> developer nor the end user has full control of the R package stack,
>>> which is always in flux.  For instance, above mclapply() code might
>>> have been in a package on CRAN and then all of a sudden
>>> method="libcurl" became the new default in base R.  The above
>>> mclapply()

Re: [Rd] R CMD INSTALL cannot recognize full path on Windows

2020-03-11 Thread Simon Urbanek
Jiefei,

you did not commit all files into the example package - your example has things 
like RcppExports.cpp as well as additional flags which are not in your GH 
project. I suspect the issue is with the extra flags you're adding - those 
don't come from R. Please make sure you can replicate the issue with the GH 
package you created.

Cheers,
Simon 


* installing *source* package 'testPackage' ...
** using staged installation
** libs

*** arch - i386
echo "test1 is [1] 0.1522111 0.2533619 0.6591809"
test1 is [1] 0.1522111 0.2533619 0.6591809
echo "R_HOME is C:/R/R-3.6.2"
R_HOME is C:/R/R-3.6.2
echo "Fake library" > testPackage.dll
installing to C:/R/R-3.6.2/library/00LOCK-testPackage/00new/testPackage/libs/i386

*** arch - x64
echo "test1 is [1] 0.9271811 0.8040735 0.4739104"
test1 is [1] 0.9271811 0.8040735 0.4739104
echo "R_HOME is C:/R/R-3.6.2"
R_HOME is C:/R/R-3.6.2
echo "Fake library" > testPackage.dll
installing to C:/R/R-3.6.2/library/00LOCK-testPackage/00new/testPackage/libs/x64

** help
No man pages found in package  'testPackage'
*** installing help indices
** building package indices
** testing if installed package can be loaded from temporary location
*** arch - i386
*** arch - x64
** testing if installed package can be loaded from final location
*** arch - i386
*** arch - x64
** testing if installed package keeps a record of temporary installation path
* DONE (testPackage)
Making 'packages.html' ... done

> On 12/03/2020, at 4:33 AM, Wang Jiefei  wrote:
> 
> Thanks a lot for your suggestions. I see what you mean. I have removed all
> unnecessary files and dependencies on https://github.com/Jiefei-Wang/example,
> but still no luck. I've tried to install the package as a user, not admin,
> but I got the same error. Also, I apologize for spamming the mail list. I
> will keep my reply as neat as possible.
> 
> Martin has suggested checking the encoding of the file and locale in the
> session info, so here is this missing information: The makefile is encoded
> in UTF-8, and the locale is:
> 
> [1] LC_COLLATE=English_United States.1252
> [2] LC_CTYPE=English_United States.1252
> [3] LC_MONETARY=English_United States.1252
> [4] LC_NUMERIC=C
> [5] LC_TIME=English_United States.1252
> 
> That is where I am stuck, any help would be appreciated.
> 
> Best,
> Jiefei
> 
> 
> 
> On Wed, Mar 11, 2020 at 9:56 AM Tomas Kalibera 
> wrote:
> 
>> On 3/11/20 2:26 PM, Wang Jiefei wrote:
>> 
>> Thanks, Tomas. I took your suggestion and changed the make file to
>> 
>> test1:=$(shell $(R_HOME)/bin/R --slave -e 'runif(3)')
>> 
>> all: testPackage.dll
>>echo "test1 is $(test1)"
>>echo "R_HOME is $(R_HOME)"
>> 
>> However, R CMD INSTALL still gives me the same error:
>> 
>>> R CMD INSTALL testPackage_1.0.tar.gz* installing to library 'C:/Program
>> Files/R/R-devel/library'
>> * installing *source* package 'testPackage' ...
>> ** using staged installation
>> ** libs
>> 
>> *** arch - i386
>> The filename, directory name, or volume label syntax is incorrect.
>> c:/Rtools/mingw_32/bin/g++ -std=gnu++11  -I"C:/PROGRA~1/R/R-devel/include"
>> -DNDEBUG  -I'C:/Program Files/R/R-devel/library/Rcpp/include'
>> -I"C:/projects/BUILD/R-source-win32/extsoft/include" -O2 -Wall
>> -mfpmath=sse -msse2 -c RcppExports.cpp -o RcppExports.o
>> c:/Rtools/mingw_32/bin/g++ -std=gnu++11  -I"C:/PROGRA~1/R/R-devel/include"
>> -DNDEBUG  -I'C:/Program Files/R/R-devel/library/Rcpp/include'
>> -I"C:/projects/BUILD/R-source-win32/extsoft/include" -O2 -Wall
>> -mfpmath=sse -msse2 -c example.cpp -o example.o
>> c:/Rtools/mingw_32/bin/g++ -std=gnu++11 -shared -s -static-libgcc -o
>> testPackage.dll tmp.def RcppExports.o example.o
>> -LC:/projects/BUILD/R-source-win32/extsoft/lib/i386
>> -LC:/projects/BUILD/R-source-win32/extsoft/lib
>> -LC:/PROGRA~1/R/R-devel/bin/i386 -lR
>> echo "test1 is "
>> test1 is
>> echo "R_HOME is C:/PROGRA~1/R/R-devel"
>> installing to C:/Program
>> Files/R/R-devel/library/00LOCK-testPackage/00new/testPackage/libs/i386
>> 
>> 
>> I have no idea how to make the example even more minimal for there is
>> literally nothing in the package now. Like you said if R just sets R_HOME
>> and runs "make", I do not understand why it cannot find R in this case for
>> R_HOME seems correct to me. I think there are some other things behind R
>> CMD INSTALL but my poor knowledge does not allow me to see them...Any help
>> will be appreciated.
>> 
>> Please let's not spam the whole list with this any more - this is also why
>> I didn't add R-devel to cc originally. The makefile may be minimal, but the
>> example package is not - you have Rcpp dependency there, two C source
>> files, some R Studio specific thing (an .Rproj file at least). Maybe it is
>> not related, but if you want other to help you, it would be nice to spend
>> some of your time reducing it anyway.
>> 
>> That test1 is empty means that executing R has failed. You need to find
>> out why.
>> 
>> I see that you are installing into C:/Program Files/R/R-devel/library.

Re: [Rd] pipe(): input to, and output from, a single process

2020-03-16 Thread Simon Urbanek
FWIW if you're on unix, you can use named pipes (fifos) for that:

> system("mkfifo my.output")
> p = pipe("sed -l s:hello:oops: > my.output", "w")
> i = file("my.output", "r", blocking=FALSE, raw=TRUE)
> writeLines("hello!\n", p)
> flush(p)
> readLines(i, 1)
[1] "oops!"

Cheers,
Simon



> On 14/03/2020, at 6:26 AM, Greg Minshall  wrote:
> 
> hi.  i'd like to instantiate sed(1), send it some input, and retrieve
> its output, all via pipes (rather than an intermediate file).
> 
> my sense from pipe and looking at the sources (sys-unix.c) is that it is
> not possible.  is that true?  are there any thoughts of providing such a
> facility?
> 
> cheers, Greg
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
>

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] help with rchk warnings on Rf_eval(Rf_lang2(...))

2020-03-23 Thread Simon Urbanek
Ben,

yes, you have to store the result into a variable, then unprotect, then return.
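Spelled out against R's C API (an illustrative sketch assuming fun, arg and rho are already protected by the caller - not the actual lme4 code):

```c
#include <Rinternals.h>

/* Illustrative only: protect the call, protect the result, then
   UNPROTECT before returning. */
SEXP call_fun(SEXP fun, SEXP arg, SEXP rho)
{
    SEXP call = PROTECT(Rf_lang2(fun, arg));  /* keep the call alive ... */
    SEXP res  = PROTECT(Rf_eval(call, rho));  /* ... while eval() runs   */
    UNPROTECT(2);
    return res;
}
```

The result is safe to return right after UNPROTECT because the caller receives it immediately; doing further allocations after the UNPROTECT would reintroduce the problem.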

Cheers,
S


> On 24/03/2020, at 10:07 AM, Ben Bolker  wrote:
> 
> 
> Thanks, that's really useful.  One more question for you, or someone
> else here:
> 
> const ArrayXd glmLink::linkFun(const ArrayXd& mu) const {
>return as(::Rf_eval(::Rf_lang2(as(d_linkFun),
> 
> as(Rcpp::NumericVector(mu.data(),
> 
> mu.data() + mu.size()))
>   ), d_rho);
>}
> 
> 
> I guess I need that to read
> PROTECT(::Rf_eval(PROTECT(::Rf_lang2(...),...) , but as written it
> doesn't seem I have anywhere to squeeze in an UNPROTECT(2).  Do I need
> to define a temporary variable so I can UNPROTECT(2) before I return the
> value?
> 
> Or is there a way I can use Shield() since this an Rcpp-based project
> anyway?
> 
>  Sorry for all the very basic questions, but I'm flying nearly blind
> here ...
> 
>  cheers
>   Ben Bolker
> 
> 
> 
> On 2020-03-23 4:01 p.m., Tomas Kalibera wrote:
>> On 3/23/20 8:39 PM, Ben Bolker wrote:
>>> Dear r-devel folks,
>>> 
>>>[if this is more appropriate for r-pkg-devel please let me know and
>>> I'll repost it over there ...]
>>> 
>>> I'm writing to ask for help with some R/C++ integration idioms that are
>>> used in a package I'm maintaining, that are unfamilar to me, and that
>>> are now being flagged as problematic by Tomas Kalibera's 'rchk'
>>> machinery (https://github.com/kalibera/rchk); results are here
>>> https://raw.githubusercontent.com/kalibera/cran-checks/master/rchk/results/lme4.out
>>> 
>>> 
>>> The problem is with constructions like
>>> 
>>> ::Rf_eval(::Rf_lang2(fun, arg), d_rho)
>>> 
>>> I *think* this means "construct a two-element pairlist from fun and arg,
>>> then evaluate it within expression d_rho"
>>> 
>>> This leads to warnings like
>>> 
>>> "calling allocating function Rf_eval with argument allocated using
>>> Rf_lang2"
>>> 
>>> Is this a false positive or ... ? Can anyone help interpret this?
>> This is a true error. You need to protect the argument of eval() before
>> calling eval, otherwise eval() could destroy it before using it. This is
>> a common rule: whenever passing an argument to a function, that argument
>> must be protected (directly or indirectly). Rchk tries to be smart and
>> doesn't report a warning when it can be sure that in that particular
>> case, for that particular function, it is safe. This is easy to fix,
>> just protect the result of lang2() before the call and unprotect (some
>> time) after.
>>> Not sure why this idiom was used in the first place: speed? (e.g., see
>>> https://stat.ethz.ch/pipermail/r-devel/2019-June/078020.html ) Should I
>>> be rewriting to avoid Rf_eval entirely in favor of using a Function?
>>> (i.e., as commented in
>>> https://stackoverflow.com/questions/37845012/rcpp-function-slower-than-rf-eval
>>> 
>>> : "Also, calling Rf_eval() directly from a C++ context is dangerous as R
>>> errors (ie, C longjmps) will bypass the destructors of C++ objects and
>>> leak memory / cause undefined behavior in general. Rcpp::Function tries
>>> to make sure that doesn't happen.")
>> 
>> Yes, eval (as well as lang2) can throw an error, this error has to be
>> caught via R API and handled (e.g. by throwing as exception or something
>> else, indeed that exception then needs to be caught and possibly
>> converted back when leaving again to C stack frames). An R/C API you can
>> use here is R_UnwindProtect. This is of course a bit of a pain, and one
>> does not have to worry when programming in plain C.
>> 
>> I suppose Rcpp provides some wrapper around R_UnwindProtect, that would
>> be a question for Rcpp experts/maintainers.
>> 
>> Best
>> Tomas
>> 
>>> 
>>>   Any tips, corrections, pointers to further documentation, etc. would be
>>> most welcome ... Web searching for this stuff hasn't gotten me very far,
>>> and it seems to be deeper than most of the introductory material I can
>>> find (including the Rcpp vignettes) ...
>>> 
>>>cheers
>>> Ben Bolker
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
>> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Rebuilding and re-checking of downstream dependencies on CRAN Mac build machines

2020-03-26 Thread Simon Urbanek
Winston,

the Mac CRAN build builds a package only if either is true:
1) the package has not passed checks
2) there is a new version of the package since last successful build+check

The old build machine doesn't have the capacity to do full re-builds (it would 
take over 24h - currently the nightly build of packages takes 16-22 hours), but 
we're currently building a new setup for R 4.0.0 on new hardware and as a part 
of it we are planning to setup a "mac-builder" similar to what is currently 
available for Windows.

That said, I have run httpuv by hand on the CRAN build machine (against Rcpp 
1.0.4) and I saw no issues. I have seen the discussion on Rcpp, but so far no 
one actually posted details of what is breaking (nor do your links include any 
actual details on this). I'd love to help, but the lack of a useful report 
makes this impossible. If you have any actual leads, please post them. The CRAN 
machine uses the tools that are available on CRAN: 
https://cran.r-project.org/bin/macosx/tools/ (clang-7 and gfortran-6.1 for 
3.6.x)

Cheers,
Simon


> On 27/03/2020, at 7:38 AM, Winston Chang  wrote:
> 
> I have two questions about the CRAN machines that build binary
> packages for Mac. When a new version of a package is released,
>  (A) Do the downstream dependencies get re-checked?
>  (B) Do the downstream dependencies get re-built?
> 
> I have heard (but do not know for sure) that the answer to (A) is no,
> the downstream dependencies do not get rechecked.
> 
> From publicly available information on the CRAN web server, it looks
> like the answer to (B) is also no, the downstream dependencies do not
> get rebuilt. Looking at
> https://www.r-project.org/nosvn/R.check/r-release-osx-x86_64/, I see
> the following dates for these binary packages:
> 
> - Rcpp_1.0.4.tgz: 2020-03-18
> - httpuv_1.5.2.tgz: 2019-09-12
> - dplyr_0.8.5.tgz: 2020-03-08
> 
> Rcpp was released recently, and httpuv and dplyr (which are downstream
> dependencies of Rcpp) have older dates, which indicates that these
> binary packages were not rebuilt when Rcpp was released.
> 
> In my particular case, I'm interested in the httpuv package (which I
> maintain). I and several others have not been able to get the CRAN
> version of httpuv to compile using the CRAN version of Rcpp on Mac.
> (It seems to compile fine on other platforms.) I have heard from
> maintainers of other Rcpp-dependent packages that they also can't get
> their packages to compile on Mac, using both the default Mac compiler
> toolchain and the CRAN-recommended toolchain, which uses clang 7.
> 
> For more technical details about the cause of breakage, see:
> https://github.com/RcppCore/Rcpp/issues/1060
> https://github.com/rstudio/httpuv/issues/260
> 
> If the CRAN Mac build machine is indeed able to build httpuv against
> the current version of Rcpp, it would be really helpful to have more
> information about the system configuration. If it is not able to
> rebuild httpuv and other packages against Rcpp, then this is a
> problem. Among other things, it prevents people from building their
> packages from source using CRAN versions of packages, and it also
> means that none of these packages can release a new version, because
> the binaries can't be built on Mac.
> 
> -Winston
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] status of Java & rJava?

2020-03-28 Thread Simon Urbanek
Spencer,

you could argue that Java is dead since Oracle effectively killed it by 
removing all public downloads, but if you manage to get hold of a Java 
installation then it works just fine with R. To my best knowledge there has 
never been an issue if you installed rJava from source. macOS Catalina has made 
binary distributions impossible due to additional restrictions on run-time, but 
even that has now been solved with the release of rJava 0.9-12, so please make 
sure you use the latest rJava. In most cases that I have seen issues were 
caused by incorrect configuration (setting JAVA_HOME incorrectly [do NOT set it 
unless you know what you're doing!], not installing Java for the same 
architecture as R etc.). If you have any issues feel free to report them. rJava 
0.9-12 has quite a few changes that try to detect user errors better and report 
them, so I strongly suggest that users upgrade.

Cheers,
Simon


> On 29/03/2020, at 9:18 AM, Spencer Graves  wrote:
> 
> Hello, All:
> 
> 
>   Is Java being deprecated for R?
> 
> 
>   I ask, because I've been unable to get rJava 0.9-11 to work under 
> either macOS 10.15 or Windows 10, and I can't get rJava 0.9-12 to install -- 
> and my Ecfun package uses it:   I can't get "R CMD build Ecfun" to work on my 
> Mac nor "R CMD check Ecfun_0.2-4" under Windows.  Travis CI builds 
> "https://github.com/sbgraves237/Ecfun"; just fine.
> 
> 
>   The rJava maintainer, Simon Urbanek, has kindly responded to two of my 
> three emails on this since 2020-03-20, but I've so far been unable to 
> translate his suggestions into fixes for these problems.
> 
> 
>   Should I remove rJava from Ecfun and see what breaks, then see if I can 
> work around that?  Should I provide the error messages I get for rJava from 
> "update.packages()" and / or library(rJava) on both machines, with 
> sessionInfo() to this list or to Stack Exchange or Stack Overflow?
> 
> 
>   Since I'm getting so many problems with rJava on under both macOS and 
> Windows 10, that suggests to me that potential users could have similar 
> problems, and I should try to remove rJava from Ecfun.
> 
> 
>   What do you think?
>   Thanks,
>   Spencer Graves
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] [BULK] Re: status of Java & rJava?

2020-03-28 Thread Simon Urbanek
pQbnYkA/R.INSTALL8ec5478248a/rJava/src/jvm-w32'
> c:/Rtools/mingw_64/bin/dlltool --as c:/Rtools/mingw_64/bin/as --input-def 
> jvm64.def --kill-at --dllname jvm.dll --output-lib libjvm.dll.a
> c:/Rtools/mingw_64/bin/gcc  -O2 -c -o findjava.o findjava.c
> c:/Rtools/mingw_64/bin/gcc  -s -o findjava.exe findjava.o
> make: Leaving directory 
> '/Users/spenc/AppData/Local/Temp/RtmpQbnYkA/R.INSTALL8ec5478248a/rJava/src/jvm-w32'
> Find Java...
>   JAVA_HOME=C:/PROGRA~1/Java/JRE18~1.0_2
> === Building JRI ===
>   JAVA_HOME=C:/PROGRA~1/Java/JRE18~1.0_2
>   R_HOME=C:/PROGRA~1/R/R-36~1.3
> JDK has no javah.exe - using javac -h . instead
> Creating Makefiles ...
> Configuration done.
> make -C src JRI.jar
> make[1]: Entering directory 
> '/Users/spenc/AppData/Local/Temp/RtmpQbnYkA/R.INSTALL8ec5478248a/rJava/jri/src'
> C:/PROGRA~1/Java/JRE18~1.0_2/bin/javac -h . -d . ../RList.java ../RBool.java 
> ../RVector.java ../RMainLoopCallbacks.java ../RConsoleOutputStream.java 
> ../Mutex.java ../Rengine.java ../REXP.java ../RFactor.java 
> ../package-info.java
> sh: C:/PROGRA~1/Java/JRE18~1.0_2/bin/javac: No such file or directory
> make[1]: *** [Makefile.all:41: org/rosuda/JRI/Rengine.class] Error 127
> make[1]: Leaving directory 
> '/Users/spenc/AppData/Local/Temp/RtmpQbnYkA/R.INSTALL8ec5478248a/rJava/jri/src'
> make: *** [Makefile.all:19: src/JRI.jar] Error 2
>  WARNING: JRI could NOT be built
> Set IGNORE=1 if you want to build rJava anyway.
> ERROR: configuration failed for package 'rJava'
> * removing 'C:/Program Files/R/R-3.6.3/library/rJava'
> * restoring previous 'C:/Program Files/R/R-3.6.3/library/rJava'
> 
> The downloaded source packages are in
> 'C:\Users\spenc\AppData\Local\Temp\RtmpsDQIkn\downloaded_packages'
> Warning message:
> In install.packages(update[instlib == l, "Package"], l, repos = repos,  :
>   installation of package 'rJava' had non-zero exit status
> 
> > sessionInfo()
> R version 3.6.3 (2020-02-29)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
> Running under: Windows 10 x64 (build 18362)
> 
> Matrix products: default
> 
> locale:
> [1] LC_COLLATE=English_United States.1252
> [2] LC_CTYPE=English_United States.1252
> [3] LC_MONETARY=English_United States.1252
> [4] LC_NUMERIC=C
> [5] LC_TIME=English_United States.1252
> 
> attached base packages:
> [1] stats graphics  grDevices utils
> [5] datasets  methods   base
> 
> loaded via a namespace (and not attached):
> [1] compiler_3.6.3 tools_3.6.3
> 
> 
> On 2020-03-28 22:07, Simon Urbanek wrote:
>> Spencer,
>> 
>> you could argue that Java is dead since Oracle effectively killed it by 
>> removing all public downloads, but if you manage to get hold of a Java 
>> installation then it works just fine with R. To my best knowledge there has 
>> never been an issue if you installed rJava from source. macOS Catalina has 
>> made binary distributions impossible due to additional restrictions on 
>> run-time, but even that has now been solved with the release of rJava 
>> 0.9-12, so please make sure you use the latest rJava. In most cases that I 
>> have seen issues were caused by incorrect configuration (setting JAVA_HOME 
>> incorrectly [do NOT set it unless you know what you're doing!], not 
>> installing Java for the same architecture as R etc.). If you have any issues 
>> feel free to report them. rJava 0.9-12 has quite a few changes that try to 
>> detect user errors better and report them, so I strongly suggest users 
>> upgrade.
>> 
>> Cheers,
>> Simon
>> 
>> 
>>> On 29/03/2020, at 9:18 AM, Spencer Graves  
>>> wrote:
>>> 
>>> Hello, All:
>>> 
>>> 
>>>   Is Java being deprecated for R?
>>> 
>>> 
>>>   I ask, because I've been unable to get rJava 0.9-11 to work under 
>>> either macOS 10.15 or Windows 10, and I can't get rJava 0.9-12 to install 
>>> -- and my Ecfun package uses it:   I can't get "R CMD build Ecfun" to work 
>>> on my Mac nor "R CMD check Ecfun_0.2-4" under Windows.  Travis CI builds 
>>> "https://github.com/sbgraves237/Ecfun"; just fine.
>>> 
>>> 
>>>   The rJava maintainer, Simon Urbanek, has kindly responded to two of 
>>> my three emails on this since 2020-03-20, but I've so far been unable to 
>>> translate his suggestions into fixes for these problems.
>>> 
>>> 
>>>   Should I remove rJava from Ecfun and see what breaks, then see if I 
>>> can work around that?  Should I provide the error messages I get for rJava 
>>> from "update.packages()" and / or library(rJava) on both machines, with 
>>> sessionInfo() to this list or to Stack Exchange or Stack Overflow?
>>> 
>>> 
>>>   Since I'm getting so many problems with rJava on under both macOS and 
>>> Windows 10, that suggests to me that potential users could have similar 
>>> problems, and I should try to remove rJava from Ecfun.
>>> 
>>> 
>>>   What do you think?
>>>   Thanks,
>>>   Spencer Graves
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggestion: "." in [lsv]apply()

2020-04-16 Thread Simon Urbanek
Serguei,


> On 17/04/2020, at 2:24 AM, Sokol Serguei  wrote:
> 
> Hi,
> 
> I would like to make a suggestion for a small syntactic modification of FUN 
> argument in the family of functions [lsv]apply(). The idea is to allow 
> one-liner expressions without typing "function(item) {...}" to surround them. 
> The argument to the anonymous function is simply referred as ".". Let take an 
> example. With this new feature, the following call
> 
> sapply(split(mtcars, mtcars$cyl), function(d) summary(lm(mpg ~ wt, 
> d))$r.squared)
> #4 6 8
> #0.5086326 0.4645102 0.4229655
> 
> 
> could be rewritten as
> 
> sapply(split(mtcars, mtcars$cyl), summary(lm(mpg ~ wt, .))$r.squared)
> 
> "Not a big saving in typing" you can say but multiplied by the number of 
> [lsv]apply usage and a neater look, I think, the idea merits to be considered.


It's not in any way "neater", not only is it less readable, it's just plain 
wrong. What if the expression returned a function? How do you know that you 
don't want to apply the result of the call? For the same reason the 
implementation below won't work - very often you just pass a symbol that 
evaluates to a function, and always an expression that returns a function, and 
there is no way to distinguish that from your new proposed syntax. When you 
feel compelled to use substitute() you should hear alarm bells that something 
is wrong ;).

You can certainly write a new function that uses a different syntax (and I'm 
sure someone has already done that in the package space), but what you propose 
is incompatible with *apply in R (and very much not R syntax).

Cheers,
Simon
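
To make the ambiguity concrete, here is a short sketch using the wsapply() 
proposed in the quoted message below; the parenthesized-function example is 
hypothetical:

```r
# wsapply() as proposed earlier in this thread, reproduced for illustration.
wsapply <- function(l, fun, ...) {
  s <- substitute(fun)
  if (is.name(s) || (is.call(s) && s[[1]] == as.name("function"))) {
    sapply(l, fun, ...)                               # legacy path
  } else {
    sapply(l, function(d) eval(s, list(. = d)), ...)  # dot-expression path
  }
}

f <- function(x) x + 1
sapply(1:3, (f))   # valid base R: the parentheses are harmless, gives 2 3 4
wsapply(1:3, (f))  # '(f)' parses as a call, so wsapply takes the dot path
                   # and returns a list of functions instead of 2 3 4
```

The parentheses alone flip wsapply() into the dot-expression branch while base 
sapply() is unaffected, which is exactly the incompatibility described above.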


> To illustrate a possible implementation, I propose a wrapper example for 
> sapply():
> 
> wsapply=function(l, fun, ...) {
> s=substitute(fun)
> if (is.name(s) || is.call(s) && s[[1]]==as.name("function")) {
> sapply(l, fun, ...) # legacy call
> } else {
> sapply(l, function(d) eval(s, list(.=d)), ...)
> }
> }
> 
> Now, we can do:
> 
> wsapply(split(mtcars, mtcars$cyl), summary(lm(mpg ~ wt, .))$r.squared)
> 
> or, traditional way:
> 
> wsapply(split(mtcars, mtcars$cyl), function(d) summary(lm(mpg ~ wt, 
> d))$r.squared)
> 
> the both work.
> 
> How do you feel about that?
> 
> Best,
> Serguei.
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] suggestion: "." in [lsv]apply()

2020-04-16 Thread Simon Urbanek
Serguei,

the main problem that I was pointing out is is that there is no way you can 
introduce the new syntax without breaking the old one. The expression is 
evaluated to obtain a function, so by definition using anything that results in 
a valid expression for your syntax will break. E.g., using sapply(x, (foo)) is 
completely valid so you can't just change the evaluation of the expression to 
something different (which is what you're doing). As people were pointing out 
there are many ways to do this if you change the syntax.

I'm not arguing against the principle, I'm arguing about your particular 
proposal as it is inconsistent and not general. Personally, I find the current 
syntax much clearer and readable (defining anything by convention like . being 
the function variable seems arbitrary and "dirty" to me), but if you wanted to 
define a shorter syntax, you could use something like x ~> i + x. That said, I 
really don't see the value of not using function(x) [especially these days when 
people are arguing for long variable names with the justification that IDEs do 
all the work anyway], but as I said, my argument was against the actual 
proposal, not general ideas about syntax improvement.

Cheers,
Simon
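
One way to get a shorter syntax without the ambiguity, as a sketch: dispatch on 
a one-sided formula instead of on the shape of the expression. fsapply() is a 
hypothetical name for illustration, not an actual proposal from this thread:

```r
# Formula-based shorthand: a formula is an ordinary value, so no substitute()
# trickery is needed and plain functions keep working unchanged.
fsapply <- function(l, fun, ...) {
  if (inherits(fun, "formula")) {
    expr <- fun[[2]]            # right-hand side, e.g. quote(. + 1)
    env  <- environment(fun)    # where the formula was created
    fun  <- function(.) eval(expr, list(. = .), env)
  }
  sapply(l, fun, ...)
}

fsapply(1:3, ~ . + 1)            # 2 3 4
fsapply(1:3, function(x) x * 2)  # 2 4 6
```

Because the two argument kinds are distinguished by class rather than by 
unevaluated syntax, an expression that happens to return a function can never 
be confused with the shorthand.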


> On 17/04/2020, at 3:53 AM, Sokol Serguei  wrote:
> 
> Simon,
> 
> Thanks for replying. In what follows I won't try to argue (I understood that 
> you find this a bad idea) but I would like to make clearer some of your point 
> for me (and may be for others).
> 
> Le 16/04/2020 à 16:48, Simon Urbanek a écrit :
>> Serguei,
>>> On 17/04/2020, at 2:24 AM, Sokol Serguei  wrote: 
>>> Hi, I would like to make a suggestion for a small syntactic modification of 
>>> FUN argument in the family of functions [lsv]apply(). The idea is to allow 
>>> one-liner expressions without typing "function(item) {...}" to surround 
>>> them. The argument to the anonymous function is simply referred as ".". Let 
>>> take an example. With this new feature, the following call 
>>> sapply(split(mtcars, mtcars$cyl), function(d) summary(lm(mpg ~ wt, 
>>> d))$r.squared) # 4 6 8 #0.5086326 0.4645102 0.4229655 could be rewritten as 
>>> sapply(split(mtcars, mtcars$cyl), summary(lm(mpg ~ wt, .))$r.squared) "Not 
>>> a big saving in typing" you can say but multiplied by the number of 
>>> [lsv]apply usage and a neater look, I think, the idea merits to be 
>>> considered. 
>> It's not in any way "neater", not only is it less readable, it's just plain 
>> wrong. What if the expression returned a function?
> do you mean like in
> l=sapply(1:3, function(i) function(x) i+x)
> l[[1]](3)
> # 4
> l[[2]](3)
> # 5
> 
> This is indeed a corner case but a pair of () or {} can keep wsapply() in 
> course:
> l=wsapply(1:3, (function(x) .+x))
> 
> l[[1]](3)
> 
> # 4
> 
> l[[2]](3)
> 
> # 5
>> How do you know that you don't want to apply the result of the call?
> A small example (if it is significantly different from the one above) would 
> be very helpful for me to understand this point.
> 
>> For the same reason the implementation below won't work - very often you 
>> just pass a symbol that evaluates to a function and always en expression 
>> that returns a function and there is no way to distinguish that from your 
>> new proposed syntax.
> Even with () or {} around such "dotted" expression?
> 
> Best,
> Serguei.
> 
>> When you feel compelled to use substitute() you should hear alarm bells that 
>> something is wrong ;). You can certainly write a new function that uses a 
>> different syntax (and I'm sure someone has already done that in the package 
>> space), but what you propose is incompatible with *apply in R (and very much 
>> not R syntax). Cheers, Simon
>>> To illustrate a possible implementation, I propose a wrapper example for 
>>> sapply(): wsapply=function(l, fun, ...) { s=substitute(fun) if (is.name(s) 
>>> || is.call(s) && s[[1]]==as.name("function")) { sapply(l, fun, ...) # 
>>> legacy call } else { sapply(l, function(d) eval(s, list(.=d)), ...) } } 
>>> Now, we can do: wsapply(split(mtcars, mtcars$cyl), summary(lm(mpg ~ wt, 
>>> .))$r.squared) or, traditional way: wsapply(split(mtcars, mtcars$cyl), 
>>> function(d) summary(lm(mpg ~ wt, d))$r.squared) the both work. How do you 
>>> feel about that? Best, Serguei. 
>>> __ R-devel@r-project.org 
>>> mailing list https://stat.ethz.ch/mailman/listinfo/r-devel 
>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R not running under lldb? (osx)

2020-04-21 Thread Simon Urbanek
Tim,

as a security precaution Apple has disabled the ability to debug notarized 
applications*. It means any software distributed on macOS Catalina (and they 
may have retro-actively enabled it for recent updates of Mojave) cannot be run 
in a debugger.

If you want to debug R, you have to use non-release binaries that are not 
notarized and install them by hand, e.g.:

curl -L 
http://mac.r-project.org/high-sierra/R-4.0-branch/x86_64/R-4.0-branch.tar.gz | 
tar fxz - -C /

Of course, this disables the Apple protections and thus is not recommended for 
casual users. 

Cheers,
Simon

* - more technical details: Apple requires notarization of any application that 
will be distributed via an Apple installer. Apple no longer allows installation 
of macOS applications that are not notarized. In order to obtain notarization, 
the application has to be fully signed, has to use hardened run-time and may 
not enable debugging entitlements. One part of the hardened run-time is that no 
debugger is allowed to attach to the application.


> On 22/04/2020, at 8:59 AM, Tim Keitt  wrote:
> 
> I see:
> 
> Tims-Air:~ tkeitt$ R --version
> 
> R version 3.6.3 (2020-02-29) -- "Holding the Windsock"
> 
> Copyright (C) 2020 The R Foundation for Statistical Computing
> 
> Platform: x86_64-apple-darwin15.6.0 (64-bit)
> 
> 
> R is free software and comes with ABSOLUTELY NO WARRANTY.
> 
> You are welcome to redistribute it under the terms of the
> 
> GNU General Public License versions 2 or 3.
> 
> For more information about these matters see
> 
> https://www.gnu.org/licenses/.
> 
> 
> Tims-Air:~ tkeitt$ R -d lldb
> 
> (lldb) target create "/Library/Frameworks/R.framework/Resources/bin/exec/R"
> 
> Current executable set to
> '/Library/Frameworks/R.framework/Resources/bin/exec/R' (x86_64).
> 
> (lldb) run --vanilla
> 
> error: process exited with status -1 (Error 1)
> 
> Never happened before. Is this a known issue?
> 
> Thanks.
> 
> THK
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R not running under lldb? (osx)

2020-04-21 Thread Simon Urbanek
Tim,

sure, makes sense (it's also easier to use ASAN and friends that way). The only 
issue is that it won't work for macOS-specific bugs.

Cheers,
Simon



> On 22/04/2020, at 3:55 PM, Tim Keitt  wrote:
> 
> Thanks Simon. I'll probably just switch to rocker when needing to debug in 
> that case.
> 
> THK
> 
> On Tue, Apr 21, 2020 at 6:51 PM Simon Urbanek  
> wrote:
> Tim,
> 
> as a security precaution Apple has disabled the ability to debug notarized 
> applications*. It means any software distributed on macOS Catalina (and they 
> may have retro-actively enabled it for recent updates of Mojave) cannot be 
> run in a debugger.
> 
> If you want to debug R, you have to use non-release binaries that are not 
> notarized and install them by hand, e.g.:
> 
> curl -L 
> http://mac.r-project.org/high-sierra/R-4.0-branch/x86_64/R-4.0-branch.tar.gz 
> | tar fxz - -C /
> 
> Of course, this disables the Apple protections and thus is not recommended 
> for casual users. 
> 
> Cheers,
> Simon
> 
> * - more technical details: Apple requires notarization of any application 
> that will be distributed via an Apple installer. Apple no longer allows 
> installation of macOS applications that are not notarized. In order to obtain 
> notarization, the application has to be fully signed, has to use hardened 
> run-time and may not enable debugging entitlements. One part of the hardened 
> run-time is that no debugger is allowed to attach to the application.
> 
> 
> > On 22/04/2020, at 8:59 AM, Tim Keitt  wrote:
> > 
> > I see:
> > 
> > Tims-Air:~ tkeitt$ R --version
> > 
> > R version 3.6.3 (2020-02-29) -- "Holding the Windsock"
> > 
> > Copyright (C) 2020 The R Foundation for Statistical Computing
> > 
> > Platform: x86_64-apple-darwin15.6.0 (64-bit)
> > 
> > 
> > R is free software and comes with ABSOLUTELY NO WARRANTY.
> > 
> > You are welcome to redistribute it under the terms of the
> > 
> > GNU General Public License versions 2 or 3.
> > 
> > For more information about these matters see
> > 
> > https://www.gnu.org/licenses/.
> > 
> > 
> > Tims-Air:~ tkeitt$ R -d lldb
> > 
> > (lldb) target create "/Library/Frameworks/R.framework/Resources/bin/exec/R"
> > 
> > Current executable set to
> > '/Library/Frameworks/R.framework/Resources/bin/exec/R' (x86_64).
> > 
> > (lldb) run --vanilla
> > 
> > error: process exited with status -1 (Error 1)
> > 
> > Never happened before. Is this a known issue?
> > 
> > Thanks.
> > 
> > THK
> > 
> >   [[alternative HTML version deleted]]
> > 
> > __
> > R-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-devel
> > 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] mclapply returns NULLs on MacOS when running GAM

2020-04-28 Thread Simon Urbanek
Sorry, the code works perfectly fine for me in R even for 1e6 observations (but 
I was testing with R 4.0.0). Are you using some kind of GUI?

Cheers,
Simon
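
When a forked worker dies, the affected elements come back as NULL (with a 
warning), so a defensive caller can recompute just those elements serially. A 
minimal sketch, with slow_sq() as a stand-in for the real task:

```r
slow_sq <- function(i) i^2                       # stand-in for the real task
# forking is unsupported on Windows, so fall back to one core there
cores <- if (.Platform$OS.type == "windows") 1L else 2L
res <- parallel::mclapply(1:4, slow_sq, mc.cores = cores)
failed <- vapply(res, is.null, logical(1))       # NULLs mark dead workers
if (any(failed))
  res[failed] <- lapply((1:4)[failed], slow_sq)  # serial fallback
```

This does not fix the underlying crash, but it keeps the result complete while 
the cause (here, a GUI that does not tolerate forks) is investigated.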


> On 28/04/2020, at 8:11 PM, Shian Su  wrote:
> 
> Dear R-devel,
> 
> I am experiencing issues with running GAM models using mclapply, it fails to 
> return any values if the data input becomes large. For example here the code 
> runs fine with a df of 100 rows, but fails at 1000.
> 
> library(mgcv)
> library(parallel)
> 
>> df <- data.frame(
> + x = 1:100,
> + y = 1:100
> + )
>> 
>> mclapply(1:2, function(i, df) {
> + fit <- gam(y ~ s(x, bs = "cs"), data = df)
> + },
> + df = df,
> + mc.cores = 2L
> + )
> [[1]]
> 
> Family: gaussian
> Link function: identity
> 
> Formula:
> y ~ s(x, bs = "cs")
> 
> Estimated degrees of freedom:
> 9  total = 10
> 
> GCV score: 0
> 
> [[2]]
> 
> Family: gaussian
> Link function: identity
> 
> Formula:
> y ~ s(x, bs = "cs")
> 
> Estimated degrees of freedom:
> 9  total = 10
> 
> GCV score: 0
> 
>> 
>> 
>> df <- data.frame(
> + x = 1:1000,
> + y = 1:1000
> + )
>> 
>> mclapply(1:2, function(i, df) {
> + fit <- gam(y ~ s(x, bs = "cs"), data = df)
> + },
> + df = df,
> + mc.cores = 2L
> + )
> [[1]]
> NULL
> 
> [[2]]
> NULL
> 
> There is no error message returned, and the code runs perfectly fine in 
> lapply.
> 
> I am on a MacBook 15 (2016) running MacOS 10.14.6 (Mojave) and R version 
> 3.6.2. This bug could not be reproduced on my Ubuntu 19.10 running R 3.6.1.
> 
> Kind regards,
> Shian Su
> 
> Shian Su
> PhD Student, Ritchie Lab 6W, Epigenetics and Development
> Walter & Eliza Hall Institute of Medical Research
> 1G Royal Parade, Parkville VIC 3052, Australia
> 
> 
> ___
> 
> The information in this email is confidential ...{{dropped:8}}

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] mclapply returns NULLs on MacOS when running GAM

2020-04-28 Thread Simon Urbanek
Do NOT use mcparallel() in packages except as a non-default option that the 
user can set, for the reasons Henrik explained. Multicore is intended for HPC 
applications that need to use many cores for computing-heavy jobs, but it does 
not play well with RStudio and, more importantly, you don't know the resources 
available, so only the user can tell you when it's safe to use them. Multi-core 
machines are often shared so using all detected cores is a very bad idea. The 
user should be able to explicitly enable it, but it should not be enabled by 
default.

As for parallelism, it depends heavily on your use-case. Native parallelism is 
preferred (threads, OpenMP, ...) and I assume you're not talking about that as 
that is always the first option. Multicore works well in cases where there is 
no easy native solution and you need to share a lot of data for small results. 
If the data is small, or you need to read it first, then other methods like 
PSOCK may be preferable. In any case, parallelization only makes sense for code 
that you know will take a long time to run.

Cheers,
Simon
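
The advice above can be sketched as package code where parallelism is strictly 
opt-in; the option name 'mypkg.cores' and fit_one() are hypothetical:

```r
# Serial by default; forked parallelism only when the user asks for it
# (and never on Windows, where fork is unavailable).
fit_all <- function(tasks, cores = getOption("mypkg.cores", 1L)) {
  fit_one <- function(x) x^2                      # placeholder for real work
  if (cores > 1L && .Platform$OS.type != "windows") {
    parallel::mclapply(tasks, fit_one, mc.cores = cores)
  } else {
    lapply(tasks, fit_one)                        # safe serial default
  }
}
fit_all(1:3)   # serial unless the user sets options(mypkg.cores = ...)
```

The user opts in explicitly, so the package never grabs cores on a shared 
machine or forks inside a GUI that cannot handle it.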


> On 29/04/2020, at 11:54 AM, Shian Su  wrote:
> 
> Thanks Henrik,
> 
> That clears things up significantly. I did see the warning but failed to 
> include it my initial email. It sounds like an RStudio issue, and it seems 
> like that it’s quite intrinsic to how forks interact with RStudio. Given this 
> code is eventually going to be a part of a package, should I expect it to 
> fail mysteriously in RStudio for my users? Is the best solution here to 
> migrate all my parallelism to PSOCK for the foreseeable future?
> 
> Thanks,
> Shian
> 
>> On 29 Apr 2020, at 2:08 am, Henrik Bengtsson  
>> wrote:
>> 
>> Hi, a few comments below.
>> 
>> First, from my experience and troubleshooting similar reports from
>> others, a returned NULL from parallel::mclapply() is often because the
>> corresponding child process crashed/died. However, when this happens
>> you should see a warning, e.g.
>> 
>>> y <- parallel::mclapply(1:2, FUN = function(x) if (x == 2) quit("no") else 
>>> x)
>> Warning message:
>> In parallel::mclapply(1:2, FUN = function(x) if (x == 2) quit("no") else x) :
>> scheduled core 2 did not deliver a result, all values of the job
>> will be affected
>>> str(y)
>> List of 2
>> $ : int 1
>> $ : NULL
>> 
>> This warning is produces on R 4.0.0 and R 3.6.2 in Linux, but I would
>> assume that warning is also produced on macOS.  It's not clear from
>> you message whether you also got that warning or not.
>> 
>> Second, forked processing, as used by parallel::mclapply(), is advised
>> against when using the RStudio Console [0].  Unfortunately, there's no
>> way to disable forked processing in R [1].  You could add the
>> following to your ~/.Rprofile startup file:
>> 
>> ## Warn when forked processing is used in the RStudio Console
>> if (Sys.getenv("RSTUDIO") == "1" && !nzchar(Sys.getenv("RSTUDIO_TERM"))) {
>> invisible(trace(parallel:::mcfork, tracer =
>> quote(warning("parallel::mcfork() was used. Note that forked
>> processes, e.g. parallel::mclapply(), may be unstable when used from
>> the RStudio Console
>> [https://github.com/rstudio/rstudio/issues/2597#issuecomment-482187011]";,
>> call.=FALSE
>> }
>> 
>> to detect when forked processed is used in the RStudio Console -
>> either by you or by some package code that you use directly or
>> indirectly.  You could even use stop() here if you wanna be
>> conservative.
>> 
>> [0] https://github.com/rstudio/rstudio/issues/2597#issuecomment-482187011
>> [1] https://stat.ethz.ch/pipermail/r-devel/2020-January/078896.html
>> 
>> /Henrik
>> 
>> On Tue, Apr 28, 2020 at 2:39 AM Shian Su  wrote:
>>> 
>>> Yes I am running on Rstudio 1.2.5033. I was also running this code without 
>>> error on Ubuntu in Rstudio. Checking again on the terminal and it does 
>>> indeed work fine even with large data.frames.
>>> 
>>> Any idea as to what interaction between Rstudio and mclapply causes this?
>>> 
>>> Thanks,
>>> Shian
>>> 
>>> On 28 Apr 2020, at 7:29 pm, Simon Urbanek 
>>> mailto:simon.urba...@r-project.org>> wrote:
>>> 
>>> Sorry, the code works perfectly fine for me in R even for 1e6 observations 
>>> (but I was testing with R 4.0.0). Are you using some kind of GUI?
>>> 
>>> Cheers,
>>> Simon
>>> 
>>> 
>>> On 28/04/2020, at 8:11 PM, Shian Su 
>>> mail

Re: [Rd] "not a valid win32 application" with rtools40-x86_65.exe on Windows 10

2020-04-29 Thread Simon Urbanek
Are you missing the 32-bit Java JDK?

Cheers,
S

> On 30/04/2020, at 4:37 PM, Spencer Graves  wrote:
> 
> Hello, All:
> 
> 
>   "00install.out" from "R CMD check Ecfun_0.2-4.tar.gz" includes:
> 
> 
> Error:  package or namespace load failed for 'Ecfun':
>  .onLoad failed in loadNamespace() for 'rJava', details
>   call: inDL(x, as.logical(local), as.logical(now), ...)
>   error:  unable to load shared object 'c:/Program 
> Files/R/R-4.0.0/library/rJava/libs/i386/rJava.dll':
>   LoadLibrary failure: %1 is not a valid Win32 application
> 
> 
>   This was after installing R 4.0.0 and "rtools40-x86_64.exe" under 
> Windows 10 Pro 64-bit.
> 
> 
>   Suggestions?
>   Thanks,
>   Spencer Graves
> 
> 
> sessionInfo()
> R version 4.0.0 (2020-04-24)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
> Running under: Windows 10 x64 (build 18362)
> 
> Matrix products: default
> 
> locale:
> [1] LC_COLLATE=English_United States.1252
> [2] LC_CTYPE=English_United States.1252
> [3] LC_MONETARY=English_United States.1252
> [4] LC_NUMERIC=C
> [5] LC_TIME=English_United States.1252
> 
> attached base packages:
> [1] stats   graphics   grDevices  utils   datasets   methods   base
> 
> loaded via a namespace (and not attached):
> [1] compiler_4.0.0
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] "not a valid win32 application" with rtools40-x86_65.exe on Windows 10

2020-05-02 Thread Simon Urbanek
Spencer,

you shouldn't have anything on the PATH, the location of Java is taken from the 
registry so you only need t have a valid installation of Java. Better don't set 
PATH or JAVA_HOME as it will stop rJava from working if you get it wrong. The 
errors on Windows are confusing, the actual error is shown via GUI as pop-up, 
what they report in the console is not the real error.

Re installation from source - it looks like you either don't have Rtools40 or 
you didn't set PATH properly. If I recall correctly, the new Rtools40 no longer 
sets the PATH (for whatever reason), so you have to do it by hand, and the 
instructions it gives you do not work for the command shell.

Cheers,
Simon
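
A few quick checks along these lines, run from R before attempting the install 
(illustrative only; the exact expected values depend on the machine):

```r
# What rJava will see; on Windows an empty JAVA_HOME is usually what you want,
# since the Java location is then taken from the registry.
Sys.getenv("JAVA_HOME")    # usually best left empty
R.version$arch             # installed Java must match this architecture
nzchar(Sys.which("make"))  # TRUE if Rtools is actually on the PATH
```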



> On 1/05/2020, at 4:51 PM, Spencer Graves  wrote:
> 
> Hi, Jeroen et al.:
> 
> 
> On 2020-04-30 03:15, Jeroen Ooms wrote:
>> On Thu, Apr 30, 2020 at 6:38 AM Spencer Graves
>>  wrote:
>>> Hello, All:
>>> 
>>> 
>>>"00install.out" from "R CMD check Ecfun_0.2-4.tar.gz" includes:
>>> 
>>> 
>>> Error:  package or namespace load failed for 'Ecfun':
>>>   .onLoad failed in loadNamespace() for 'rJava', details
>>>call: inDL(x, as.logical(local), as.logical(now), ...)
>>>error:  unable to load shared object 'c:/Program
>>> Files/R/R-4.0.0/library/rJava/libs/i386/rJava.dll':
>>>LoadLibrary failure: %1 is not a valid Win32 application
>>> 
>> This is an error in loading the rJava package, so it is not related to
>> rtools40, and probably inappropriate for this mailing list.
>> 
>> As Simon suggested, you may have to install the 32-bit Java JDK. See
>> also this faq: 
>> https://github.com/r-windows/docs/blob/master/faq.md#how-to-install-rjava-on-windows
> 
> 
>   In fact I had both 32- and 64-bit Java installed but only the 64-bit 
> was in the path.  I added the 32-bit, but that did not fix the problem.  The 
> last 2.5 lines in the section "How to install rJava on Windows?" to which you 
> referred me reads:
> 
> 
> to build rJava from source, you need the --merge-multiarch flag:
> 
> install.packages('rJava', type = 'source', INSTALL_opts='--merge-multiarch')
> 
> 
>   When I tried that, I got:
> 
> 
> Warning in system("sh ./configure.win") : 'sh' not found
> 
> 
> *** ON THE OTHER HAND:  The error message above says 'c:/Program
> Files/R/R-4.0.0/library/rJava/libs/i386/rJava.dll':
>LoadLibrary failure: %1 is not a valid Win32 application
> 
> 
>  Is "rJava.dll" a valid win32 application?
> 
> 
>   Suggestions?
>   Thanks,
>   Spencer Graves
> 
> 
> p.s.  A similar problem with rJava a month ago was fixed by installed 64-bit 
> Java.  Now with the upgrade to R 4.0.0 and rtools40, this no longer works.
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] defining r audio connections

2020-05-07 Thread Simon Urbanek
The custom connections API was specifically introduced to allow packages to 
create custom connections back in 2013. Its sole purpose is to allow package 
authors to create new connections outside of base R, so I don't see why 
packages using it shouldn't be allowed on CRAN. However, it is solely at CRAN's 
discretion to decide what gets published, so it may be worth raising it with 
the team, ask for the reasons and what it would take for them to accept 
packages using that API.

Cheers,
Simon
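
Whatever the backend, a custom audio connection ultimately has to support 
readBin()/writeBin() round-trips. As a sketch, here is that round-trip against 
a built-in raw connection standing in for the hypothetical ALSA connection:

```r
con <- rawConnection(raw(0), "r+")         # stand-in for an audio sink/source
samples <- c(1L, -1L, 32767L)              # 16-bit PCM-style sample values
writeBin(samples, con, size = 2L)          # write as 2-byte signed integers
seek(con, 0)                               # rewind to the start
back <- readBin(con, "integer", n = 3, size = 2L, signed = TRUE)
close(con)
back   # 1 -1 32767
```

A package connection built on R_new_custom_connection() would plug its own 
read/write callbacks into exactly this interface, so user code like the above 
stays unchanged.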


> On 8/05/2020, at 1:02 AM, Jim Hester  wrote:
> 
> https://github.com/jimhester/archive was not allowed on CRAN when I
> submitted it 3 years ago due to this restriction.
> 
> Being able to write custom connections is a useful feature for a number of
> applications, I would love this policy to be reconsidered.
> 
> On Wed, May 6, 2020 at 10:30 PM Henrik Bengtsson 
> wrote:
> 
>> What's the gist of the problem of making/having this part of the public
>> API? Is it security, is it stability, is it that the current API is under
>> construction, is it a worry about maintenance load for R Core, ...? Do we
>> know why?
>> 
>> It sounds like it's a feature that is  useful. I think we missed out on
>> some great enhancements in the past because of it not being part of the
>> public API.
>> 
>> /Henrik
>> 
>> On Wed, May 6, 2020, 16:26 Martin Morgan  wrote:
>> 
>>> yep, you're right, after some initial clean-up and running with or
>> without
>>> --as-cran R CMD check gives a NOTE
>>> 
>>>  *  checking compiled code
>>>  File ‘socketeer/libs/socketeer.so’:
>>>Found non-API calls to R: ‘R_GetConnection’,
>>>   ‘R_new_custom_connection’
>>> 
>>>  Compiled code should not call non-API entry points in R.
>>> 
>>>  See 'Writing portable packages' in the 'Writing R Extensions' manual.
>>> 
>>> Connections in general seem more useful than ad-hoc functions, though
>>> perhaps for Frederick's use case Duncan's suggestion is sufficient. For
>>> non-CRAN packages I personally would implement a connection.
>>> 
>>> (I mistakenly thought this was a more specialized mailing list; I
>> wouldn't
>>> have posted to R-devel on this topic otherwise)
>>> 
>>> Martin Morgan
>>> 
>>> On 5/6/20, 4:12 PM, "Gábor Csárdi"  wrote:
>>> 
>>>AFAIK that API is not allowed on CRAN. It triggers a NOTE or a
>>>WARNING, and your package will not be published.
>>> 
>>>Gabor
>>> 
>>>On Wed, May 6, 2020 at 9:04 PM Martin Morgan <
>> mtmorgan.b...@gmail.com>
>>> wrote:
 
 The public connection API is defined in
 
 
>>> 
>> https://github.com/wch/r-source/blob/trunk/src/include/R_ext/Connections.h
 
 I'm not sure of a good pedagogic example; people who want to write
>>> their own connections usually want to do so for complicated reasons!
 
 This is my own abandoned attempt
>>> 
>> https://github.com/mtmorgan/socketeer/blob/b0a1448191fe5f79a3f09d1f939e1e235a22cf11/src/connection.c#L169-L192
>>> where connection_local_client() is called from R and _connection_local()
>>> creates and populates the appropriate structure. Probably I have done
>>> things totally wrong (e.g., by not checking the version of the API, as
>>> advised in the header file!)
 
 Martin Morgan
 
 On 5/6/20, 2:26 PM, "R-devel on behalf of Duncan Murdoch" <
>>> r-devel-boun...@r-project.org on behalf of murdoch.dun...@gmail.com>
>>> wrote:
 
On 06/05/2020 1:09 p.m., frede...@ofb.net wrote:
> Dear R Devel,
> 
> Since Linux moved away from using a file-system interface for
>>> audio, I think it is necessary to write special libraries to interface
>> with
>>> audio hardware from various languages on Linux.
> 
> In R, it seems like the appropriate datatype for a
>> `snd_pcm_t`
>>> handle pointing to an open ALSA source or sink would be a "connection".
>>> Connection types are already defined in R for "file", "url", "pipe",
>>> "fifo", "socketConnection", etc.
> 
> Is there a tutorial or an example package where a new type of
>>> connection is defined, so that I can see how to do this properly in a
>>> package?
> 
> I can see from the R source that, for example, `do_gzfile` is
>>> defined in `connections.c` and referenced in `names.c`. However, I
>> thought
>>> I should ask here first in case there is a better place to start, than
>>> trying to copy this code.
> 
> I only want an object that I can use `readBin` and `writeBin`
>>> on, to read and write audio data using e.g. `snd_pcm_writei` which is
>> part
>>> of the `alsa-lib` package.
 
I don't think R supports user-defined connections, but probably
>>> writing
readBin and writeBin equivalents specific to your library
>>> wouldn't be
any harder than creating a connection.  For those, you will
>>> probably
want to work with an "external pointer" (see Writing R
>>> Extensions).
Rcpp probably has support for these if you're working in C++.
 
Duncan Murdoch
 

Re: [Rd] GCC warning

2020-05-22 Thread Simon Urbanek
Adrian,

newer compilers are better at finding bugs - you may want to read the full 
trace of the error: it tells you that you likely have a memory overflow when 
using strncpy() in your package. You should check whether it is right. 
Unfortunately we can’t help you more specifically, because I don't see any link 
to what you submitted so can’t look at the code involved.

Cheers,
Simon



> On May 22, 2020, at 7:25 PM, Adrian Dușa  wrote:
> 
> I am trying to submit a package on CRAN, and everything passes ok on all 
> platforms but Debian, where CRAN responds with an automatic "significant" 
> warning:
> 
> * checking whether package ‘QCA’ can be installed ... [35s/35s] WARNING
> Found the following significant warnings:
>  /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: warning: 
> ‘__builtin_strncpy’ output may be truncated copying 12 bytes from a string of 
> length 79 [-Wstringop-truncation]
> See ‘/srv/hornik/tmp/CRAN/QCA.Rcheck/00install.out’ for details.
> 
> 
> I know the cause of this: using a customized version of some external C 
> library, coupled with  in the Description.
> 
> But I do not know how to get past this warning, since it refers to a builtin 
> GCC function strncpy. As far as I read, this should be solved by a simple GCC 
> upgrade to the newest version, but that is something outside my code base, 
> since GCC resides on the CRAN servers.
> 
> In the meantime, to get the package published, has anyone encountered a 
> similar problem? If so, is there a workaround?
> 
> —
> Adrian Dusa
> University of Bucharest
> Romanian Social Data Archive
> Soseaua Panduri nr. 90-92
> 050663 Bucharest sector 5
> Romania
> https://adriandusa.eu
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] C Interface

2010-06-18 Thread Simon Urbanek

On Jun 18, 2010, at 10:23 AM, michael meyer  wrote:

> Greetings,
> 
> I am trying to call simple C-code from R.
> I am on Windows XP with RTools installed.
> 
> The C-function is
> 
> #include 
> #include 
> #include 
> #include 
> 
> // prevent name mangling
> extern "C" {
> 
> SEXP __cdecl test(SEXP s){
> 
>  SEXP result;
>  PROTECT(result = NEW_NUMERIC(1));
>  double* ptr=NUMERIC_POINTER(result);
>  double t = *REAL(s);
>  double u = t-floor(t)-0.5;
>  if(u>0) *ptr=-1+4*u; else *ptr=-1-4*u;
>  Rprintf("The value is %f", *ptr);
>  UNPROTECT(1);
>  return result;
> }
> 
> };
> 
> It is compiled with
> 
> R CMD SHLIB OrthoFunctions.c
> 
> with flag
> 
> MAKEFLAGS="CC=g++"
> 

That is entirely wrong - g++ is not a C compiler.

Cheers,
Simon


> 
> However when I call this code from R with
> 
> test <- function(t){
>  .Call("test",t)
> }
> dyn.load("./OrthoFunctions.dll")
> test(0)
> dyn.unload("./OrthoFunctions.dll")
> 
> then R crashes.
> 
> If I compile with the default flags (no extern "C", no __cdecl) I get an
> error message about an undefined reference to "__gxx_personality_v0":
> 
> C:\...>R CMD SHLIB OrthoFunctions.c
> C:/Programme/R/R-2.10.1/etc/Makeconf:151: warning: overriding commands for
> target `.c.o'
> C:/Programme/R/R-2.10.1/etc/Makeconf:142: warning: ignoring old commands for
> target `.c.o'
> C:/Programme/R/R-2.10.1/etc/Makeconf:159: warning: overriding commands for
> target `.c.d'
> C:/Programme/R/R-2.10.1/etc/Makeconf:144: warning: ignoring old commands for
> target `.c.d'
> C:/Programme/R/R-2.10.1/etc/Makeconf:169: warning: overriding commands for
> target `.m.o'
> C:/Programme/R/R-2.10.1/etc/Makeconf:162: warning: ignoring old commands for
> target `.m.o'
> g++ -I"C:/Programme/R/R-2.10.1/include"-O2 -Wall  -c
> OrthoFunctions.c -o OrthoFunctions.o
> gcc -shared -s -o OrthoFunctions.dll tmp.def OrthoFunctions.o
> -LC:/Programme/R/R-2.10.1/bin -lR
> OrthoFunctions.o:OrthoFunctions.c:(.eh_frame+0x11): undefined reference to
> `__gxx_personality_v0'
> collect2: ld returned 1 exit status
> 
> 
> 
> I have a vague idea of the issue of calling conventions and was hoping that
> the __cdecl
> specifier would force the appropriate convention.
> I also have Cygwin installed as part of the Python(x,y) distribution but I
> am assuming that
> R CMD SHLIB source.c
> calls the right compiler.
> 
> What could the problem be?
> 
> Many thanks,
> 
> 
> Michael
> 
>[[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] How to debug: Cons memory exhausted

2010-06-24 Thread Simon Urbanek

On Jun 20, 2010, at 11:15 AM, Saptarshi Guha wrote:

> Hello,
> I get an error when reading binary structures from a pipe
> 
> 'Error: cons memory exhausted (limit reached?)' (but R does not crash)
> 
> This is probably due to some bug in my code, but occurs after reading
> about 85K pairs of RAWSXP objects (each < 20 bytes).
> 
> I do not have any explicit calls to malloc/calloc
> 

It has nothing to do with malloc/calloc - it means that the number of nodes (=R 
objects) has reached the limit. Normally the limit is soft so it will adjust up 
to SIZE_MAX (for all practical purposes unlimited on 64-bit machines). However, 
in low-memory conditions it may not be possible to allocate more heap space so 
that is more likely your limit.


> I'm going through my code and have inserted the (brute force) printf
> statements
> my question is: Does anybody have an advice on debugging this particular
> error?
> 

You can set a breakpoint on mem_err_cons. In general it means that you are 
simply running out of memory.

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] accessing underlying code

2010-07-06 Thread Simon Urbanek

On Jul 6, 2010, at 11:04 AM, Hodgess, Erin wrote:

> Dear R Developers:
> 
> Is there a way to look at the underlying code from such items as 
> R_setup_starma, please?
> 

Yes, of course, they are in the R sources (src/library/stats/src/pacf.c) - 
that's the beauty of open source.

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Strange R object

2010-07-09 Thread Simon Urbanek

On Jul 9, 2010, at 12:41 PM, Deepayan Sarkar wrote:

> On Fri, Jul 9, 2010 at 5:25 AM, Peter Dalgaard  wrote:
>> Gabor Grothendieck wrote:
>>> On Fri, Jul 9, 2010 at 5:09 AM, Peter Dalgaard  wrote:
 Gabor Grothendieck wrote:
> I have *** attached *** an RData file containing an R object that
> is acting strangely.
> 
> Try this in a fresh workspace. Do not load zoo or any other package.
> We load the object, zz2, from the attached RData file.  It is just
> the number 1 with the class c("zooreg", "zoo").
> 
> Now create an S3 print routine that simply prints an X when given
> an object of class "zoo".
> 
> If we use print on the object it produces an X but not if we just
> enter it at the console.  Also the object is not identical to its
> dput output.
> 
> How can such an object exist?  What is it about the object that is
> different from structure(1, class = c("zoo", "zooreg")) ?
> 
 There's a bit in the SEXP structure that is supposed to be turned on
 when an object has an S3 class. This is where implicit print looks,
 whereas explicit print looks, er, elsewhere. Notice that
 
> is.object(zz2)
 [1] FALSE
> class(zz2) <- class(zz2)
> zz2
 X
> is.object(zz2)
 [1] TRUE
 
 Whenever the same information is stored in two ways, there is a risk of
 inconsistency, so it is not too strange that you can have an ill-formed
 .Rdata file (if you save zz2 back out, after the above fixup, line 11
 changes from 526 to 782, corresponding to the bit being turned on).
 
 I don't think it is the job of load() to verify object structures, since
 there is no end to that task. Rather, we shouldn't create them in the
 first place, but you give us no clues as to how that object got made.
 
>>> 
>>> This was originally a large object in a program that uses a variety of
>>> packages and it took quite a long time just to narrow it down to the
>>> point where I had an object sufficiently small to post.  Its not even
>>> clear at what point the object goes bad but your class(x) <- class(x)
>>> trick helped a lot and I have now been able to recreate it in a simple
>>> manner.
>>> 
>>> Below we create a new S3 class "X" with an Ops.X and print.X method.
>>> We then create an object x of that class which is just 1 with a class
>>> of "X".  When we multiply 1*x we get the bad object.  1*x and x have
>>> the same dput output but compare as FALSE.  1*x is not printed by
>>> print.X even though it is of class "X" while x is printed by print.X .
>>>  If we assign 1*x to xx and use your class assignment trick (class(xx)
>>> <- class(xx)) then xx prints as expected even though it did not prior
>>> to the class assignment.
>>> 
 Ops.X <- function(e1, e2) { print("Ops.X"); NextMethod(.Generic) }
 print.X <- function(x, ...) print("print.X")
 x <- structure(1, class = "X")
 dput(x)
>>> structure(1, class = "X")
 dput(1*x)
>>> [1] "Ops.X"
>>> structure(1, class = "X")
 identical(x, 1*x)
>>> [1] "Ops.X"
>>> [1] FALSE
 1*x
>>> [1] "Ops.X"
>>> [1] 1
>>> attr(,"class")
>>> [1] "X"
 x
>>> [1] "print.X"
 xx <- 1*x
>>> [1] "Ops.X"
 class(xx) <- class(xx)
 xx
>>> [1] "print.X"
>> 
>> Or, to minimize it further:
>> 
>>> x <- structure(1, class="y")
>>> is.object(x)
>> [1] TRUE
>>> is.object(x*1)
>> [1] TRUE
>>> is.object(1*x)
>> [1] FALSE
>>> class(x*1)
>> [1] "y"
>>> class(1*x)
>> [1] "y"
>> 
>> Yup, that looks like a bug.
> 
> I recently came across the following surprising behaviour which turns
> out to be the same issue. I had been meaning to ask for an
> explanation.
> 
>> x <- 1:20
>> class(x)
> [1] "integer"
>> is.object(x)
> [1] FALSE
>> print.integer <- function(x) print(x %% 5)
>> print(x)
> [1] 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0
>> x
> [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
> 


... that is an entirely different issue. x is still not an object because it 
doesn't have any explicit S3 class so it has nothing in common with the case 
discussed. This is about the P in REPL, which uses PrintValueEnv, which in turn 
dispatches to print() only for objects (see main and print).

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R-2.11.1 build and 'so' libraries withouth the 'lib' prefix

2010-07-12 Thread Simon Urbanek

On Jul 12, 2010, at 5:29 AM, lI wrote:

> Greetings,
> 
> I have a computer with the following setup:
> 1)cblfs (pure 64-bit (amd64) linux), kernel2.6.34 gcc4.4.2 
> 2)R-2.11.1
> 
> I compiled R with BLAS and lapack using the switches  ( --with-blas="-
> lpthread -latlas  -lfptf77blas" 
> --with-lapack="-llapack -lcblas"  ). 
> (( http://cran.r-project.org/doc/manuals/R-admin.html#Linear-algebra) )
> 
> Prior to compiling R-2.11.1
> sh conigure --help   gave  options including  the following
> SHLIB_LD command for linking shared libraries 
>which contain object files  from a C or Fortran compiler only  
> 
> SHLIB_LDFLAGS   special flags used by SHLIB_LD
> SHLIB_CXXLDFLAGS   special flags used by SHLIB_CXXLD  
>  
> SHLIB_FCD   command for linking shared libraries which contain object files 
>  from the Fortran 95 compiler
> SHLIB_FCLDFLAGS  special flags used by SHLIB_FCLD 
> 
> 
> I did not know what to set for these   and accepted whatever the defaults 
> were.  I ended up with shared-libraries which are  as  follows:-
> 
> $R_HOME/lib/{libRblas.so,libRlapack.so  }  i.e. with   the prefix 'lib' 
> and the following shared-libraries  without the 'lib' prefix.
> 

Those are not shared libraries - they are shared objects loaded by R 
dynamically.

Cheers,
Simon



> $R_HOME/modules/{R_X11.so,internet.so,lapack.so,vfonts.so }
> $R_HOME/library/cluster/libs/cluster.so
> $R_HOME/library/foreign/libs/foreign.so
> $R_HOME/library/grDevices/libs/grDevices.so
> $R_HOME/library/grid/libs/grid.so
> $R_HOME/library/KernSmooth/libs/KernSmooth.so
> $R_HOME/library/lattice/libs/lattice.so
> $R_HOME/library/MASS/libs/MASS.so
> $R_HOME/library/Matrix/libs/Matrix.so
> $R_HOME/library/methods/libs/methods.so
> $R_HOME/library/mgcv/libs/mgcv.so
> $R_HOME/library/nlme/libs/nlme.so
> $R_HOME/library/nnet/libs/nnet.so
> $R_HOME/library/rpart/libs/rpart.so
> $R_HOME/library/spatial/libs/spatial.so
> $R_HOME/library/splines/libs/splines.so
> $R_HOME/library/stats/libs/stats.so
> $R_HOME/library/survival/libs/survival.so
> $R_HOME/library/tools/libs/tools.so
> 
> 
> In linux builds   the linker usually looks for libs with the 'lib'  prefix.  
> In 
> this installation  all the libraries   
> ---in $R_HOME/modules
> ---in $R_HOME/library/patha/to/whatevr
> 
> do not have the   'lib' prefix.  
> 
> QUESTION:
> A) does any on list know   SHLIB_LD   SHLIB_LDFLAGS  SHLIB_CXXLDFLAGS
> SHLIB_FCD   SHLIB_FCLDFLAGSsettings for compiling R  and do these result 
> in  so libs with the 'lib' prefix?
> B) If all of A) is negative what is there to be done toto enable 
> generation   libraries in $R_HOME/modules $R_HOME/library/~  with the 'lib' 
> prefix?
> 
> 
> suggestions welcomed.
> 
> luxInteg
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] R-2.11.1 build and 'so' libraries withouth the 'lib' prefix//update

2010-07-12 Thread Simon Urbanek

On Jul 12, 2010, at 2:50 PM, lI wrote:

> On Monday 12 July 2010 11:07:23 lI wrote:
>> On Monday 12 July 2010 10:29:30 lI wrote:
>>> Greetings,
>>> 
>>> I have a computer with the following setup:
>>> 1)cblfs (pure 64-bit (amd64) linux), kernel2.6.34 gcc4.4.2
>>> 2)R-2.11.1
>>> 
>>> I compiled R with BLAS and lapack  using the switched  ( --with-blas="-
>>> lpthread -latlas  -lfptf77blas"
>>> --with-lapack="-llapack -lcblas"  ).
>>> (( http://cran.r-project.org/doc/manuals/R-admin.html#Linear-algebra) )
>>> 
>>> Prior to compiling R-2.11.1
>>> sh conigure --help   gave  options including  the following
>>> SHLIB_LD command for linking shared libraries
>>>which contain object files  from a C or Fortran compiler
>>> only SHLIB_LDFLAGS   special flags used by SHLIB_LD
>>> SHLIB_CXXLDFLAGS   special flags used by SHLIB_CXXLD
>>> SHLIB_FCD   command for linking shared libraries which contain object
>>> files from the Fortran 95 compiler
>>> SHLIB_FCLDFLAGS  special flags used by SHLIB_FCLD
>>> 
>>> 
>>> I did not know what to set for these   and accepted whatever the defaults
>>> were.  I ended up with shared-libraries which are  as  follows:-
>>> 
>>> $R_HOME/lib/{libRblas.so,libRlapack.so  }  i.e. with   the prefix 'lib'
>>> and the following shared-libraries  without the 'lib' prefix.
>>> 
>>> $R_HOME/modules/{R_X11.so,internet.so,lapack.so,vfonts.so }
>>> $R_HOME/library/cluster/libs/cluster.so
>>> $R_HOME/library/foreign/libs/foreign.so
>>> $R_HOME/library/grDevices/libs/grDevices.so
>>> $R_HOME/library/grid/libs/grid.so
>>> $R_HOME/library/KernSmooth/libs/KernSmooth.so
>>> $R_HOME/library/lattice/libs/lattice.so
>>> $R_HOME/library/MASS/libs/MASS.so
>>> $R_HOME/library/Matrix/libs/Matrix.so
>>> $R_HOME/library/methods/libs/methods.so
>>> $R_HOME/library/mgcv/libs/mgcv.so
>>> $R_HOME/library/nlme/libs/nlme.so
>>> $R_HOME/library/nnet/libs/nnet.so
>>> $R_HOME/library/rpart/libs/rpart.so
>>> $R_HOME/library/spatial/libs/spatial.so
>>> $R_HOME/library/splines/libs/splines.so
>>> $R_HOME/library/stats/libs/stats.so
>>> $R_HOME/library/survival/libs/survival.so
>>> $R_HOME/library/tools/libs/tools.so
>>> 
>>> 
>>> In linux builds   the linker usually looks for libs with the 'lib' 
>>> prefix. In this installation  all the libraries
>>> ---in $R_HOME/modules
>>> ---in $R_HOME/library/patha/to/whatevr
>>> 
>>> do not have the   'lib' prefix.
>>> 
>>> QUESTION:
>>> A) does any on list know   SHLIB_LD   SHLIB_LDFLAGS  SHLIB_CXXLDFLAGS
>>> SHLIB_FCD   SHLIB_FCLDFLAGSsettings for compiling R  and do these
>>> result in  so libs with the 'lib' prefix?
>>> B) If all of A) is negative what is there to be done toto enable
>>> generation   libraries in $R_HOME/modules $R_HOME/library/~  with the
>>> 'lib' prefix?
>> 
>> I forgot to add my configure/make  options.  These were:-
>> 
>> export BUILD64="-m64"
>> sh configure \
>> CC="gcc $BUILD64" \
>> CXX="g++ $BUILD64" \
>> F77="gfortran  $BUILD64" \
>> FC="gfortran  $BUILD64" \
>> JAVA_HOME=$JAVA_HOME \
>> LIBnn=lib64 \
>> CPPFLAGS="-I$ATLAS_HOME/include -I/opt/acml4.4.0/gfortran64_mp/include  -
>> I/opt/acml4.4.0/gfortran64/include -I/usr/local/numerics/include" \
>> LDFLAGS="-L$ATLAS_HOME/lib  -L/opt/acml4.4.0/gfortran64_mp/lib -
>> L/usr/local/numerics/lib -L/usr/lib" \
>> --prefix=/opt/TEST/R-2.11.1 \
>> --x-includes=$XORG_PREFIX/include \
>> --x-libraries=$XORG_PREFIX/lib \
>> --with-tcl-config=/usr/lib  \
>> --with-tk-config=/usr/lib \
>> --with-system-zlib=/usr  \
>> --with-system-bzlib=/usr \
>> --with-system-pcre=/usr  \
>> --with-x  \
>> --with-libpth-prefix=/usr \
>> --with-libintl-prefix=/usr \
>> --with-blas="-lpthread -latlas  -lfptf77blas" \
>> --with-lapack="-llapack -lcblas" \
>> --enable-R-shlib \
>> --enable-BLAS-shlib
>> 
>> 
>> The sources  compiled trouble-free.   with  'make'  as normal user.
>> I then ran  'make install' as  super-user  and int installed in the prefix
>> as set..
>> 
>> A reply to the following questions would be much appreciateed:-
>> QUESTION:
>> A) does any on list know   SHLIB_LD   SHLIB_LDFLAGS  SHLIB_CXXLDFLAGS
>> SHLIB_FCD   SHLIB_FCLDFLAGSsettings for compiling R  and do these
>>  result in  so libs with the 'lib' prefix?
>> B) If all of A) is negative what is there to be done to   enable
>> generation  of  shared-libraries in $R_HOME/modules/ and 
>> $R_HOME/library/~ directories/sub-directories  with the 'lib'prefix?
>> 
>> 
> thanks everyone for the help in clarifying what were share-libraries  and 
> otherwise.
> 
> As stated above I used two switches
> --enable-R-shlib \
> --enable-BLAS-shlib
> in the   configure options of the downloaded R-2.11.1 source  prior to 
> compilation.
> 
> I am compiling  R-2.11.1   as  an optional dependency of kdeedu-4.4.5.
> This 
> asks for R-shlib (which I noticed is disabled by default in the 
> configure script).
> 
> After compile and install of R-2.11.1,  I am only able to start R from 

Re: [Rd] R-2.11.1 build and 'so' libraries withouth the 'lib' prefix//update

2010-07-12 Thread Simon Urbanek

On Jul 12, 2010, at 5:59 PM, lI wrote:

> On Monday 12 July 2010 20:52:15 Simon Urbanek wrote:
>> On Jul 12, 2010, at 2:50 PM, lI wrote:
>>> On Monday 12 July 2010 11:07:23 lI wrote:
>>>> On Monday 12 July 2010 10:29:30 lI wrote:
>>>>> Greetings,
>>>>> 
>>>>> I have a computer with the following setup:
>>>>> 1)cblfs (pure 64-bit (amd64) linux), kernel2.6.34 gcc4.4.2
>>>>> 2)R-2.11.1
>>>>> 
>>>>> I compiled R with BLAS and lapack  using the switched  ( --with-blas="-
>>>>> lpthread -latlas  -lfptf77blas"
>>>>> --with-lapack="-llapack -lcblas"  ).
>>>>> (( http://cran.r-project.org/doc/manuals/R-admin.html#Linear-algebra) )
>>>>> 
>>>>> Prior to compiling R-2.11.1
>>>>> sh conigure --help   gave  options including  the following
>>>>> SHLIB_LD command for linking shared libraries
>>>>>   which contain object files  from a C or Fortran compiler
>>>>> only SHLIB_LDFLAGS   special flags used by SHLIB_LD
>>>>> SHLIB_CXXLDFLAGS   special flags used by SHLIB_CXXLD
>>>>> SHLIB_FCD   command for linking shared libraries which contain object
>>>>> files from the Fortran 95 compiler
>>>>> SHLIB_FCLDFLAGS  special flags used by SHLIB_FCLD
>>>>> 
>>>>> 
>>>>> I did not know what to set for these   and accepted whatever the
>>>>> defaults were.  I ended up with shared-libraries which are  as 
>>>>> follows:-
>>>>> 
>>>>> $R_HOME/lib/{libRblas.so,libRlapack.so  }  i.e. with   the prefix 'lib'
>>>>> and the following shared-libraries  without the 'lib' prefix.
>>>>> 
>>>>> $R_HOME/modules/{R_X11.so,internet.so,lapack.so,vfonts.so }
>>>>> $R_HOME/library/cluster/libs/cluster.so
>>>>> $R_HOME/library/foreign/libs/foreign.so
>>>>> $R_HOME/library/grDevices/libs/grDevices.so
>>>>> $R_HOME/library/grid/libs/grid.so
>>>>> $R_HOME/library/KernSmooth/libs/KernSmooth.so
>>>>> $R_HOME/library/lattice/libs/lattice.so
>>>>> $R_HOME/library/MASS/libs/MASS.so
>>>>> $R_HOME/library/Matrix/libs/Matrix.so
>>>>> $R_HOME/library/methods/libs/methods.so
>>>>> $R_HOME/library/mgcv/libs/mgcv.so
>>>>> $R_HOME/library/nlme/libs/nlme.so
>>>>> $R_HOME/library/nnet/libs/nnet.so
>>>>> $R_HOME/library/rpart/libs/rpart.so
>>>>> $R_HOME/library/spatial/libs/spatial.so
>>>>> $R_HOME/library/splines/libs/splines.so
>>>>> $R_HOME/library/stats/libs/stats.so
>>>>> $R_HOME/library/survival/libs/survival.so
>>>>> $R_HOME/library/tools/libs/tools.so
>>>>> 
>>>>> 
>>>>> In linux builds   the linker usually looks for libs with the 'lib'
>>>>> prefix. In this installation  all the libraries
>>>>> ---in $R_HOME/modules
>>>>> ---in $R_HOME/library/patha/to/whatevr
>>>>> 
>>>>> do not have the   'lib' prefix.
>>>>> 
>>>>> QUESTION:
>>>>> A) does any on list know   SHLIB_LD   SHLIB_LDFLAGS  SHLIB_CXXLDFLAGS
>>>>> SHLIB_FCD   SHLIB_FCLDFLAGSsettings for compiling R  and do these
>>>>> result in  so libs with the 'lib' prefix?
>>>>> B) If all of A) is negative what is there to be done toto enable
>>>>> generation   libraries in $R_HOME/modules $R_HOME/library/~  with the
>>>>> 'lib' prefix?
>>>> 
>>>> I forgot to add my configure/make  options.  These were:-
>>>> 
>>>> export BUILD64="-m64"
>>>> sh configure \
>>>> CC="gcc $BUILD64" \
>>>> CXX="g++ $BUILD64" \
>>>> F77="gfortran  $BUILD64" \
>>>> FC="gfortran  $BUILD64" \
>>>> JAVA_HOME=$JAVA_HOME \
>>>> LIBnn=lib64 \
>>>> CPPFLAGS="-I$ATLAS_HOME/include -I/opt/acml4.4.0/gfortran64_mp/include 
>>>> - I/opt/acml4.4.0/gfortran64/include -I/usr/local/numerics/include" \
>>>> LDFLAGS="-L$ATLAS_HOME/lib  -L/opt/acml4.4.0/gfortran64_mp/lib -
>>>> L/usr/local/numerics/lib -L/usr/lib" \
>>>> --prefix=/opt/TEST/R-2.11.1 \
>>>> --x-includes=$XORG_PREFI

Re: [Rd] Precompiled vignette on CRAN

2010-07-14 Thread Simon Urbanek
On Jul 14, 2010, at 4:04 PM, Prof Brian Ripley wrote:

> On Wed, 14 Jul 2010, Felix Schönbrodt wrote:
> 
>> Hello,
>> 
>> my package passes R CMD check without any warnings on my local machine (Mac 
>> OS), as well as on Uwe Ligges' Winbuilder. On RForge, however, we sometimes 
>> run into problems building the Sweave vignettes.
> 
> Just 'problems' is not helpful.
> 

FWIW I think it was a red herring - after bugfixes to the package and 
installing (unstated*) dependencies it works (possibly use of require instead 
of library in the vignette might help - I don't know what the best practice is).


>> Now here's my question: is it necessary for a CRAN submission that the 
>> Sweave vignettes can be compiled on CRAN, or is it possible to provide the 
>> (locally compiled) pdf vignette to be included in the package?
> 
> This really is a question to ask the CRAN gatekeepers, but people are on 
> vacation right now, so I've give some indication of my understanding.
> 
> What does 'compiled' mean here?  (Run through LaTeX?  Run the R code?) There 
> are examples on CRAN of packages which cannot re-make their vignettes without 
> external files (e.g. LaTeX style files), or take hours (literally) to run the 
> code.  The source package should contain the PDF versions of the vignettes as 
> made by the author.
> 
> There is relevant advice in 'Writing R Extensions'.
> 
> What the people who do the CRAN package checks do get unhappy about are 
> packages which fail running the R code in their vignettes, since this often 
> indicates a problem in the package which is not exercised by the examples nor 
> tests.  This gives a warning, as you will see in quite a few CRAN package 
> checks.
> 

In the case of TripleR R CMD check failed (i.e. error code != 0) because of the 
vignette.

Cheers,
Simon

* - which reminds me -- what is the correct place to list vignette 
dependencies? "Suggests:" ?

R CMD check simply fails in the vignette build when vignettes use unstated 
dependencies [via library()] - there is no explicit warning/error by check 
itself.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


[Rd] bug in identical()? [Was: [R-sig-ME] Failure to load lme4 on Mac]

2010-07-17 Thread Simon Urbanek
Daniel,

thanks for the test case. I did run it in valgrind but nothing showed up, 
however ... 

I'm starting to have a suspicion that this has something to do with identical() 
- look at this:

> identical(M1,M2)
[1] FALSE
> all(serialize(M1,NULL)==serialize(M2,NULL))
[1] TRUE
> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M2,NULL)))
[1] FALSE
> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M1,NULL)))
[1] FALSE

So I think this may be a bug in identical() mainly because of the last one. 
I'll need to take identical() apart to see where it fails ... I'm CCing this to 
R-devel as the current issue seems more like an R issue so more eyes can have a 
look ...

Cheers,
Simon


[FWIW this is tested in today's R-devel (with valgrind level 2) on x86_64 OS X 
10.6.4 with lme4 from CRAN and Matrix form R-devel Recommended]


On Jul 17, 2010, at 4:50 AM, Daniel Myall wrote:

> I've done some further testing (R 2.11.1) and the issue is not limited to 
> Leopard.
> 
> Using the test:
> 
> library(lme4)
> y <- (1:20)*pi
> x <- (1:20)^2
> group <- gl(2,10)
> for (i in 1:10) {
>  M1 <- lmer (y ~ x + (x | group))
>  M2 <- lmer (y ~ x + (x | group))
>  print(identical(M1,M2))
> }
> 
> For CRAN lme4 and Matrix:
> 
> 32 bit on Leopard: R CMD check fails; different results (on most runs)
> 32 bit on Snow Leopard: R CMD check passes; different results (on some runs).
> 64 bit on Snow Leopard: R CMD check passes; identical results
> 
> For SVN version of Matrix with CRAN lme4:
> 
> 32 bit on Snow Leopard: different results (on all runs).
> 64 bit on Snow Leopard: different results (on all runs)
> 
> For SVN version of Matrix with SVN lme4a:
> 
> 32 bit on Snow Leopard: different results (on all runs).
> 64 bit on Snow Leopard: identical results
> 
> I couldn't reproduce on Linux 32/64bit. Is it time to jump into valgrind to 
> try and find the cause?
> 
> Cheers,
> Daniel
> 
> 
> 
> On 17/07/10 5:51 PM, John Maindonald wrote:
>> In principle, maybe a Snow Leopard version might be posted
>> as an alternative, if someone can provide one.  But I take it
>> that the issue is now a bit wider than tests that fail on Leopard
>> vs passing on Snow Leopard?
>>   
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] bug in identical()? [Was: [R-sig-ME] Failure to load lme4 on Mac]

2010-07-17 Thread Simon Urbanek
Ok, I think I found the issue. I'm not sure why this varies by platform but the 
mismatch is due to the @env slot. Two environments are only identical if they are 
*the* same environment (i.e. the same pointer). However, M1 and M2 have 
different environments. The content of those environments is identical, but 
that is irrelevant as it's not the same pointer. Hence identical(M1, M2) fails 
(and serialize comparison succeeds as it cares only about the content).

So the short story is don't use identical() to compare the models (unless you 
remove @env first). The long story raises the question whether identical() 
should really return FALSE for environments like
> identical(new.env(),new.env())
[1] FALSE
I can see arguments both ways but for the purposes of comparing values there 
should be an option that the above is TRUE.

To be honest I don't see why this has not shown up on other platforms as that 
is a global issue... (I hope this is the full story - I didn't try all the 
combinations to see if setting @env to the same environment will appease 
identical() for all the models)

Cheers,
Simon


On Jul 17, 2010, at 3:49 PM, Simon Urbanek wrote:

> Daniel,
> 
> thanks for the test case. I did run it in valgrind but nothing showed up, 
> however ... 
> 
> I'm starting to have a suspicion that this has something to do with 
> identical() - look at this:
> 
>> identical(M1,M2)
> [1] FALSE
>> all(serialize(M1,NULL)==serialize(M2,NULL))
> [1] TRUE
>> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M2,NULL)))
> [1] FALSE
>> identical(unserialize(serialize(M1,NULL)),unserialize(serialize(M1,NULL)))
> [1] FALSE
> 
> So I think this may be a bug in identical() mainly because of the last one. 
> I'll need to take identical() apart to see where it fails ... I'm CCing this 
> to R-devel as the current issue seems more like an R issue so more eyes can 
> have a look ...
> 
> Cheers,
> Simon
> 
> 
> [FWIW this is tested in today's R-devel (with valgrind level 2) on x86_64 OS 
> X 10.6.4 with lme4 from CRAN and Matrix form R-devel Recommended]
> 
> 
> On Jul 17, 2010, at 4:50 AM, Daniel Myall wrote:
> 
>> I've done some further testing (R 2.11.1) and the issue is not limited to 
>> Leopard.
>> 
>> Using the test:
>> 
>> library(lme4)
>> y <- (1:20)*pi
>> x <- (1:20)^2
>> group <- gl(2,10)
>> for (i in 1:10) {
>> M1 <- lmer (y ~ x + (x | group))
>> M2 <- lmer (y ~ x + (x | group))
>> print(identical(M1,M2))
>> }
>> 
>> For CRAN lme4 and Matrix:
>> 
>> 32 bit on Leopard: R CMD check fails; different results (on most runs)
>> 32 bit on Snow Leopard: R CMD check passes; different results (on some runs).
>> 64 bit on Snow Leopard: R CMD check passes; identical results
>> 
>> For SVN version of Matrix with CRAN lme4:
>> 
>> 32 bit on Snow Leopard: different results (on all runs).
>> 64 bit on Snow Leopard: different results (on all runs)
>> 
>> For SVN version of Matrix with SVN lme4a:
>> 
>> 32 bit on Snow Leopard: different results (on all runs).
>> 64 bit on Snow Leopard: identical results
>> 
>> I couldn't reproduce on Linux 32/64bit. Is it time to jump into valgrind to 
>> try and find the cause?
>> 
>> Cheers,
>> Daniel
>> 
>> 
>> 
>> On 17/07/10 5:51 PM, John Maindonald wrote:
>>> In principle, maybe a Snow Leopard version might be posted
>>> as an alternative, if someone can provide one.  But I take it
>>> that the issue is now a bit wider than tests that fail on Leopard
>>> vs passing on Snow Leopard?
>>> 
>> 
>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

