[Rd] [patch] giving library() a 'version' argument
I've made a small enhancement to R that would help developers better control what versions of code we're using where. Basically, to load a package in R, one currently does:

    library(whateverPackage)

and with the enhancement, you can ensure that you're getting at least version X of the package:

    library(whateverPackage, version=3.14)

Reasons one might want this include:

* you know that in version X some bug was fixed
* you know that in version X some feature was added
* that's the first version you've actually tested it with & you don't want to vouch for earlier versions without testing
* you develop on one machine & deploy on another machine you don't control, and you want runtime checks that the sysadmin installed what they were supposed to install

In general, I have an interest in helping R get better at various things that would help it play in a "production environment", for various values of that term. =)

The attached patch is made against revision 58980 of https://svn.r-project.org/R/trunk . I think this is the first patch I've submitted to the R core, so please let me know if anything's amiss, or of course if there are reservations about the approach.

Thanks.

--
Ken Williams, Senior Research Scientist
WindLogics
http://windlogics.com

CONFIDENTIALITY NOTICE: This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution of any kind is strictly prohibited. If you are not the intended recipient, please contact the sender via reply e-mail and destroy all copies of the original message. Thank you.

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
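[Editor's note: without the patch, the same check can be approximated at user level with existing base/utils functions. `library_min` below is a hypothetical helper, not part of base R or of the patch.]

```r
## Hypothetical helper approximating the proposed 'version' argument:
## refuse to attach 'pkg' unless at least 'min_version' is installed.
library_min <- function(pkg, min_version) {
  pver <- as.character(utils::packageVersion(pkg))
  if (utils::compareVersion(pver, as.character(min_version)) < 0)
    stop(sprintf("Version %s of '%s' required, but only %s is available",
                 min_version, pkg, pver))
  library(pkg, character.only = TRUE)
  invisible(TRUE)
}

library_min("stats", "1.0")   # base package, always new enough
```

The patch does the equivalent inside library() itself, so the check also applies when the package is already on the search path.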
[Rd] Vignette questions
Context: R 2.15.0 on Ubuntu.

1. I get a WARNING from CMD check for "Package vignette(s) without corresponding PDF". In this case the vignettes directory had both the pdf and Rnw; do I need to move the pdf to inst/doc?

I'm reluctant to add the pdf to the svn source on Rforge, per the usual rule that a code management system should not have both a primary source and an object derived from it under version control. However, if this is the suggested norm I could do so.

2. Close reading of the paragraph about vignette sources shows the following -- I think? If I have a vignette that should not be rebuilt by "check" or "BUILD", I should put the .Rnw source and pdf in /inst/doc, and have the others that should be rebuilt in /vignettes. This would include any that use "private R packages, screen snapshots, ...", or in my case one that takes just a little short of forever to run.

3. Do these unprocessed packages also contribute to the index via \VignetteIndexEntry lines, or will I need to create a custom index?

Terry Therneau
[Rd] Byte compilation of packages on CRAN
In DESCRIPTION, if I set LazyLoad to 'yes', will data.table (for example) then be byte compiled for users who install the binary package from CRAN on Windows?

This question is based on reading section 1.2 of this document: http://www.divms.uiowa.edu/~luke/R/compiler/compiler.pdf

I've searched r-devel and Stack Overflow history and have found questions and answers relating to R CMD INSTALL and install.packages() from source, but no answer (as yet) about why binary packages for Windows appear not to be byte compiled.

If so, is there any reason why all packages should not set LazyLoad to 'yes'? And if not, could LazyLoad be 'yes' by default?

Thanks,
Matthew
Re: [Rd] Vignette questions
For 1, you should run R CMD check on the tarball (pkg_x.x.tar.gz) from R CMD build instead of on the source directory. R CMD build will build the PDF vignette into the tarball.

For 2, I have been confused by ./vignettes vs ./inst/doc since ./vignettes was introduced. I might be able to figure it out by trial and error but I never tried, and I'm still sticking to ./inst/doc. At least you can exclude the Rnw source in .Rbuildignore so that R can only stare at your PDF documents and sigh.

For 3, I remember some of us requested that R could also respect entries of non-Sweave vignettes (like the ones in ./demo/00Index), but this is not possible as far as I know. However, I can tell you a dark voodoo seems to be still working: you can write your own index.html under ./inst/doc with your own links to vignettes.

Regards,
Yihui
--
Yihui Xie
Phone: 515-294-2465 Web: http://yihui.name
Department of Statistics, Iowa State University
2215 Snedecor Hall, Ames, IA

On Wed, Apr 11, 2012 at 3:41 PM, Terry Therneau wrote:
> [...]
Re: [Rd] Vignette questions
Very quick & short answer: I made the transition to ./vignettes for hyperSpec (you can look at the source at r-forge) - it was a mess. It is almost working now (compression is missing; I'll have to figure out how to invoke ghostscript in an OS-independent way, as qpdf doesn't give me the compression rates).

I have the .pdf under version control only for those vignettes that I build externally (with the fake .Rnws that produce the proper indexing as vignette). They are all (regardless of whether BUILD and check affect them or not) in ./vignettes. I think that strategy can at least partially solve the issue with non-Sweave vignettes as well, as long as the result is a pdf. You may need to play with ./vignettes/.install_extras if additional, non-pdf and non-Rnw files are needed.

Best,
Claudia

From: r-devel-boun...@r-project.org [r-devel-boun...@r-project.org] on behalf of Yihui Xie [x...@yihui.name]
Sent: Wednesday, April 11, 2012 23:28
To: Terry Therneau
Cc: r-devel@r-project.org
Subject: Re: [Rd] Vignette questions

> [...]
[Rd] unexpectedly high memory use in R 2.14.0
I recently started using R 2.14.0 on a new machine and I am experiencing what seems like unusually greedy memory use. It happens all the time, but to give a specific example, let's say I run the following code:

for(j in 1:length(files)){
   load(file.path(dump.dir, files[j]))
   mat.data[[j]] <- data
}
save(abind(mat.data, along=2), file = file.path(dump.dir, filename))

It loads parts of a multidimensional matrix into a list, then binds it along the second dimension and saves the result to disk. The code works, although slowly, but what's strange is the amount of memory it uses. In particular, each chunk of data is between 50M and 100M, and altogether the combined matrix is 1.3G. One would expect R to use roughly double that memory - to keep mat.data and its combined version separately - or about 2.6G. I could imagine that it could somehow use 3 times the size of the matrix. But in fact it uses more than 5.5 times that (almost all of my physical memory) and I think it is swapping a lot to disk. For this particular task, my top output shows R eating more than 7G of memory and using up 11G of virtual memory as well:

$ top
  PID USER  PR  NI  VIRT   RES  SHR  S %CPU %MEM    TIME+ COMMAND
 8823 user  25   0   11g  7.2g  10m  R 99.7 92.9  5:55.05 R
 8590 root  15   0  154m   16m 5948  S  0.5  0.2 23:22.40 Xorg

I have a strong suspicion that something is off with my R binary; I don't think I have experienced things like this in a long time. Is this in line with what I am supposed to experience? Are there any ideas for diagnosing what is going on? Would appreciate any suggestions.

Thanks
Andre

==

Here is what I am running on:

CentOS release 5.5 (Final)

> sessionInfo()
R version 2.14.0 (2011-10-31)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
[1] en_US.UTF-8

attached base packages:
[1] stats graphics grDevices datasets utils methods base

other attached packages:
[1] abind_1.4-0 rJava_0.9-3 R.utils_1.12.1 R.oo_1.9.3 R.methodsS3_1.2.2

loaded via a namespace (and not attached):
[1] codetools_0.2-8 tcltk_2.14.0 tools_2.14.0

I configured R as follows:
./configure --prefix=/usr/local/R --enable-byte-compiled-packages=no --with-tcltk --enable-R-shlib=yes
Re: [Rd] unexpectedly high memory use in R 2.14.0
On Apr 12, 2012, at 00:53, andre zege wrote:

> I recently started using R 2.14.0 on a new machine and i am experiencing
> what seems like unusually greedy memory use. It happens all the time, but
> to give a specific example, let's say i run the following code
>
> for(j in 1:length(files)){
>    load(file.path(dump.dir, files[j]))
>    mat.data[[j]] <- data
> }
> save(abind(mat.data, along=2), file = file.path(dump.dir, filename))

Hmm, did you preallocate mat.data? If not, you will be copying it repeatedly, and I'm not sure that this can be done by copying pointers only.

Does it work better with

mat.data <- lapply(files, function(name) { load(file.path(dump.dir, name)); data })

?

> [...]

--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com
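[Editor's note: the lapply idiom above can be made self-contained as follows; the file names, chunk sizes, and the use of cbind as a stand-in for abind(..., along = 2) are all invented for illustration. load() restores the object named 'data' into the calling frame, which is why the anonymous function can return it.]

```r
## Create three tiny "chunk" files in a temp directory, each saving
## an object named 'data', then reload and bind them as in the thread.
dump.dir <- tempdir()
for (j in 1:3) {
  data <- matrix(j, 2, 2)
  save(data, file = file.path(dump.dir, paste0("chunk", j, ".RData")))
}
files <- paste0("chunk", 1:3, ".RData")

mat.data <- lapply(files, function(name) {
  load(file.path(dump.dir, name))  # restores 'data' into this frame
  data
})

combined <- do.call(cbind, mat.data)  # stand-in for abind(..., along = 2)
dim(combined)  # 2 6
```

Building the list with lapply avoids growing mat.data element by element, which is one of the copy sources Peter alludes to.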
Re: [Rd] Vignette questions
On 12-04-11 04:41 PM, Terry Therneau wrote:

> Context: R 2.15.0 on Ubuntu.
>
> 1. I get a WARNING from CMD check for "Package vignette(s) without
> corresponding PDF". In this case the vignettes directory had both the
> pdf and Rnw; do I need to move the pdf to inst/doc?

Yes, you need to put the pdf in the inst/doc directory if it cannot be built by R-forge and CRAN check machines, but leave the Rnw in the vignettes directory.

> I'm reluctant to add the pdf to the svn source on Rforge, per the usual
> rule that a code management system should not have both a primary source
> and an object derived from it under version control. However, if this is
> the suggested norm I could do so.

Yes, I think this is the norm if the vignette cannot be built on CRAN and R-forge, even though it does seem a bit strange. However, you do not necessarily need to update the vignette pdf in inst/doc every time you make a change to the package, even though, in my opinion, the correct logic is to test remaking the vignette when you make a change to the package. You should do this testing, of course; you just do not need to put the new pdf in inst/doc and commit it to svn each time. (But you should probably do that before you build the final package to put on CRAN.)

> 2. [...] If I have a vignette that should not be rebuilt by "check" or
> "BUILD" I should put the .Rnw source and pdf in /inst/doc, and have the
> others that should be rebuilt in /vignettes.

I don't think it is intended to say that, and I didn't read it that way. I think putting the Rnw in inst/doc is supported (temporarily?) for historical reasons only. If it is not in vignettes/ and is found in inst/doc/, it is treated the same way as if it were in vignettes/. You can include screen snapshots, etc., in either case.

For your situation, what you probably do need to do is specify "BuildVignettes: false" in the DESCRIPTION file. This prevents the pdf for inst/doc from being generated from the Rnw. However, it does not prevent R CMD check from checking that the R code extracted from the Rnw actually runs, and generating an error if it does not. To prevent testing of the R code, you have to appeal directly to the CRAN and R-forge maintainers, and they will put the package on a special list. You do need to give them a good reason why the code should not be tested. I think they are sympathetic with "takes forever to run" and not very sympathetic with "does not work anymore". Generally, I think they want to consider doing this only in exceptional cases, so they do not get into a situation of having lots of broken vignettes. (One should stick with journal articles for recording broken code.)

> 3. Do these unprocessed packages also contribute to the index via
> \VignetteIndexEntry lines, or will I need to create a custom index?

I'm not sure of the answer to this, but would be curious to know. You may need to rely on voodoo.

Paul
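[Editor's note: for reference, the field mentioned above is a one-line addition to the package's DESCRIPTION file; the package name and version below are placeholders.]

    Package: mypkg
    Version: 0.1-0
    BuildVignettes: false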
Re: [Rd] unexpectedly high memory use in R 2.14.0
You are quite right that my exec time would seriously go down if I preallocated and didn't even use abind, just assigning into the preallocated matrix. The reason I didn't do it here is that this is part of a utility function that doesn't know the sizes of the chunks that are on disk until it reads all of them. If I knew a way to read dimnames off disk without reading whole matrices, I could do what you are suggesting. I guess I am better off using file-backed matrices from bigmemory, where I could read dimnames off disk without reading the matrix. I need to unwrap 4-dim arrays into 2-dim arrays and wrap them back, but I guess it would be faster anyway.

My question, however, was not so much about the speed improvement of a particular task. It was whether this use of 7.2G of physical memory and 11G of virtual makes sense when I am building a 1.3G matrix with this code. It just seems to me that my memory use goes to almost 100% of physical not just on this task but on others. I wonder if there is something seriously off with my setup and whether I should rebuild R.

As for your lapply solution, it indeed used much less memory - in fact about 25% less than the loop, about 4 times the size of the final object. I am still not clear whether my memory use makes sense in terms of R's memory model, and I am frankly not clear why lapply uses less memory. (I understand why it does less copying.)

On Wed, Apr 11, 2012 at 7:15 PM, peter dalgaard wrote:
> [...]
Re: [Rd] unexpectedly high memory use in R 2.14.0
Leaving aside what's going on inside abind::abind(), maybe the following sheds some light on what is being wasted:

# Preallocate (probably doesn't make a difference because it's a list)
mat.data <- vector("list", length=length(files))
for (j in 1:length(files)) {
  vars <- load(file.path(dump.dir, files[j]))
  mat.data[[j]] <- data
  # Not needed anymore/remove everything loaded
  rm(list=vars)
}
data <- abind(mat.data, along=2)
# Not needed anymore
rm(mat.data)
save(data, file = file.path(dump.dir, filename))

My $.02

/Henrik

On Wed, Apr 11, 2012 at 3:53 PM, andre zege wrote:
> [...]
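[Editor's note: one way to check whether the workspace itself, rather than allocator overhead or fragmentation, accounts for the memory is to compare the summed object.size() of the chunks with what gc() reports after a full collection. The chunk sizes below are made up, not Andre's 1.3G data.]

```r
## Sum the size of the list elements, then ask R's own accounting.
chunks <- lapply(1:5, function(i) matrix(as.numeric(i), 100, 100))
bytes  <- sum(sapply(chunks, function(x) as.numeric(object.size(x))))
cat(sprintf("chunks take %.2f Mb\n", bytes / 2^20))

g <- gc()               # matrix with "used"/"(Mb)" columns per cell type
used_mb <- sum(g[, 2])  # total Mb in use (Ncells + Vcells) after collection
cat(sprintf("R reports %.1f Mb in use\n", used_mb))
```

A large gap between the two figures points at temporaries or copies held alive somewhere, rather than at the final objects themselves.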
Re: [Rd] Byte compilation of packages on CRAN
On 11/04/2012 20:36, Matthew Dowle wrote:

> In DESCRIPTION if I set LazyLoad to 'yes' will data.table (for example)
> then be byte compiled for users who install the binary package from CRAN
> on Windows?

No. LazyLoad is distinct from byte compilation. All installed packages use lazy loading these days (for simplicity: a very few do not benefit from it as they use all their objects at startup).

> This question is based on reading section 1.2 of this document:
> http://www.divms.uiowa.edu/~luke/R/compiler/compiler.pdf
> I've searched r-devel and Stack Overflow history and have found questions
> and answers relating to R CMD INSTALL and install.packages() from source,
> but no answer (as yet) about why binary packages for Windows appear not
> to be byte compiled.
> If so, is there any reason why all packages should not set LazyLoad to
> 'yes'? And if not, could LazyLoad be 'yes' by default?

I wonder why you are not reading R's own documentation. 'Writing R Extensions' says

'The `LazyData' logical field controls whether the R datasets use lazy-loading. A `LazyLoad' field was used in versions prior to 2.14.0, but now is ignored.

The `ByteCompile' logical field controls if the package code is byte-compiled on installation: the default is currently not to, so this may be useful for a package known to benefit particularly from byte-compilation (which can take quite a long time and increases the installed size of the package).'

Note that the majority of CRAN packages benefit very little from byte-compilation because almost all the time of their computations is spent in compiled code. And the increased size also may matter when the code is loaded into R.

--
Brian D. Ripley, rip...@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK  Fax: +44 1865 272595
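[Editor's note: before setting `ByteCompile` in DESCRIPTION, a maintainer can gauge the likely benefit by byte-compiling a representative function by hand with the base compiler package. The loop-heavy example function here is made up.]

```r
library(compiler)

## Interpreted function vs. its byte-compiled counterpart. Loop-heavy
## pure-R code like this is where byte compilation helps most; code
## that mostly calls into C gains little.
f  <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + i; s }
fc <- cmpfun(f)

stopifnot(f(100L) == fc(100L))          # identical results
system.time(for (k in 1:100) f(1e4))    # compare timings by eye
system.time(for (k in 1:100) fc(1e4))
```

If the compiled version shows no meaningful speedup on the package's hot functions, `ByteCompile: yes` mostly buys a larger installed package, as noted above.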
Re: [Rd] [patch] giving library() a 'version' argument
Apparently the patch file got eaten. Let me try again with a .txt extension.

-Ken

> -----Original Message-----
> From: Ken Williams
> Sent: Wednesday, April 11, 2012 10:28 AM
> To: r-devel@r-project.org
> Subject: [patch] giving library() a 'version' argument
>
> I've made a small enhancement to R that would help developers better
> control what versions of code we're using where.
> [...]

Index: src/library/base/man/library.Rd
===================================================================
--- src/library/base/man/library.Rd	(revision 58980)
+++ src/library/base/man/library.Rd	(working copy)
@@ -21,7 +21,7 @@
         character.only = FALSE, logical.return = FALSE,
         warn.conflicts = TRUE, quietly = FALSE,
         keep.source = getOption("keep.source.pkgs"),
-        verbose = getOption("verbose"))
+        verbose = getOption("verbose"), version)
 
 require(package, lib.loc = NULL, quietly = FALSE,
         warn.conflicts = TRUE,
@@ -59,6 +59,9 @@
   \item{quietly}{a logical.  If \code{TRUE}, no message confirming
     package loading is printed, and most often, no errors/warnings are
     printed if package loading fails.}
+  \item{version}{the minimum acceptable version of the package to load.
+    If a lesser version is found, the package will not be loaded and an
+    exception will be thrown.}
 }
 \details{
   \code{library(package)} and \code{require(package)} both load the
@@ -189,6 +192,10 @@
 search()        # "splines", too
 detach("package:splines")
 
+# To require a specific minimum version:
+library(splines, version = '2.14')
+detach("package:splines")
+
 # if the package name is in a character vector, use
 pkg <- "splines"
 library(pkg, character.only = TRUE)

Index: src/library/base/R/library.R
===================================================================
--- src/library/base/R/library.R	(revision 58980)
+++ src/library/base/R/library.R	(working copy)
@@ -32,7 +32,7 @@
 function(package, help, pos = 2, lib.loc = NULL, character.only = FALSE,
          logical.return = FALSE, warn.conflicts = TRUE, quietly = FALSE,
          keep.source = getOption("keep.source.pkgs"),
-         verbose = getOption("verbose"))
+         verbose = getOption("verbose"), version)
 {
     if (!missing(keep.source))
         warning("'keep.source' is deprecated and will be ignored")
@@ -276,6 +276,11 @@
             stop(gettextf("%s is not a valid installed package",
                           sQuote(package)), domain = NA)
         pkgInfo <- readRDS(pfile)
+        if (!missing(version)) {
+            pver <- pkgInfo$DESCRIPTION["Version"]
+            if (compareVersion(pver, as.character(version)) < 0)
+                stop("Version ", version, " of '", package, "' required, but only ", pver, " is available")
+        }
         testRversion(pkgInfo, package, pkgpath)
         ## avoid any bootstrapping issues by these exemptions
         if(!package %in% c("datasets", "grDevices", "graphics", "methods",
@@ -332,10 +337,18 @@
                 stop(gettextf("package %s does not have a NAMESPACE and should be re-installed",
                               sQuote(package)), domain = NA)
         }
-        if (verbose && !newpackage)
-            warning(gettextf("package %s already present in search()",
-                             sQuote(package)), domain = NA)
+        if (!newpackage) {
+            if (verbose)
+                warning(gettextf("package %s already present in search()",
+                                 sQuote(package)), domain = NA)
+            if (!missing(version)) {
+                pver <- packageVersion(package)
+                if (compareVersion(as.character(pver), as.character(version)) < 0)
+                    stop("Version ", version, " of '", package, "' required, ",
+                         "but a lesser version ", pver, " is already loaded")
+            }
+        }
     }
     else if(!missing(help)) {
         if(!character.only)
Re: [Rd] Byte compilation of packages on CRAN
> On 11/04/2012 20:36, Matthew Dowle wrote:
>> [...]
>
> No. LazyLoad is distinct from byte compilation. All installed packages
> use lazy loading these days (for simplicity: a very few do not benefit
> from it as they use all their objects at startup).
>
> I wonder why you are not reading R's own documentation. 'Writing R
> Extensions' says
>
> 'The `LazyData' logical field controls whether the R datasets use
> lazy-loading. A `LazyLoad' field was used in versions prior to 2.14.0,
> but now is ignored.
>
> The `ByteCompile' logical field controls if the package code is
> byte-compiled on installation: the default is currently not to, so this
> may be useful for a package known to benefit particularly from
> byte-compilation (which can take quite a long time and increases the
> installed size of the package).'

Oops, somehow missed that. Thank you!

> Note that the majority of CRAN packages benefit very little from
> byte-compilation because almost all the time of their computations is
> spent in compiled code. And the increased size also may matter when the
> code is loaded into R.