Re: [Rd] Problems with S4 methods dispatching on `...` (aka dotsMethods)
Hi Michael,

thanks again for your patch! I've tested it and I'm happy to confirm that
`callNextMethod()` works with methods dispatching on `...`.

However, the second issue I reported still seems to be unresolved. Consider
the following toy example, where the `f()` calls differ in result depending
on whether the dispatch happens on a formal argument or the `...` argument.

f = function(x, ..., a = b) {
  b = "missing 'a'"
  print(a)
}

f()
## [1] missing 'a'

f(a = 1)
## [1] 1

setGeneric("f", signature = "x")

# works as the non-generic version
f()
## [1] missing 'a'

setGeneric("f", signature = "...")

# unexpectedly fails to find 'b'
f()
## Error in print(a) : object 'b' not found

Any chances of fixing this?

Cheers,
Andrzej

On Fri, Apr 21, 2017 at 11:40 AM, Andrzej Oleś wrote:
> Great, thanks Michael for your quick response!
>
> I started off with a question on SO because I was not sure whether this
> was an actual bug or I was just missing something obvious. I'm looking
> forward to the patch.
>
> Cheers,
> Andrzej
>
> On Thu, Apr 20, 2017 at 10:28 PM, Michael Lawrence <
> lawrence.mich...@gene.com> wrote:
>
>> Thanks for pointing out these issues. I have a fix that I will commit
>> soon.
>>
>> Btw, I would never have seen the post on Stack Overflow. It's best to
>> report bugs on the bugzilla.
>>
>> Michael
>>
>> On Thu, Apr 20, 2017 at 8:30 AM, Andrzej Oleś wrote:
>> > Hi all,
>> >
>> > I recently encountered some unexpected behavior with S4 generics
>> > dispatching on `...`, which I described in
>> > http://stackoverflow.com/questions/43499203/use-callnextmethod-with-dotsmethods
>> >
>> > TL;DR: `callNextMethod()` doesn't work in methods dispatching on `...`,
>> > and arguments of such methods are resolved differently than the
>> > arguments of methods dispatching on formal arguments.
>> >
>> > Could this indicate a potential problem with the implementation of the
>> > `...` dispatch?
>> >
>> > Cheers,
>> > Andrzej
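For readers following along, here is a minimal, self-contained sketch of the kind of `...` dispatch plus callNextMethod() combination the patch is about. The classes and the generic (A, B, g) are invented for illustration and are not the regression test that was added to R.

    ## dispatch on `...`: every argument matched to ... must extend the
    ## class named in the method signature
    setClass("A", representation(x = "numeric"))
    setClass("B", contains = "A")

    setGeneric("g", function(...) standardGeneric("g"))
    setMethod("g", "A", function(...) "method for A")
    setMethod("g", "B", function(...) c("method for B", callNextMethod()))

    ## before the fix, callNextMethod() failed inside a method selected
    ## via `...`; in a patched R-devel this should return
    ## [1] "method for B" "method for A"
    g(new("B", x = 1))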
Re: [Rd] tempdir() may be deleted during long-running R session
> Dirk Eddelbuettel
>     on Sun, 23 Apr 2017 09:15:18 -0500 writes:

> On 21 April 2017 at 10:34, frede...@ofb.net wrote:
> | Hi Mikko,
> |
> | I was bitten by this recently and I think some of the replies are
> | missing the point. As I understand it, the problem consists of these
> | elements:
> |
> | 1. When R starts, it creates a directory like /tmp/RtmpVIeFj4
> |
> | 2. Right after R starts I can create files in this directory with no
> |    error
> |
> | 3. After some hours or days I can no longer create files in this
> |    directory, because it has been deleted

> Nope. That is local to your system.

Correct. OTOH, Mikko and Frederik have a point in my view (below).

> Witness eg at my workstation:

> /tmp$ ls -ltGd Rtmp*
> drwx------ 3 edd 4096 Apr 21 16:12 Rtmp9K6bSN
> drwx------ 3 edd 4096 Apr 21 11:48 RtmpRRbaMP
> drwx------ 3 edd 4096 Apr 21 11:28 RtmpFlguFy
> drwx------ 3 edd 4096 Apr 20 13:06 RtmpWJDF3U
> drwx------ 3 edd 4096 Apr 18 15:58 RtmpY7ZIS1
> drwx------ 3 edd 4096 Apr 18 12:12 Rtmpzr9W0v
> drwx------ 2 edd 4096 Apr 16 16:02 RtmpeD27El
> drwx------ 2 edd 4096 Apr 16 15:57 Rtmp572FHk
> drwx------ 3 edd 4096 Apr 13 11:08 RtmpqP0JSf
> drwx------ 3 edd 4096 Apr 10 18:47 RtmpzRzyFb
> drwx------ 3 edd 4096 Apr  6 15:21 RtmpQhvAUb
> drwx------ 3 edd 4096 Apr  6 11:24 Rtmp2lFKPz
> drwx------ 3 edd 4096 Apr  5 20:57 RtmprCeWUS
> drwx------ 2 edd 4096 Apr  3 15:12 Rtmp8xviDl
> drwx------ 3 edd 4096 Mar 30 16:50 Rtmp8w9n5h
> drwx------ 3 edd 4096 Mar 28 11:33 RtmpjAg6iY
> drwx------ 2 edd 4096 Mar 28 09:26 RtmpYHSgZG
> drwx------ 2 edd 4096 Mar 27 11:21 Rtmp0gSV4e
> drwx------ 2 edd 4096 Mar 27 11:21 RtmpOnneiY
> drwx------ 2 edd 4096 Mar 27 11:17 RtmpIWeiTJ
> drwx------ 3 edd 4096 Mar 22 08:51 RtmpJkVsSJ
> drwx------ 3 edd 4096 Mar 21 10:33 Rtmp9a5KxL
> /tmp$

> Clearly still there after a month. I tend to have some longer-running R
> sessions in either Emacs/ESS or RStudio.

> So what I wrote in my last message here *clearly* applies to you: a local
> issue for which you have to take local action as R cannot know. You also
> have a choice of setting variables to affect this.

Thank you Dirk (and Brian). That is all true, and of course I have known
about this myself "forever" as well.

> | If R expected the directory to be deleted at random, and if we expect
> | users to call dir.create every time they access tempdir, then why did
> | R create the directory for us at the beginning of the session? That's
> | just setting people up to get weird bugs, which only appear in
> | difficult-to-reproduce situations (i.e. after the session has been
> | open for a long time).

> I disagree. R has been doing this many years, possibly two decades.

Yes, R has been doing this for a long time, including all the
configuration options with environment variables, and yes this is
sufficient "in principle".

> | I think before we dismiss this we should think about possible in-R
> | solutions and why they are not feasible.

Here Mikko and Frederik do have a point I think.

> | Are there any packages which
> | would break if a call to 'tempdir' automatically recreated this
> | directory? (Or would it be too much of a performance hit to have
> | 'tempdir' check and even just issue a warning when the directory is
> | found not to exist?)

> | Should we have a timer which periodically updates
> | the modification time of tempdir()? What do other long-running
> | programs do (e.g. screen, emacs)?

Valid questions, in my view. Before answering, let's try to see how hard
it would be to make the tempdir() function in R more versatile.

As I've found it is not at all hard to add an option which checks the
existence and, if the directory is no longer "valid", tries to recreate it
(and if it fails doing that it calls the famous R_Suicide(), as it does
when R starts up and tempdir() cannot be initialized correctly).

The proposed entry in NEWS is

    • tempdir(check=TRUE) recreates the tmpdir() if it is no longer valid.

and of course the default would be status quo, i.e., check = FALSE, and
once this is in R-devel, we (those who install R-devel) can experiment
with it.

Martin
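As a rough illustration of how the proposal would be used from user code (and of what can already be done by hand in released versions of R), here is a small sketch; the `check` argument only exists in a patched R-devel as described above, and ensure_tempdir() is a hypothetical helper, not part of base R:

    ## with the proposed feature (patched R-devel only):
    td <- tempdir(check = TRUE)   # recreated if an external cleaner removed it

    ## hypothetical stop-gap for current R releases:
    ensure_tempdir <- function() {
        td <- tempdir()
        if (!dir.exists(td))
            dir.create(td, recursive = TRUE, mode = "0700")
        td
    }
    ensure_tempdir()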
Re: [Rd] tempdir() may be deleted during long-running R session
On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler wrote:
> As I've found it is not at all hard to add an option which
> checks the existence and if the directory is no longer "valid",
> tries to recreate it (and if it fails doing that it calls the
> famous R_Suicide(), as it does when R starts up and tempdir()
> cannot be initialized correctly).

Perhaps this can also fix the problem with mcparallel deleting the
tempdir() when one of its children dies:

file.exists(tempdir())   # TRUE
parallel::mcparallel(q('no'))
file.exists(tempdir())   # FALSE
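Until the parallel side of this is sorted out, a purely user-level guard along the following lines can at least bring the directory back after a forked child has cleaned it up; this is a defensive sketch, not anything provided by the parallel package, and any temp files written earlier are of course still lost:

    library(parallel)
    mcparallel(q("no"))
    Sys.sleep(1)                        # give the child time to exit
    if (!dir.exists(tempdir()))         # the child removed the shared tempdir
        dir.create(tempdir(), recursive = TRUE, mode = "0700")
    file.exists(tempdir())              # TRUE again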
Re: [Rd] Problems with S4 methods dispatching on `...` (aka dotsMethods)
I attempted to fix it, and that example seems to work for me. It's
also a (passing) regression test in R. Are you sure you're using a new
enough R-devel?

On Tue, Apr 25, 2017 at 2:34 AM, Andrzej Oleś wrote:
> Hi Michael,
>
> thanks again for your patch! I've tested it and I'm happy to confirm that
> `callNextMethod()` works with methods dispatching on `...`.
>
> However, the second issue I reported still seems to be unresolved. Consider
> the following toy example, where the `f()` calls differ in result depending
> on whether the dispatch happens on a formal argument or the `...` argument.
>
> f = function(x, ..., a = b) {
>   b = "missing 'a'"
>   print(a)
> }
>
> f()
> ## [1] missing 'a'
>
> f(a = 1)
> ## [1] 1
>
> setGeneric("f", signature = "x")
>
> # works as the non-generic version
> f()
> ## [1] missing 'a'
>
> setGeneric("f", signature = "...")
>
> # unexpectedly fails to find 'b'
> f()
> ## Error in print(a) : object 'b' not found
>
> Any chances of fixing this?
>
> Cheers,
> Andrzej
>
> On Fri, Apr 21, 2017 at 11:40 AM, Andrzej Oleś wrote:
>> Great, thanks Michael for your quick response!
>>
>> I started off with a question on SO because I was not sure whether this
>> was an actual bug or I was just missing something obvious. I'm looking
>> forward to the patch.
>>
>> Cheers,
>> Andrzej
>>
>> On Thu, Apr 20, 2017 at 10:28 PM, Michael Lawrence wrote:
>>> Thanks for pointing out these issues. I have a fix that I will commit
>>> soon.
>>>
>>> Btw, I would never have seen the post on Stack Overflow. It's best to
>>> report bugs on the bugzilla.
>>>
>>> Michael
>>>
>>> On Thu, Apr 20, 2017 at 8:30 AM, Andrzej Oleś wrote:
>>> > Hi all,
>>> >
>>> > I recently encountered some unexpected behavior with S4 generics
>>> > dispatching on `...`, which I described in
>>> > http://stackoverflow.com/questions/43499203/use-callnextmethod-with-dotsmethods
>>> >
>>> > TL;DR: `callNextMethod()` doesn't work in methods dispatching on `...`,
>>> > and arguments of such methods are resolved differently than the
>>> > arguments of methods dispatching on formal arguments.
>>> >
>>> > Could this indicate a potential problem with the implementation of the
>>> > `...` dispatch?
>>> >
>>> > Cheers,
>>> > Andrzej
Re: [Rd] Problems with S4 methods dispatching on `...` (aka dotsMethods)
You're right, I must have mixed up my R versions when running the example,
as the problem seems to be resolved in R-devel.

Sorry for the noise and thanks again for fixing this.

Andrzej

On Tue, Apr 25, 2017 at 3:55 PM, Michael Lawrence wrote:
> I attempted to fix it, and that example seems to work for me. It's
> also a (passing) regression test in R. Are you sure you're using a new
> enough R-devel?
>
> On Tue, Apr 25, 2017 at 2:34 AM, Andrzej Oleś wrote:
> > Hi Michael,
> >
> > thanks again for your patch! I've tested it and I'm happy to confirm that
> > `callNextMethod()` works with methods dispatching on `...`.
> >
> > However, the second issue I reported still seems to be unresolved. Consider
> > the following toy example, where the `f()` calls differ in result depending
> > on whether the dispatch happens on a formal argument or the `...` argument.
> >
> > f = function(x, ..., a = b) {
> >   b = "missing 'a'"
> >   print(a)
> > }
> >
> > f()
> > ## [1] missing 'a'
> >
> > f(a = 1)
> > ## [1] 1
> >
> > setGeneric("f", signature = "x")
> >
> > # works as the non-generic version
> > f()
> > ## [1] missing 'a'
> >
> > setGeneric("f", signature = "...")
> >
> > # unexpectedly fails to find 'b'
> > f()
> > ## Error in print(a) : object 'b' not found
> >
> > Any chances of fixing this?
> >
> > Cheers,
> > Andrzej
> >
> > On Fri, Apr 21, 2017 at 11:40 AM, Andrzej Oleś wrote:
> >> Great, thanks Michael for your quick response!
> >>
> >> I started off with a question on SO because I was not sure whether this
> >> was an actual bug or I was just missing something obvious. I'm looking
> >> forward to the patch.
> >>
> >> Cheers,
> >> Andrzej
> >>
> >> On Thu, Apr 20, 2017 at 10:28 PM, Michael Lawrence wrote:
> >>> Thanks for pointing out these issues. I have a fix that I will commit
> >>> soon.
> >>>
> >>> Btw, I would never have seen the post on Stack Overflow. It's best to
> >>> report bugs on the bugzilla.
> >>>
> >>> Michael
> >>>
> >>> On Thu, Apr 20, 2017 at 8:30 AM, Andrzej Oleś wrote:
> >>> > Hi all,
> >>> >
> >>> > I recently encountered some unexpected behavior with S4 generics
> >>> > dispatching on `...`, which I described in
> >>> > http://stackoverflow.com/questions/43499203/use-callnextmethod-with-dotsmethods
> >>> >
> >>> > TL;DR: `callNextMethod()` doesn't work in methods dispatching on `...`,
> >>> > and arguments of such methods are resolved differently than the
> >>> > arguments of methods dispatching on formal arguments.
> >>> >
> >>> > Could this indicate a potential problem with the implementation of the
> >>> > `...` dispatch?
> >>> >
> >>> > Cheers,
> >>> > Andrzej
Re: [Rd] tempdir() may be deleted during long-running R session
Chiming in late on this thread...

> > | Are there any packages which
> > | would break if a call to 'tempdir' automatically recreated this
> > | directory? (Or would it be too much of a performance hit to have
> > | 'tempdir' check and even just issue a warning when the directory is
> > | found not to exist?)
>
> > | Should we have a timer which periodically updates
> > | the modification time of tempdir()? What do other long-running
> > | programs do (e.g. screen, emacs)?
>
> Valid questions, in my view. Before answering, let's try to see
> how hard it would be to make the tempdir() function in R more versatile.

Might this combination serve the purpose:

 * R session keeps an open handle on the tempdir it creates,
 * whatever tempdir harvesting cron job the user has be made sensitive
   enough not to delete open files (including open directories)

> As I've found it is not at all hard to add an option which
> checks the existence and if the directory is no longer "valid",
> tries to recreate it (and if it fails doing that it calls the
> famous R_Suicide(), as it does when R starts up and tempdir()
> cannot be initialized correctly).
>
> The proposed entry in NEWS is
>
>    • tempdir(check=TRUE) recreates the tmpdir() if it is no longer valid.
>
> and of course the default would be status quo, i.e., check = FALSE,
> and once this is in R-devel, we (those who install R-devel) can
> experiment with it.
>
> Martin
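R code cannot easily hold an open directory handle at the user level, but the related timer idea quoted above (periodically refreshing the modification time so that age-based cleaners skip the directory) can be sketched with base R's task callbacks. This is a hypothetical user-level workaround rather than a proposal for R itself, and cleaners that ignore timestamps will not be deterred by it:

    ## touch the session temp directory after every top-level command so
    ## that mtime-based tmp cleaners consider it recently used
    keep_tempdir_fresh <- function(expr, value, ok, visible) {
        td <- tempdir()
        if (dir.exists(td))
            Sys.setFileTime(td, Sys.time())
        TRUE    # keep the callback registered
    }
    addTaskCallback(keep_tempdir_fresh, name = "keep_tempdir_fresh")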
Re: [Rd] tempdir() may be deleted during long-running R session
> Jeroen Ooms
>     on Tue, 25 Apr 2017 15:05:51 +0200 writes:

> On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler wrote:
>> As I've found it is not at all hard to add an option
>> which checks the existence and if the directory is no
>> longer "valid", tries to recreate it (and if it fails
>> doing that it calls the famous R_Suicide(), as it does
>> when R starts up and tempdir() cannot be initialized
>> correctly).

> Perhaps this can also fix the problem with mcparallel
> deleting the tempdir() when one of its children dies:

> file.exists(tempdir())   # TRUE
> parallel::mcparallel(q('no'))
> file.exists(tempdir())   # FALSE

Thank you, Jeroen, for the extra example.

I now have committed the new feature... (completely back compatible: in
R's code tempdir() is not yet called with an argument and the default is
check = FALSE), actually in a "suicide-free" way ... which needed only
slightly more code.

In the worst case, one could save the R session by

    Sys.setenv(TEMPDIR = "")

if for instance /tmp/ suddenly became unwritable for the user.

What we could consider is making the default of 'check' settable by an
option, and experiment with setting the option to TRUE, so all such
problems would be auto-solved (says the incurable optimist ...).

Martin
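For those who want to try the committed feature in R-devel, a quick interactive check could look like the following; the cleanup is simulated by hand, and the second call is expected to recreate the (now empty) directory at the same path:

    td <- tempdir()
    unlink(td, recursive = TRUE)          # simulate an external tmp cleaner
    file.exists(td)                       # FALSE
    file.exists(tempdir(check = TRUE))    # TRUE: directory recreated, empty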
Re: [Rd] tempdir() may be deleted during long-running R session
Martin,

Thanks for your work on this.

One thing that seems to be missing from the conversation is that recreating
the temp directory will prevent future failures when R wants to write a
temp file, but the files will, of course, not be there. Any code written
assuming the contract is that the temporary directory, and thus temporary
files, will not be cleaned up before the R process exits (which was my
naive assumption before this thread, and is the behavior AFAICT on all the
systems I regularly use) will still break.

I'm not saying that's necessarily fixable (though the R keeping a permanent
pointer to a file in the dir suggested by Malcom might? fix it.), but I
would argue if it IS fixable, a fix that includes that would be preferable.

Best,
~G

On Tue, Apr 25, 2017 at 8:53 AM, Martin Maechler wrote:
> > Jeroen Ooms
> >     on Tue, 25 Apr 2017 15:05:51 +0200 writes:
>
> > On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler wrote:
> >> As I've found it is not at all hard to add an option
> >> which checks the existence and if the directory is no
> >> longer "valid", tries to recreate it (and if it fails
> >> doing that it calls the famous R_Suicide(), as it does
> >> when R starts up and tempdir() cannot be initialized
> >> correctly).
>
> > Perhaps this can also fix the problem with mcparallel
> > deleting the tempdir() when one of its children dies:
>
> >> file.exists(tempdir())   # TRUE
> >> parallel::mcparallel(q('no'))
> >> file.exists(tempdir())   # FALSE
>
> Thank you, Jeroen, for the extra example.
>
> I now have committed the new feature... (completely back
> compatible: in R's code tempdir() is not yet called with an
> argument and the default is check = FALSE),
> actually in a "suicide-free" way ... which needed only slightly
> more code.
>
> In the worst case, one could save the R session by
>     Sys.setenv(TEMPDIR = "")
> if for instance /tmp/ suddenly became unwritable for the user.
>
> What we could consider is making the default of 'check' settable
> by an option, and experiment with setting the option to TRUE, so
> all such problems would be auto-solved (says the incurable optimist ...).
>
> Martin

--
Gabriel Becker, PhD
Associate Scientist (Bioinformatics)
Genentech Research
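Concretely, the breakage Gabriel describes is at the level of individual files, so even with a recreated directory any code that parks intermediate results under tempdir() needs its own guard. A hypothetical sketch, where recompute() merely stands in for whatever produced the cached object:

    recompute <- function() Sys.time()          # placeholder for real work
    cache_file <- tempfile(fileext = ".rds")
    saveRDS(recompute(), cache_file)

    ## ... possibly days later in the same session ...
    result <- if (file.exists(cache_file)) {
        readRDS(cache_file)
    } else {
        recompute()   # the tmp cleaner may have removed the file meanwhile
    }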
Re: [Rd] tempdir() may be deleted during long-running R session
> Martin,
>
> Thanks for your work on this.
>
> One thing that seems to be missing from the conversation is that recreating
> the temp directory will prevent future failures when R wants to write a
> temp file, but the files will, of course, not be there. Any code written
> assuming the contract is that the temporary directory, and thus temporary
> files, will not be cleaned up before the R process exits (which was my
> naive assumption before this thread, and is the behavior AFAICT on all the
> systems I regularly use) will still break.

That is the kind of scenario I was hoping to obviate with my suggestion...

> I'm not saying that's necessarily fixable (though the R keeping a permanent
> pointer to a file in the dir suggested by Malcom might? fix it.),

(and, FWIW, that's "Malcolm" with two "l"s. I think all those missing "l"s
are flattened out versions of all the extra close parens I typed in the 80s
that somehow got lost on the nets...)))

> but I
> would argue if it IS fixable, a fix that includes that would be preferable.

Agreed!

>
> Best,
> ~G
>
> On Tue, Apr 25, 2017 at 8:53 AM, Martin Maechler wrote:
> > > Jeroen Ooms
> > >     on Tue, 25 Apr 2017 15:05:51 +0200 writes:
> >
> > > On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler wrote:
> > >> As I've found it is not at all hard to add an option
> > >> which checks the existence and if the directory is no
> > >> longer "valid", tries to recreate it (and if it fails
> > >> doing that it calls the famous R_Suicide(), as it does
> > >> when R starts up and tempdir() cannot be initialized
> > >> correctly).
> >
> > > Perhaps this can also fix the problem with mcparallel
> > > deleting the tempdir() when one of its children dies:
> >
> > >> file.exists(tempdir())   # TRUE
> > >> parallel::mcparallel(q('no'))
> > >> file.exists(tempdir())   # FALSE
> >
> > Thank you, Jeroen, for the extra example.
> >
> > I now have committed the new feature... (completely back
> > compatible: in R's code tempdir() is not yet called with an
> > argument and the default is check = FALSE),
> > actually in a "suicide-free" way ... which needed only slightly
> > more code.
> >
> > In the worst case, one could save the R session by
> >     Sys.setenv(TEMPDIR = "")
> > if for instance /tmp/ suddenly became unwritable for the user.
> >
> > What we could consider is making the default of 'check' settable
> > by an option, and experiment with setting the option to TRUE, so
> > all such problems would be auto-solved (says the incurable optimist ...).
> >
> > Martin
>
> --
> Gabriel Becker, PhD
> Associate Scientist (Bioinformatics)
> Genentech Research
[Rd] Generate reproducible output independently of the build path
(please keep me CCd, I am not subscribed)

Dear R developers,

At the Reproducible Builds project we've been trying to get build tools
and packages to generate bit-for-bit identical output, even under
different build paths. This is beneficial for users because they can more
easily compare their builds with others, as well as for other reasons.

At the moment about 400 out of 26000 Debian packages are unreproducible
due to how R writes paths.rds files as well as RDB files. An example diff
is here:

https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope-results/r-cran-tensor.html

I've attached a patch (applies to both 3.3.3 and 3.4) that fixes this
issue; however I know it's not perfect and would welcome feedback on how
to make it acceptable to the R project. For example, I've tried to limit
the effects of the patch only to the RDB loading/saving code, but I'm not
familiar with the codebase, so it would be good if someone could verify
that I did this correctly.

Then, ideally we would also add some tests to ensure that
unreproducibility does not crop back in "by accident". R code heavily
relies on absolute paths, and I went down several dead ends chasing and
editing variables containing absolute paths before I finally managed to
get this working patch, so I suspect that without specific reproducibility
tests this issue might recur in the future.

I've checked that the existing tests still pass with this patch applied to
the Debian package. I have some errors like:

 - Warning message:
 - In utils::packageDescription(basename(dir), dirname(dir)) :
 -   no package 'cluster' was found
 -* checking R files for non-ASCII characters ... OK
 -* checking R files for syntax errors ... OK
 :* checking whether the package can be loaded ... ERROR

but I also get the same errors when I build the unpatched Debian package.
And if I run e.g. `Rscript -e 'library(cluster)'` with a patched Rscript,
there is no error and the exit code is 0.

Ximin

--
GPG: ed25519/56034877E1F87C35
GPG: rsa4096/1318EFAC5FBBDBCE
https://github.com/infinity0/pubkeys.git

diff -u r-base-3.3.3/debian/changelog r-base-3.3.3/debian/changelog
--- r-base-3.3.3.orig/src/library/base/R/namespace.R
+++ r-base-3.3.3/src/library/base/R/namespace.R
@@ -190,7 +190,8 @@
 loadNamespace <- function (package, lib.loc = NULL,
                            keep.source = getOption("keep.source.pkgs"),
-                           partial = FALSE, versionCheck = NULL)
+                           partial = FALSE, versionCheck = NULL,
+                           relpath = FALSE)
 {
     libpath <- attr(package, "LibPath")
     package <- as.character(package)[[1L]]
@@ -246,9 +247,9 @@
     attr(dimpenv, "name") <- paste("lazydata", name, sep = ":")
     setNamespaceInfo(env, "lazydata", dimpenv)
     setNamespaceInfo(env, "imports", list("base" = TRUE))
-    ## this should be an absolute path
-    setNamespaceInfo(env, "path",
-                     normalizePath(file.path(lib, name), "/", TRUE))
+    path <- if (relpath) file.path(".", name)
+            else normalizePath(file.path(lib, name), "/", TRUE)
+    setNamespaceInfo(env, "path", path)
     setNamespaceInfo(env, "dynlibs", NULL)
     setNamespaceInfo(env, "S3methods", matrix(NA_character_, 0L, 3L))
     env$.__S3MethodsTable__. <-
--- r-base-3.3.3.orig/src/library/tools/R/admin.R
+++ r-base-3.3.3/src/library/tools/R/admin.R
@@ -785,7 +785,6 @@
 .install_package_Rd_objects <-
 function(dir, outDir, encoding = "unknown")
 {
-    dir <- file_path_as_absolute(dir)
     mandir <- file.path(dir, "man")
     manfiles <- if(!dir.exists(mandir)) character()
                 else list_files_with_type(mandir, "docs")
--- r-base-3.3.3.orig/src/library/tools/R/makeLazyLoad.R
+++ r-base-3.3.3/src/library/tools/R/makeLazyLoad.R
@@ -29,7 +29,7 @@
     if (packageHasNamespace(package, dirname(pkgpath))) {
         if (! is.null(.getNamespace(as.name(package))))
             stop("namespace must not be already loaded")
-        ns <- suppressPackageStartupMessages(loadNamespace(package, lib.loc, keep.source, partial = TRUE))
+        ns <- suppressPackageStartupMessages(loadNamespace(package, lib.loc, keep.source, partial = TRUE, relpath = TRUE))
         makeLazyLoadDB(ns, dbbase, compress = compress)
     }
     else
--- r-base-3.3.3.orig/src/library/tools/R/parseRd.R
+++ r-base-3.3.3/src/library/tools/R/parseRd.R
@@ -62,6 +62,7 @@
     basename <- basename(srcfile$filename)
     srcfile$encoding <- encoding
     srcfile$Enc <- "UTF-8"
+    srcfile$wd <- "."
     if (encoding == "ASCII") {
         if (any(is.na(iconv(lines, "", "ASCII"
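For anyone who wants to reproduce the comparison locally without diffoscope, a minimal bit-for-bit check of the serialized databases from two independent builds could look like this; the library paths are placeholders, and which files actually differ (e.g. R/<pkg>.rdb, help/paths.rds) will depend on the package:

    lib1 <- "/path/to/build1/site-library"    # placeholder: first build
    lib2 <- "/path/to/build2/site-library"    # placeholder: second build
    pkg  <- "tensor"
    rel  <- c(file.path("R", paste0(pkg, ".rdb")),
              file.path("help", "paths.rds"))
    identical(unname(tools::md5sum(file.path(lib1, pkg, rel))),
              unname(tools::md5sum(file.path(lib2, pkg, rel))))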
[Rd] R_CMethodDef incompatibility (affects R_registerRoutines)
I recently noticed a change between R-3.3.3 and R-3.4.0 in the definition
of the R_CMethodDef struct:

  typedef struct {
      const char *name;
      DL_FUNC fun;
      int numArgs;
      R_NativePrimitiveArgType *types;
-     R_NativeArgStyle *styles;
  } R_CMethodDef;

I suspect this is the reason that packages installed by R-3.4.0 and loaded
into R-3.3.3 will crash the latter if the package registers routines to be
called from .C or .Fortran:

  % R-3.3.3 --quiet
  > library(sp, lib.loc=c("/home/R/R-3.4.0/lib64/R/site-library"))

   *** caught segfault ***
  address 0x7, cause 'memory not mapped'

  Traceback:
   1: dyn.load(file, DLLpath = DLLpath, ...)
   2: library.dynam(lib, package, package.lib)
   3: loadNamespace(package, lib.loc)
   4: doTryCatch(return(expr), name, parentenv, handler)
   5: tryCatchOne(expr, names, parentenv, handlers[[1L]])
   6: tryCatchList(expr, classes, parentenv, handlers)
   7: tryCatch(expr, error = function(e) {
          call <- conditionCall(e)
          if (!is.null(call)) {
              if (identical(call[[1L]], quote(doTryCatch)))
                  call <- sys.call(-4L)
              dcall <- deparse(call)[1L]
              prefix <- paste("Error in", dcall, ": ")
              LONG <- 75L
              msg <- conditionMessage(e)
              sm <- strsplit(msg, "\n")[[1L]]
              w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w")
              if (is.na(w))
                  w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b")
              if (w > LONG)
                  prefix <- paste0(prefix, "\n  ")
          }
          else prefix <- "Error : "
          msg <- paste0(prefix, conditionMessage(e), "\n")
          .Internal(seterrmessage(msg[1L]))
          if (!silent && identical(getOption("show.error.messages"), TRUE)) {
              cat(msg, file = stderr())
              .Internal(printDeferredWarnings())
          }
          invisible(structure(msg, class = "try-error", condition = e))
      })
   8: try({
          attr(package, "LibPath") <- which.lib.loc
          ns <- loadNamespace(package, lib.loc)
          env <- attachNamespace(ns, pos = pos, deps)
      })
   9: library(sp, lib.loc = c("/home/R/R-3.4.0/lib64/R/site-library"))

  Possible actions:
  1: abort (with core dump, if enabled)
  2: normal R exit
  3: exit R without saving workspace
  4: exit R saving workspace
  Selection:

Was removing the R_NativeArgStyle field intended?

Bill Dunlap
TIBCO Software
wdunlap tibco.com
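A quick way to spot this kind of mismatch from the R side, before loading anything from a foreign library tree, is to look at the 'Built' field of the installed package; the library path below is simply the one from the example above:

    desc <- packageDescription("sp",
                lib.loc = "/home/R/R-3.4.0/lib64/R/site-library")
    desc$Built
    ## something like "R 3.4.0; x86_64-pc-linux-gnu; ...", i.e. built by a
    ## newer R than the running R-3.3.3, so binary layout details such as
    ## the registration structs may not match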
[Rd] R on OpenHub
Does anyone want to manage the record for R on OpenHub?

OpenHub is a site that records metrics for open source projects. At some
point a record for R was created:

  https://www.openhub.net/p/r_project

but there's no manager listed. OpenHub says:

"""
 * Only someone who works on the project and has a close connection to it
   should apply. Ideally the owner, founder, lead developer, or release
   manager.
"""

So not me. Sounds more like it needs to be someone in R-core or an R
Foundation luminary.

OpenHub is used by the OSGeo Live DVD to generate metrics for all the
projects on the DVD, and I've just been asked by the OSGeo Live team to
inquire about this.

If nobody wants to do it, no biggie, I think it just means the project
information will be less accurate.

Barry
Re: [Rd] tempdir() may be deleted during long-running R session
On Tue, Apr 25, 2017 at 02:41:58PM +, Cook, Malcolm wrote:
> Might this combination serve the purpose:
> * R session keeps an open handle on the tempdir it creates,
> * whatever tempdir harvesting cron job the user has be made sensitive
>   enough not to delete open files (including open directories)

Good suggestion, but it doesn't work with the (increasingly popular)
"Systemd":

  $ mkdir /tmp/somedir
  $ touch -d "12 days ago" /tmp/somedir/
  $ cd /tmp/somedir/
  $ sudo systemd-tmpfiles --clean
  $ ls /tmp/somedir/
  ls: cannot access '/tmp/somedir/': No such file or directory

I would advocate just changing 'tempfile()' so that it recreates the
directory where the file is (the "dirname") before returning the file
path. This would have fixed the issue I ran into. Changing 'tempdir()' to
recreate the directory is another option.

Thanks,

Frederick
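The tempfile()-side change Frederick advocates can be prototyped in user code while the discussion continues; tempfile2() below is a hypothetical wrapper, not part of base R:

    tempfile2 <- function(...) {
        path <- tempfile(...)
        d <- dirname(path)
        if (!dir.exists(d))                 # tempdir() may have been cleaned up
            dir.create(d, recursive = TRUE, mode = "0700")
        path
    }
    writeLines("still works", tempfile2(fileext = ".txt"))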
Re: [Rd] tempdir() may be deleted during long-running R session
Hi Gabriel,

Thanks for asking for a better solution, as far as actually preventing
temporary files from getting deleted in the first place.

I still don't know very much about other peoples' distributions, but Arch
uses Systemd, which is the culprit on my system. Systemd's 'tmpfiles.d(5)'
man page says we can create configuration files in locations like

  /usr/lib/tmpfiles.d/*.conf
  /etc/tmpfiles.d/*.conf

which control when temporary files are deleted. There is an 'x' specifier
which accepts glob paths and can protect everything in /tmp/Rtmp* ...:

  $ mkdir /tmp/Rtmpaoeu
  $ touch -d "12 days ago" /tmp/Rtmpaoeu
  $ sudo systemd-tmpfiles --clean
  $ ls /tmp/Rtmpaoeu
  ls: cannot access '/tmp/Rtmpaoeu': No such file or directory
  $ sudo sh -c "echo 'x /tmp/Rtmp*' > /etc/tmpfiles.d/R.conf"
  $ mkdir /tmp/Rtmpaoeu
  $ touch -d "12 days ago" /tmp/Rtmpaoeu
  $ sudo systemd-tmpfiles --clean
  $ ls /tmp/Rtmpaoeu
  (still there)

I guess installing such a file is something that would be done by the
various distribution-specific R packages. Even though I run R from a
home-directory compiled version, I have my distribution's binary package
installed globally, and so I would get the benefit of this protection from
the distribution package.

If this sounds like it makes sense then I can ask the Arch package
maintainer to do it. Of course I don't need permission, but it would be
good to hear if I'm missing or forgetting something. Based on what other
packages are doing, the file should probably be named:

  /usr/lib/tmpfiles.d/R.conf

and contain:

  x /tmp/Rtmp*

(For example on my system I have stuff like this owned by various packages:

  $ pacman -Qo /usr/lib/tmpfiles.d/*
  /usr/lib/tmpfiles.d/apache.conf is owned by apache 2.4.25-1
  /usr/lib/tmpfiles.d/bind.conf is owned by bind 9.11.0.P3-3
  /usr/lib/tmpfiles.d/colord.conf is owned by colord 1.3.4-1
  /usr/lib/tmpfiles.d/etc.conf is owned by systemd 232-8
  /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.30.3-1
  ...
)

Thanks!

Frederick

On Tue, Apr 25, 2017 at 09:03:01AM -0700, Gabriel Becker wrote:
> Martin,
>
> Thanks for your work on this.
>
> One thing that seems to be missing from the conversation is that recreating
> the temp directory will prevent future failures when R wants to write a
> temp file, but the files will, of course, not be there. Any code written
> assuming the contract is that the temporary directory, and thus temporary
> files, will not be cleaned up before the R process exits (which was my
> naive assumption before this thread, and is the behavior AFAICT on all the
> systems I regularly use) will still break.
>
> I'm not saying that's necessarily fixable (though the R keeping a permanent
> pointer to a file in the dir suggested by Malcom might? fix it.), but I
> would argue if it IS fixable, a fix that includes that would be preferable.
>
> Best,
> ~G
>
> On Tue, Apr 25, 2017 at 8:53 AM, Martin Maechler wrote:
> > > Jeroen Ooms
> > >     on Tue, 25 Apr 2017 15:05:51 +0200 writes:
> >
> > > On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler wrote:
> > >> As I've found it is not at all hard to add an option
> > >> which checks the existence and if the directory is no
> > >> longer "valid", tries to recreate it (and if it fails
> > >> doing that it calls the famous R_Suicide(), as it does
> > >> when R starts up and tempdir() cannot be initialized
> > >> correctly).
> >
> > > Perhaps this can also fix the problem with mcparallel
> > > deleting the tempdir() when one of its children dies:
> >
> > >> file.exists(tempdir())   # TRUE
> > >> parallel::mcparallel(q('no'))
> > >> file.exists(tempdir())   # FALSE
> >
> > Thank you, Jeroen, for the extra example.
> >
> > I now have committed the new feature... (completely back
> > compatible: in R's code tempdir() is not yet called with an
> > argument and the default is check = FALSE),
> > actually in a "suicide-free" way ... which needed only slightly
> > more code.
> >
> > In the worst case, one could save the R session by
> >     Sys.setenv(TEMPDIR = "")
> > if for instance /tmp/ suddenly became unwritable for the user.
> >
> > What we could consider is making the default of 'check' settable
> > by an option, and experiment with setting the option to TRUE, so
> > all such problems would be auto-solved (says the incurable optimist ...).
> >
> > Martin
>
> --
> Gabriel Becker, PhD
> Associate Scientist (Bioinformatics)
> Genentech Research