[Rd] topenv of emptyenv
I was surprised just now to find out that `topenv(emptyenv())` equals … `.GlobalEnv`, not `emptyenv()`. From my understanding of the description of `topenv`, it should walk up the chain of enclosing environments (as if by calling `e = parent.env(e)` repeatedly; in fact, that is almost exactly its implementation in envir.c) until it hits a top level. However, `emptyenv()` has no enclosing environment, so I thought it should be its own top-level environment. Unfortunately the documentation on environments is relatively sparse, and the R Internals document doesn’t mention top-level environments.

Concretely, I encountered this in the following code, which signals an error if `env` is the empty environment:

```
while (! some_complex_condition(env) && ! identical(env, topenv(env))) {
    env = parent.env(env)
}
```

Of course there’s a trivial workaround (add an identity check for `emptyenv()` in the while-loop condition), but it got me wondering whether there’s a rationale for this result or whether it’s “accidental”/arbitrary: the C `topenv` implementation defaults to returning R_GlobalEnv for an empty environment. Is this effect actually useful (and used anywhere)?

This is in R 3.4.4, but I can’t find an indication that this behaviour was ever changed.

Cheers

--
Konrad Rudolph

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
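For what it’s worth, the loop can be made robust without relying on `topenv`’s fallback by guarding against the empty environment explicitly. A minimal sketch (the helper name and the `condition` predicate are made up for illustration):

```r
# Walk towards the top level, stopping early at emptyenv(), whose
# parent cannot be taken. All names here are illustrative.
walk_to_top = function (env, condition) {
    while (! condition(env) &&
           ! identical(env, emptyenv()) &&   # guard: emptyenv() has no parent
           ! identical(env, topenv(env))) {
        env = parent.env(env)
    }
    env
}
```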
Re: [Rd] topenv of emptyenv
On Thu, Mar 28, 2019 at 11:42 AM Martin Maechler wrote:

> So from that definition it must return .GlobalEnv in this
> particular case.

Indeed, that makes sense. Apparently the note wasn’t explicit enough for me to make the connection.

--
Konrad Rudolph
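The behaviour under discussion can be checked in one line:

```r
# Per the definition Martin cites, topenv() falls back to the global
# environment when the walk runs out of enclosing environments:
identical(topenv(emptyenv()), globalenv())  # TRUE (as observed in R 3.4.4)
```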
[Rd] S3 lookup rules changed in R 3.6.1
tl;dr: S3 lookup no longer works in custom non-namespace environments as of R 3.6.1. Is this a bug?

I am implementing S3 dispatch for generic methods in environments that are not packages. I am trying to emulate the R package namespace mechanism by having a “namespace” environment that defines generics and methods, but only exposes the generics themselves, not the methods.

To make S3 lookup work when using the generics, I am using `registerS3method`. While this function itself has no extensive documentation, the documentation of `UseMethod` contains this relevant passage:

> Namespaces can register methods for generic functions. To support this,
> ‘UseMethod’ and ‘NextMethod’ search for methods in two places: in the
> environment in which the generic function is called, and in the registration
> data base for the environment in which the generic is defined (typically a
> namespace). So methods for a generic function need to be available in the
> environment of the call to the generic, or they must be registered. (It does
> not matter whether they are visible in the environment in which the generic is
> defined.) As from R 3.5.0, the registration data base is searched after the
> top level environment (see ‘topenv’) of the calling environment (but before
> the parents of the top level environment).

This used to work, but it stopped working in R 3.6.1, and I cannot figure out (a) why, and (b) how to fix it. Unfortunately I am unable to find the relevant information by reading the R source code, even when “diff”ing what seem to be the only even remotely relevant changes [1].

The R NEWS merely lists the following changes for R 3.6.0:

> * S3method() directives in ‘NAMESPACE’ can now also be used to perform delayed
>   S3 method registration.
> […]
> * Method dispatch uses more relevant environments when looking up class
>   definitions.

Unfortunately it is not clear to me what exactly this means.
Here’s a minimal example that works under R 3.5.3 but breaks under R 3.6.1 (I don’t know about 3.6.0).

```
# Define “package namespace”:
ns = new.env(parent = .BaseNamespaceEnv)
local(envir = ns, {
    test = function (x) UseMethod('test')
    test.default = function (x) message('test.default')
    test.foo = function (x) message('test.foo')

    .__S3MethodsTable__. = new.env(parent = .BaseNamespaceEnv)
    .__S3MethodsTable__.$test.default = test.default
    .__S3MethodsTable__.$test.foo = test.foo

    # Or, equivalently:
    # registerS3method('test', 'default', test.default)
    # registerS3method('test', 'foo', test.foo)
})

# Expose generic publicly:
test = ns$test

# Usage:
test(1)
test(structure(1, class = 'foo'))
```

Output in R up to 3.5.3:

```
test.default
test.foo
```

Output in R 3.6.1:

```
Error in UseMethod("test") :
  no applicable method for 'test' applied to an object of class "c('double', 'numeric')"
```

It’s worth noting that the output of `.S3methods` is the same for all R versions, and from my understanding of its output, this *should* indicate that S3 lookup should behave identically, too. Furthermore, lookup via `getS3method` succeeds in all R versions, and (again, in my understanding) the logic of this function should be identical to the logic of R’s internal S3 dispatch:

```
getS3method('test', 'default')(1)
getS3method('test', 'foo')(1)
```

Conversely, specialising an existing generic from a loaded package works. E.g.:

```
local(envir = ns, {
    print.foo = function (x) message('print.foo')
    registerS3method('print', 'foo', print.foo)
})

print(structure(1, class = 'foo'))
```

This prints “print.foo” in all R versions, as expected.

So my question is: Why do the `test(…)` calls in R 3.6.1 no longer trigger S3 method lookup in the generic function’s environment? Is this behaviour by design or is it a bug? If it’s by design, why does `getS3method` still use the old behaviour?
And, most importantly, how can I fix my definition of `ns` to make S3 dispatch for non-exposed methods work again?

… actually, I just found a workaround:

```
ns$.packageName = 'not important'
```

This marks `ns` as a package namespace. To me, the documentation seems to imply that this shouldn’t be necessary (and it previously wasn’t). Furthermore, the code for `registerS3method` explicitly supports non-package namespace environments. Unfortunately this workaround is not satisfactory, because pretending that the environment is a package namespace, when it really isn’t, might break other things.

[1] See r75273; there’s also r74625, which changes the actual lookup mechanism used by `UseMethod`, but that seems even less relevant, because it is disabled unless a specific environment variable is set.

--
Konrad Rudolph
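For completeness, here is a self-contained sketch of the workaround applied to the minimal example from earlier in the thread (the `.packageName` value is arbitrary; it merely marks `ns` as a namespace):

```r
# “Namespace” environment with a generic and registered methods, plus the
# .packageName workaround for R >= 3.6.1.
ns = new.env(parent = .BaseNamespaceEnv)
local(envir = ns, {
    test = function (x) UseMethod('test')
    test.default = function (x) message('test.default')
    test.foo = function (x) message('test.foo')
    registerS3method('test', 'default', test.default)
    registerS3method('test', 'foo', test.foo)
})
ns$.packageName = 'not important'  # arbitrary value; marks `ns` as a namespace

test = ns$test
test(1)                            # should again print “test.default”
test(structure(1, class = 'foo'))  # should again print “test.foo”
```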
Re: [Rd] S3 lookup rules changed in R 3.6.1
Oh, I had missed that that code path is now enabled by default.

It’s worth noting that the commented-out test in that commit also still succeeds if invoked via `getS3method`. So at the very least there’s now an inconsistency between the lookup performed by R internally (via `UseMethod`) and that of `getS3method`, which is probably unintentional.

I see how the change is beneficial in preventing surprising behaviour in a corner case. Unfortunately it also breaks at least one published package [1], and if I understand correctly it no longer conforms to the documented behaviour (quoted in my initial message), which even explicitly mentions non-namespace environments.

[1] https://github.com/klmr/modules/issues/147

On Wed, Oct 9, 2019 at 11:23 PM Duncan Murdoch wrote:

> On 09/10/2019 3:22 p.m., Konrad Rudolph wrote:
> > tl;dr: S3 lookup no longer works in custom non-namespace environments as of
> > R 3.6.1. Is this a bug?
>
> I don't know whether this was intentional or not, but a binary search
> through the svn commits finds that the errors started in this one:
>
> r75127 | hornik | 2018-08-13 09:58:47 -0400 (Mon, 13 Aug 2018) | 2 lines
> Changed paths:
>    M /trunk/src/main/objects.c
>    M /trunk/tests/reg-tests-1a.R
>
> Have S3 methods lookup by default look for the S3 registry in the topenv
> of the generic.
>
> Duncan Murdoch
[Rd] Operator precedence of =, <- and ?
The documentation (help("Syntax")) gives the operator precedence of the assignment operators and help, from highest to lowest, as:

    ‘<- <<-’  assignment (right to left)
    ‘=’       assignment (right to left)
    ‘?’       help (unary and binary)

If I understand correctly, this implies that `a = b ? c` and `a <- b ? c` should parse identically. Or, if using the unary version, `?a = b` and `?a <- b` should parse identically. However, as noted by Antoine Fabri on Stack Overflow [1], they have different parses (on R 3.5.3 and 3.6.1, at least), which puts the precedence of `?` *between* that of `<-` and `=`.

In fact, src/main/gram.y [2] appears to show the same precedence table as the documentation; presumably the parser at some point rewrites the parse tree manually. At any rate, should this be fixed in the documentation? Or is the documentation “correct”, and there’s a bug in the parser (in some versions of R)?

[1] https://stackoverflow.com/questions/1741820/51564252#comment105506343_51564252
[2] https://github.com/wch/r-source/blob/386c3a93cbcaf95017fa6ae52453530fb95149f4/src/main/gram.y#L384-L390

--
Konrad Rudolph
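One way to check which grouping the parser actually produces, rather than trusting the documented table, is to deparse the parse tree of each expression (the helper name below is made up for illustration; no particular output is claimed, since the result is exactly what is in dispute):

```r
# Print how the parser groups an expression. If `?` really sits between
# `<-` and `=` in precedence, the two deparsed forms will differ.
show_parse = function (src) {
    expr = parse(text = src, keep.source = FALSE)[[1]]
    cat(src, ' parses as: ', paste(deparse(expr), collapse = ' '), '\n')
}
show_parse('?a <- b')
show_parse('?a = b')
```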
[Rd] How do I reliably and efficiently hash a function?
I’ve got the following scenario: I need to store information about an R function, and retrieve it at a later point. In other programming languages I’d implement this using a dictionary with the functions as keys. In R, I’d usually use `attr(f, 'some-name')`. However, for my purposes I do not want to use `attr`, because the information that I want to store is an implementation detail that should be hidden from the user of the function (and, just as importantly, it shouldn’t clutter the display when the function is printed on the console).

`comment` would be almost perfect, since it’s hidden from the output when printing a function — unfortunately, the information I’m storing is not a character string (it’s in fact an environment), so I cannot use `comment`.

How can this be achieved?

For reference, I’ve considered the following two alternatives:

1. Use `attr`, and override `print.function` to not print my attribute. However, I’m wary of overriding a core function just to implement such a little thing, and overriding this function would obviously clash with other overrides, if somebody else happens to have a similarly harebrained idea.

2. Use C++ to retrieve the SEXP of the body of the CLOSXP that represents a function, and use that as a key in a dictionary. I *think* that this robustly and efficiently identifies functions in R. However, this relies quite heavily on R’s internal implementation details, and in particular on the fact that the GC will not move objects around in memory. The current GC doesn’t do this, but Gábor Csárdi rightfully pointed out to me that this might change.

On the chance that I’m trying to solve the wrong Y of an X/Y problem, the full context for the above problem is explained in [1].
In a nutshell, I am hooking a new environment into a function’s chain of enclosing environments, by re-assigning the parent environment of the function’s enclosure (naughty, I know):

```
parent.env(my_new_env) = parent.env(environment(f))
parent.env(environment(f)) = my_new_env
```

This is done so that the function `f` finds objects defined inside that environment without having to attach it globally. However, for bookkeeping purposes I need to preserve the original parent environment — hence the question.

[1]: https://github.com/klmr/modules/issues/66
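A self-contained sketch of the rewiring together with the bookkeeping in question; all names other than the base functions are made up for illustration:

```r
# Inject an environment between `f`’s enclosure and its original parent,
# and record the original parent inside the injected environment itself.
f = local({x = 1; function () x + y})
my_new_env = new.env()
my_new_env$y = 10
my_new_env$.original_parent = parent.env(environment(f))  # bookkeeping
parent.env(my_new_env) = parent.env(environment(f))
parent.env(environment(f)) = my_new_env
f()  # now finds `y` in the injected environment
```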
Re: [Rd] How do I reliably and efficiently hash a function?
Thanks. I know about `local` (and functions within functions). In fact, the functions are *already* defined inside their own environment (same as what `local` does). But unfortunately this doesn’t solve my problem, since the functions’ parent environment gets changed during the function’s execution, and I need to retrieve my stored data *after* that point, inside the function.

I’ve tried to create a more exact example of what’s going on — unfortunately it’s really hard to simplify the problem without losing crucial details. Since the code is just a tad too long, I’ve posted it as a Github Gist: https://gist.github.com/klmr/53c9400e832d7fd9ea5c

The function `f` in the example calls `get_meta()` twice, and gets different results before and after calling an ancillary function that modifies the function’s `parent.env`. I want it to return the same information (“original”) both times.

On Fri, Dec 11, 2015 at 10:49 AM, Mark van der Loo wrote:

> In addition to what Charles wrote, you can also use 'local' if you don't
> want a function that creates another function.
>
> > f <- local({info <- 10; function(x) x + info})
> > f(3)
> [1] 13
>
> best,
> Mark
>
> Op vr 11 dec. 2015 om 03:27 schreef Charles C. Berry:
> >
> > See
> >
> > https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Scope
> >
> > For example, these commands:
> >
> > foo <- function() {info <- "abc"; function(x) x+1}
> > func <- foo()
> > find("func")
> > func(1)
> > ls(envir=environment(func))
> > get("info", environment(func))
> > func
> >
> > Yield these printed results:
> >
> > : [1] ".GlobalEnv"
> > : [1] 2
> > : [1] "info"
> > : [1] "abc"
> > : function (x)
> > : x + 1
> >
> > The environment of the function gets printed, but 'info' and other
> > objects that might exist in that environment do not get printed unless
> > you explicitly call for them.
> >
> > HTH,
> >
> > Chuck
> >
> > p.s. 'environment(func)$info' also works.
Re: [Rd] How do I reliably and efficiently hash a function?
@Jeroen, here’s what I’m solving by hacking the parent environment chain: I’m essentially re-implementing `base::attach` — except that I’m attaching objects *locally* in the function instead of globally. I don’t think this can be done in any way except by modifying the parent environment chain.

Incidentally, package namespaces do largely the same thing. The difference is that they only need to do it *once* (when loaded), and subsequent function calls do not modify this chain.
Re: [Rd] How do I reliably and efficiently hash a function?
On Fri, Dec 11, 2015 at 1:26 PM, Hadley Wickham wrote:

> Why not use your own S3 class?

Yes, I’ll probably do that. Thanks. I honestly don’t know why I hadn’t thought of that before, since I’m doing the exact same thing in another context [1].

[1]: https://github.com/klmr/decorator/blob/2742b398c841bac53acb6607a4d220aedf10c26b/decorate.r#L24-L36
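For readers following along, a sketch of the S3-class approach being suggested (all names here are illustrative, not from the linked code): keep the metadata in an attribute, and hide it by giving the wrapped function a class with its own print method.

```r
# Wrap a function, attaching hidden metadata via an attribute.
with_meta = function (f, meta) {
    structure(f, meta = meta, class = 'annotated_function')
}

# Custom print method: strip the attributes before printing, so the
# metadata (and class) never clutter the console display.
print.annotated_function = function (x, ...) {
    attributes(x) = NULL
    print(x)
}

f = with_meta(function (x) x + 1, new.env())
f(1)             # the wrapped function remains callable as usual
attr(f, 'meta')  # … and the hidden metadata is retrievable
```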
Re: [Rd] On implementing zero-overhead code reuse
Check out ‹klmr/modules› on Github (distinct from CRAN’s ‹modules›!). It looks pretty much exactly like what you want: https://github.com/klmr/modules

It has an extensive README and vignette explaining the usage.

Cheers,
Konrad

--
Konrad Rudolph

On Sun, 2 Oct 2016 at 18:31 Kynn Jones wrote:

> I'm looking for a way to approximate the "zero-overhead" model of code
> reuse available in languages like Python, Perl, etc.
>
> I've described this idea in more detail, and the motivation for this
> question, in an earlier post to R-help
> (https://stat.ethz.ch/pipermail/r-help/2016-September/442174.html).
>
> (One of the responses I got advised that I post my question here instead.)
>
> The best I have so far is to configure my PROJ_R_LIB environment
> variable to point to the directory with my shared code, and put a
> function like the following in my .Rprofile file:
>
> import <- function(name){
>     ## usage:
>     ##   import("foo")
>     ##   foo$bar()
>     path <- file.path(Sys.getenv("PROJ_R_LIB"), paste0(name, ".R"))
>     if (!file.exists(path)) stop('file "', path, '" does not exist')
>     mod <- new.env()
>     source(path, local = mod)
>     list2env(setNames(list(mod), list(name)), envir = parent.frame())
>     invisible()
> }
>
> (NB: the idea above is an elaboration of the one I showed in my first
> post.)
>
> But this is very much of an R noob's solution. I figure there may
> already be more solid ways to achieve "zero-overhead" code reuse.
>
> I would appreciate any suggestions/critiques/pointers/comments.
>
> TIA!
>
> kj
[Rd] Calling a replacement function in a custom environment
Hello all,

I am wondering whether it’s at all possible to call a replacement function in a custom environment. From my experiments this appears not to be the case, and I am wondering whether that restriction is intentional.

To wit, the following works:

```
x = 1
base::is.na(x) = TRUE
```

However, the following fails:

```
x = 1
b = baseenv()
b$is.na(x) = TRUE
```

The error message is "invalid function in complex assignment". Grepping the R code for this error message reveals that this behaviour seems to be hard-coded in the function `applydefine` in src/main/eval.c: the function explicitly checks for `::` and `:::` and permits those assignments, but has no equivalent treatment for `$`.

Am I overlooking something to make this work? And if not — unless there’s a concrete reason against it, could it be considered to add support for this syntax, i.e. for calling a replacement function by `$`-subsetting the defining environment, as shown above?

Cheers,
Konrad

--
Konrad Rudolph // @klmr
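As a workaround, the replacement function can at least be invoked explicitly through its backtick-quoted name, bypassing the assignment-form syntax entirely (this is a sketch of the idea, not a claim that it covers every use case):

```r
# Replacement functions are ordinary functions whose name ends in `<-`,
# so we can fetch one from the environment and call it directly,
# re-assigning the result ourselves.
x = 1
b = baseenv()
x = b$`is.na<-`(x, TRUE)
x  # NA
```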
Re: [Rd] [External] Re: Calling a replacement function in a custom environment
> I do not think it is a reasonable suggestion. The reasons a::b and
> a:::b were made to work is that many users read these as a single
> symbol, not a call to a binary operator. So supporting this helped to
> reduce confusion.

Conceptually, the same is true for `a$b` when `a` is used as an environment. In fact, for environments `$` logically acts as a scope resolution operator in much the same way `::` does. This usage exists notably for ‘R6’ classes and ‘box’ modules, and the fact that replacement functions cannot be called in such scenarios is a confusing limitation for users of these packages.

In fact, if the aim of allowing replacement function calls for `a::b` is to reduce confusion, the same argument applies to `a$b` (and users don’t understand why the former works but the latter does not — I’d even argue that the current inconsistency *increases* rather than reduces confusion).

> In any case, complicating the complex assignment code, which is
> already barely maintainable, would be a very bad idea.

I generally agree with this, but the current behaviour is inconsistent, confusing, and breaks seemingly straightforward and actively useful code.

Cheers,
Konrad

--
Konrad Rudolph // @klmr
[Rd] RFC: API design of package "modules"
Some time ago I published the first draft of the package “modules” [1], which aims to provide a module system as an alternative to packages for R. Very briefly, this is aimed to complement the existing package system for very small code units which do not require the (small, but existing) overhead associated with writing a package. I’ve noticed that people around me put off writing packages (and thus, reusable code) because of that overhead, and use `source` instead. Modules would work (in many cases) as a drop-in replacement for `source`, and could thus encourage code reuse.

However, now I’m stuck on a particular aspect of the API and would like to solicit feedback from r-devel.

`import('foo')` imports a given module, `foo`. In addition to other differences detailed in [2], modules allow/impose a hierarchical organisation. That way, `import('foo')` might load code from a file called `foo.r` or from a file called `foo/__init__.r` (reminiscent of Python’s module mechanism), and `import('foo/bar')` would load a file `foo/bar.r` or `foo/bar/__init__.r` [3].

`import` also allows selectively importing only some functions, so that a user might write `import('foo', c('f', 'g'))` to only import the functions `f` and `g`. However, at the moment modules don’t allow the equivalent of Python’s `from foo import bar` for nested modules. That is, if I have two nested modules `bar` and `baz`, I cannot import both of them in one `import` statement; I need two (`import('foo/bar'); import('foo/baz')`).

I would like feedback on what people think is the best way of solving this. Here are some suggestions I’ve gathered; in the following, `foo`, `bar`, `qux` are (sub)modules.
`f1`, `b1`, `b2`, `q1` … are functions within the modules whose name starts with the same letter.

(1) Use of Bash-like wildcards to specify which modules to import:

```
foo = import('foo')       # Exposes `foo$f1`, `foo$f2` …, but no submodules
bar = import('foo/bar')   # Exposes `bar$b1`, `bar$b2`

foo = import('foo/{bar,qux}')
# Exposes `foo$f1`, `foo$bar$b1`, `foo$bar$b2`, `foo$qux$q1` etc.

foo = import('foo/*')     # Exposes everything

# Specifying which functions to import:
foo = import('foo/{bar,qux}', c('bar$b1', 'qux$q1'))
# Exposes `foo$bar$b1`, `foo$qux$q1` but NOT `foo$f1`, `foo$bar$b2` etc.
```

This is straightforward, but I feel vaguely that it’s too stringly typed [4]. A colleague dislikes this proposal because it treats nested modules and functions unequally: as mentioned above, `import('foo', 'f')` will import only `f` from `foo`. His argument is that there should be a uniform way of specifying which nested modules or functions to import – somewhat analogously to Python’s mechanism, where `from a import b` might import a submodule *or* an object `b`.

(2) Treat submodules and functions uniformly, one per argument:

```
foo = import('foo')       # Exposes `foo$f1`, `foo$f2` …, but no submodules
bar = import('foo/bar')   # Exposes `bar$b1`, `bar$b2`

foo = import('foo/f1', 'foo/bar', 'foo/qux/q1')
# Exposes `foo$f1`, `foo$bar$b1`, `foo$bar$b2`, `foo$qux$q1`.
```

However, this has the disadvantage of cramming even more functionality into the first argument, and of using stringly typing for everything instead of “proper” function arguments.
(3) Drop the whole thing, and force people to use a separate `import` statement for every submodule (.NET does this for namespace imports, but then, .NET’s namespaces don’t implement a module system):

```
foo = import('foo')       # Exposes `foo$f1`, `foo$f2` …, but no submodules
bar = import('foo/bar')   # Exposes `bar$b1`, `bar$b2`

foo = import('foo', 'f1') # Exposes `foo$f1`
bar = import('foo/bar')   # Exposes `bar$b1`, `bar$b2` …
```

(4) Something else?

So this is my question: what do other people think? Which is the most useful and least confusing alternative from the users’ perspective?

[1]: https://github.com/klmr/modules
[2]: https://github.com/klmr/modules/blob/master/README.md#feature-comparison
[3]: The original syntax for this was `import(foo)` and `import(foo.bar)`, respectively, but Hadley convinced me to drop non-standard argument evaluation. I’m still not convinced that NSE is actually harmful here, but I’m likewise not convinced that it’s beneficial (although I personally like it in this case).
[4]: http://c2.com/cgi/wiki?StringlyTyped

Kind regards,
Konrad
Re: [Rd] Is the tcltk failure in affylmGUI related to R bug 15957
Just as an FYI, I suspect the sudden break is connected to a bug report I filed some time ago [1], and a subsequent fix by Duncan. Long story short, the previous behaviour of tcltk was actually buggy. The fix changed this behaviour to what Peter has explained, with the unintended consequence of breaking some code.

I believe that Duncan’s fix actually does “the right thing”. But I apologise for the inconvenience this caused.

[1] https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15970

(PS: apologies for sending this again, Keith; I just realised I hadn’t sent it to the list.)