Re: [Rd] IO error when writing to disk
Hello,

I have sent a mail but got no answer. Can you create a Bugzilla account for me?

Thanks,
Jean-Sébastien Bevilacqua

2017-03-20 10:24 GMT+01:00 realitix:
> Hello,
> Here is a small improvement for R.
>
> When you use the function write.table, if the disk is full (for example),
> the function doesn't return an error and the file is written but truncated.
>
> This can be a source of mistakes, because you can then copy the output file
> and think everything is ok.
>
> How to reproduce
> ----------------
>
> >> write.csv(1:1000, 'path')
>
> You need a path with only a small amount of disk space available (on Linux:
> http://souptonuts.sourceforge.net/quota_tutorial.html)
>
> I have attached the patch to this email.
> Can you open a Bugzilla account for me to keep track of this change?
>
> Thanks,
> Jean-Sébastien Bevilacqua
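A minimal user-level workaround sketch for the behaviour described above (the helper name safe_write_csv and its read-back check are illustrative, not part of the attached patch): write the file, then verify it contains the expected number of lines before trusting it.

    safe_write_csv <- function(x, path) {
      write.csv(x, path)
      # write.csv() adds a header line, so a complete file has NROW(x) + 1 lines
      n_lines <- length(readLines(path, warn = FALSE))
      if (n_lines != NROW(x) + 1L)
        stop("output file appears truncated: ", path)
      invisible(path)
    }

    # usage: safe_write_csv(1:1000, 'path') stops instead of silently truncating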
Re: [Rd] IO error when writing to disk
> realitix
>     on Wed, 22 Mar 2017 10:17:54 +0100 writes:

> Hello,
> I have sent a mail but I got no answer.

All work here happens on a volunteer basis... and it seems everybody was
busy or not interested.

> Can you create a bugzilla account for me.

I've done that now.

Note that your proposed patch did contain a bit too many "copy & paste"
repetitions... which I personally would have liked to be written
differently, using a wrapper (function or macro).

Also, let's assume we are on Linux: would there be a way to create a
small, say 1 MB, temporary file system as a non-root user? In that case,
we could do all the testing from inside R.

Best,
Martin Maechler
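One possible way to exercise the error path from inside R without root privileges, sketched under the assumption that /dev/full is present and writable (as it usually is on Linux): every write to it fails with ENOSPC, so with the patch applied write.csv() should signal an error there.

    if (file.exists("/dev/full")) {
      res <- tryCatch(write.csv(1:1000, "/dev/full"),
                      error = function(e) conditionMessage(e))
      print(res)  # with the patch applied, this should be the I/O error message
    }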
Re: [Rd] Hyperbolic tangent different results on Windows and Mac
This looks like a bug in the mingw-w64 CRT. The problem can be reproduced
with C++ without R:

    #include <iostream>
    #include <complex>
    #include <cmath>

    int main(){
      std::cout << std::fixed;
      std::complex<double> z(356, 0);
      std::cout << "tanh" << z << " = " << std::tanh(z)
                << " (tanh(356) = " << std::tanh(356) << ")\n";
    }

On OS X we get:

    tanh(356.00,0.00) = (1.00,-0.00) (tanh(356) = 1.00)

But on Windows we get:

    tanh(356.00,0.00) = (nan,0.00) (tanh(356) = 1.00)

I was also able to reproduce the problem with gcc 6.3 in msys2, so it has
not been fixed upstream. You should file a bug report for mingw-w64.

FWIW, we have run into NaN edge-case bugs with mingw-w64 before:

 - https://sourceforge.net/p/mingw-w64/mingw-w64/ci/6617ebd5fc6b790c80071d5b1d950e737fc670e1/
 - https://github.com/wch/r-source/commit/e9aaf8fdeddf27c2a9078cd214a41475c8ff6f40

I am cc'ing Ray Donnelly, who is an expert on mingw-w64.
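For context, the R-level symptom discussed in this thread can presumably be checked directly from an R prompt (this reproduction is an assumption based on the report, not a verified transcript):

    tanh(356)                  # 1 on all platforms
    tanh(complex(real = 356))  # 1+0i on macOS/Linux; reported as NaN on Windows
                               # builds that use the affected mingw-w64 CRT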
Re: [Rd] RFC: (in-principle) native unquoting for standard evaluation
On Mon, Mar 20, 2017 at 8:00 AM, Hadley Wickham wrote:
> On Mon, Mar 20, 2017 at 7:36 AM, Radford Neal wrote:
>> Michael Lawrence (as last in long series of posters)...
>>
>>> Yes, it would bind the language object to the environment, like an
>>> R-level promise (but "promise" of course refers specifically to just
>>> _lazy_ evaluation).
>>>
>>> For the uqs() thing, expanding calls like that is somewhat orthogonal
>>> to NSE. It would be nice in general to be able to write something like
>>> mean(x, extra_args...) without resorting to do.call(mean, c(list(x),
>>> extra_args)). If we had that then uqs() would just be the combination
>>> of unquote and expansion, i.e., mean(x, @extra_args...). The "..."
>>> postfix would not work since it's still a valid symbol name, but we
>>> could come up with something.
>>
>> I've been trying to follow this proposal, though without tracking down
>> all the tweets, etc. that are referenced. I suspect I'm not the only
>> reader who isn't clear exactly what is being proposed. I think a
>> detailed, self-contained proposal would be useful.
>
> We have a working implementation (which I'm calling tidyeval) in
> https://github.com/hadley/rlang, but we have yet to write it up. We'll
> spend some time documenting since it seems to be of broader interest.

First pass at programming dplyr vignette (including details about
tidyeval) at http://rpubs.com/hadley/dplyr-programming

Hadley

--
http://hadley.nz
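For readers following along, the do.call() workaround mentioned in the quoted text looks like this today (a minimal sketch; the spliced-call syntax is only a proposal and does not parse):

    x <- c(1, 2, NA, 4)
    extra_args <- list(na.rm = TRUE, trim = 0.1)

    # today: expand the argument list before the call is constructed
    do.call(mean, c(list(x), extra_args))

    # proposed (hypothetical, not valid R today): mean(x, uqs(extra_args))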
Re: [Rd] Experimental CXX_STD problem in R 3.4
On Mon, 2017-03-20 at 16:38 +0100, Jeroen Ooms wrote:
> On Sun, Mar 19, 2017 at 9:09 PM, Martyn Plummer wrote:
> > I have just added some code to ensure that the compilation fails
> > with an informative error message if a specific C++ standard is
> > requested but the corresponding compiler has not been defined.
> > Please test this.
>
> Are you sure we shouldn't just fall back on a previous standard
> instead of failing? For example if the package author has specified a
> preference for CXX14 but the compiler only has CXX11, the package
> might still build with -std=c++11 (given that C++14 is only a small
> extension on the C++11 standard).
>
> The current behavior (in R 3.3) for packages with "CXX_STD=CXX11" is
> to fall back on CXX when the compiler does not have CXX1X.

I don't think that is true.

> Will R-3.4 start failing these packages? This would affect many users
> on CentOS 6 (gcc 4.4.7).

The major issue with long-term support platforms like CentOS is that the
compiler is rather old. According to the GCC web site, 4.4.7 has partial
support for C++11 via the -std=c++0x flag
(https://gcc.gnu.org/projects/cxx-status.html#cxx11).

The problem is that the tests for C++11 compliance used by R's configure
script have become much more stringent. If g++ 4.4.7 passed before, it is
unlikely to pass now. This is an issue that I discussed here:
https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17189

This creates a regression on older platforms. Some packages that used
only a few C++11 features used to compile correctly but now don't,
because the compiler is no longer recognized as conforming to the C++11
standard (and, to be fair, it never did, but the previous tests were
weaker).

What I suggest is that on these platforms you do a post-install patch of
etc/Makeconf and set the variables for the C++11 compiler manually
(CXX11, CXX11FLAGS, CXX11PICFLAGS, CXX11STD, SHLIB_CXX11LD,
SHLIB_CXX11LDFLAGS).

Martyn
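A sketch of what such a post-install edit of etc/Makeconf might look like on CentOS 6 with g++ 4.4.7 (the variable names are the ones listed above; the values shown are assumptions, and -std=c++0x is the spelling that compiler accepts):

    CXX11 = g++
    CXX11STD = -std=c++0x
    CXX11FLAGS = -O2 -g
    CXX11PICFLAGS = -fPIC
    SHLIB_CXX11LD = $(CXX11) $(CXX11STD)
    SHLIB_CXX11LDFLAGS = -shared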
[Rd] RFC: (in-principle) native unquoting for standard evaluation
RN> There is an opportunity cost to grabbing the presently-unused unary @
RN> operator for this

I don't think this is the case, because the parser has to interpret `@`
in formal argument lists in a different way than in function calls.
Besides, it'd make sense to set up these annotations with a binary `@`.

There are already two main ways of passing arguments in R: by value and
by expression. Providing an explicit annotation for passing by expression
would standardise the semantics of these functions and, as Michael
suggests, would help static analysis. So passing by name could be just
another argument-passing method:

    function(expr@ x, value@ y = 10L, name@ z = rnorm(1)) {
      list(x, y, z, z)
    }

The parser would record the argument metadata in the formals list. This
metadata could be consulted by static analysis tools, and a selected
subset of those tags (`expr` and `name`) would have an effect on the
evaluation mechanism.

RN> One thing I'm not clear on is whether the proposal would add anything
RN> semantically beyond what the present "eval" and "substitute" functions
RN> can do fairly easily.

Quasiquotation makes it possible to program with functions that take
arguments by expression. There is no easy way to do that with eval() and
substitute() alone.

R has always been an interface language, and as such its main advantage
is to provide DSLs for data analysis tasks: specification of statistical
models with a formula, overscoping data frame columns with subset() and
transform(), etc. This is why providing an easier means of programming
with these functions seems to have more value than call-by-name
semantics. Unquoting will be used extensively in ggplot2 and dplyr, two
popular R packages. Please see the vignette posted by Hadley for some
introductory examples.

In any case, the unquoting notation would be orthogonal to function
argument annotations, because actuals and formals are parsed differently.
While formal annotations would be recorded by the parser in the formals
list, `@` in an actual argument list would be parsed as a function call,
like the rest of R's operators.

Lionel
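To illustrate the point about eval() and substitute(): a function that captures its argument with substitute() is easy to call directly but hard to wrap, because the wrapper's own promise gets captured instead of the caller's expression. A minimal sketch (f() and g() are hypothetical names):

    f <- function(data, expr) {
      e <- substitute(expr)
      eval(e, data, parent.frame())
    }

    f(mtcars, mpg * 2)    # works when called directly

    g <- function(data, expr) f(data, expr)
    g(mtcars, mpg * 2)    # fails: f() captures the literal symbol `expr`,
                          # not the expression the user of g() supplied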
[Rd] RFC: (in-principle) native unquoting for standard evaluation
ML> For the uqs() thing, expanding calls like that is somewhat orthogonal
ML> to NSE. It would be nice in general to be able to write something like
ML> mean(x, extra_args...) without resorting to do.call(mean, c(list(x),
ML> extra_args)).

This is not completely true, because splicing is necessarily linked to
the principle of unquoting (evaluating). You cannot splice something that
you don't know the value of; you have to evaluate the promise of the
splicing operand. In other words, you cannot splice at the parser level,
only at the interpreter level, and the splicing operation has to be part
of the call tree.

This implies the important limitation that you cannot splice a list in a
call to a function taking named arguments; you can only splice when
capturing dots. On the plus side, it seems more R-like to implement it as
a regular function call, since all syntactic operations in R are function
calls.

Since splicing is conceptually linked to unquoting, I think it would make
sense to have a derivative operator, e.g. `@@`. In that case it would
simply take its argument by expression and could thus be defined as:
`@@` <- `~`. It'd be used like this:

    # Equivalent to as.list(mtcars)
    list(@@ mtcars)

    # Returns a list of symbols
    list(@@ lapply(letters, as.symbol))

To make it work, we'd have two functions for capturing dots that would
understand arguments wrapped in an `@@` quosure. dotsValues(...) would
expand spliced arguments and then evaluate them, while dotsExprs(...)
would expand and return a list of quosures. Dotted primitive functions
like list() or c() would also need to preprocess the dots with a C
function.

Another reason not to use `...` as syntax for splicing is that it may be
better to reserve it for forwarding operations. I think one other syntax
update that would be worthwhile to consider is forwarding of named
arguments. This would allow labelling of arguments to work transparently
across wrappers:

    my_plot <- function(x) plot(1:10, ...(x))

    # The y axis is correctly labelled as 11:20 in the plot
    my_plot(11:20)

And this would also allow forwarding named arguments to functions taking
their arguments by expression, just like we forward dots.

Lionel
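For comparison, rough present-day equivalents of the two list() examples above (a sketch using base R only; the @@ syntax is a proposal and does not parse):

    # list(@@ mtcars): splice a data frame's columns into list()'s dots
    do.call(list, as.list(mtcars))

    # list(@@ lapply(letters, as.symbol)): a list of 26 symbols
    do.call(list, lapply(letters, as.symbol))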