Re: [Rd] R 3.1.1 and 3.1.2 both fail their test suites
> Duncan Murdoch on Sat, 1 Nov 2014 13:17:56 -0400 writes:

> On 01/11/2014, 11:33 AM, Peter Simons wrote:
>> Hi Uwe,
>>
>> > Nobody in R core runs NixOS and can reproduce this. This passes
>> > on most other platforms, apparently. If you can point us to a
>> > problem or send patches, we'd appreciate it.
>>
>> Have you tried running the test suite in a build that's configured
>> with '--without-recommended-packages'? That's about the only unusual
>> thing we do when building with Nix. Other than that, our build runs
>> on a perfectly ordinary Linux -- and it used to succeed fine in
>> earlier versions of R.

> The tests "make check-devel" and "make check-all" are documented to
> require the recommended packages, and will fail without them. On
> Windows, "make check" also needs them, so this may be true on other
> systems as well.

Thank you, Duncan, for clarifying (above and later in the thread).

Would it be hard to strive for

1) 'make check' should pass without the recommended packages;
2) 'make check-devel' etc. do require the recommended packages?

That would be ideal, I think, and would correspond to the fact that we
call the recommended packages 'recommended' only.

OTOH, if '1)' is too much work for us, we could add this as a
'wishlist' item and wait for someone to send patches..

Martin

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
Re: [Rd] R 3.1.1 and 3.1.2 both fail their test suites
On 03/11/2014, 4:17 AM, Martin Maechler wrote:
>> Duncan Murdoch on Sat, 1 Nov 2014 13:17:56 -0400 writes:
>
> > On 01/11/2014, 11:33 AM, Peter Simons wrote:
> >> Hi Uwe,
> >>
> >> > Nobody in R core runs NixOS and can reproduce this. This passes
> >> > on most other platforms, apparently. If you can point us to a
> >> > problem or send patches, we'd appreciate it.
> >>
> >> Have you tried running the test suite in a build that's configured
> >> with '--without-recommended-packages'? That's about the only
> >> unusual thing we do when building with Nix. Other than that, our
> >> build runs on a perfectly ordinary Linux -- and it used to succeed
> >> fine in earlier versions of R.
>
> > The tests "make check-devel" and "make check-all" are documented to
> > require the recommended packages, and will fail without them. On
> > Windows, "make check" also needs them, so this may be true on other
> > systems as well.
>
> Thank you, Duncan, for clarifying (above and later in the thread).
>
> Would it be hard to strive for
>
> 1) 'make check' should pass without the recommended packages;
> 2) 'make check-devel' etc. do require the recommended packages?
>
> That would be ideal, I think, and would correspond to the fact that
> we call the recommended packages 'recommended' only.

I think we could avoid errors in make check, but not warnings. People
need to understand what the tests are testing, and recognize that some
warnings are ignorable. To do this, we'd need to make sure that no
examples in base packages require the use of recommended packages.
Currently the failure happens in capture.output, because it runs the
glm example which needs MASS. (The glm example is marked not to need
MASS during testing, but the capture.output example runs everything.)
Fixing that one causes the error to happen later.

> OTOH, if '1)' is too much work for us, we could add this as a
> 'wishlist' item and wait for someone to send patches..

Alternatively, we could require the recommended packages for all tests.
Duncan Murdoch
Re: [Rd] Holding a large number of SEXPs in C++
On Nov 2, 2014, at 10:55 PM, Simon Knapp wrote:

> Thanks Simon and sorry for taking so long to give this a go. I had
> thought of pair lists but got confused about how to protect the top
> level object only, as it seems that appending requires creating a new
> "top-level object". The following example seems to work (full example
> at https://gist.github.com/Sleepingwell/8588c5ee844ce0242d05). Is
> this the way you would do it (or at least 'a correct' way)?
>

You can simply append to a pairlist, so you only need to protect the
head. Also note that R_NilValue is a constant (in the R sense, not the
C sense) so it doesn't need protection. I would write a generic
pairlist builder something like this:

SEXP head = R_NilValue, tail;

void append(SEXP x) {
    if (head == R_NilValue)
        R_PreserveObject(head = tail = CONS(x, R_NilValue));
    else
        tail = SETCDR(tail, CONS(x, R_NilValue));
}

void destroy() {
    if (head != R_NilValue)
        R_ReleaseObject(head);
}

Cheers,
Simon

> struct PolyHolder {
>     PolyHolder(void) {
>         PROTECT_WITH_INDEX(currentRegion = R_NilValue, &icr);
>         PROTECT_WITH_INDEX(regions = R_NilValue, &ir);
>     }
>
>     ~PolyHolder(void) {
>         UNPROTECT(2);
>     }
>
>     void notifyEndRegion(void) {
>         REPROTECT(regions = CONS(makePolygonsFromPairList(currentRegion),
>             regions), ir);
>         REPROTECT(currentRegion = R_NilValue, icr);
>     }
>
>     template <typename Iter>
>     void addSubPolygon(Iter b, Iter e) {
>         REPROTECT(currentRegion = CONS(makePolygon(b, e), currentRegion),
>             icr);
>     }
>
>     SEXP getPolygons(void) {
>         return regions;
>     }
>
> private:
>     PROTECT_INDEX ir, icr;
>
>     SEXP currentRegion, regions;
> };
>
> Thanks again,
> Simon Knapp
>
> CONS(newPoly, creates a new object
>
> On Sat, Oct 18, 2014 at 2:10 AM, Simon Urbanek wrote:
>
> On Oct 17, 2014, at 7:31 AM, Simon Knapp wrote:
>
> > Background:
> > I have an algorithm which produces a large number of small polygons
> > (of the spatial kind) which I would like to use within R using
> > objects from sp. I can't predict the exact number of polygons
> > a-priori, the polygons will be grouped into regions, and each
> > region will be filled sequentially, so an appropriate C++
> > 'framework' (for the point of illustration) might be:
> >
> > typedef std::pair<double, double> Point;
> > typedef std::vector<Point> Polygon;
> > typedef std::vector<Polygon> Polygons;
> > typedef std::vector<Polygons> Regions;
> >
> > struct Holder {
> >     void notifyNewRegion(void) {
> >         regions.push_back(Polygons());
> >     }
> >
> >     template <typename Iter>
> >     void addSubPoly(Iter b, Iter e) {
> >         regions.back().push_back(Polygon(b, e));
> >     }
> >
> > private:
> >     Regions regions;
> > };
> >
> > where the reference_type of Iter is convertible to Point. In
> > practice I use pointers in a couple of places to avoid resizing in
> > push_back becoming too expensive.
> >
> > To construct the corresponding sp::Polygon, sp::Polygons and
> > sp::SpatialPolygons at the end of the algorithm, I iterate over the
> > result turning each Polygon into a two column matrix and calling
> > the C functions corresponding to the 'constructors' for these
> > objects.
> >
> > This is all working fine, but I could cut my memory consumption in
> > half if I could construct the sp::Polygon objects in addSubPoly,
> > and the sp::Polygons objects in notifyNewRegion. My vector typedefs
> > would then all be:
> >
> > typedef std::vector<SEXP>
> >
> > Question:
> > What I'm not sure about (and finally my question) is: I will have
> > datasets where I have more than 10,000 SEXPs in the Polygon and
> > Polygons objects for a single region, and possibly more than 10,000
> > regions, so how do I PROTECT all those SEXPs (noting that the
> > protection stack is limited to 10,000 and bearing in mind that I
> > don't know how many there will be before I start)?
> >
> > I am also interested in this just out of general curiosity.
> >
> > Thoughts:
> >
> > 1) I could create an environment and store the objects themselves
> > in there while keeping pointers in the vectors, but am not sure if
> > this would be that efficient (guidance would be appreciated), or
> >
> > 2) Just keep them in R vectors and grow these myself (as push_back
> > is doing for me in the above), but that sounds like a pain and I'm
> > not sure if the objects or just the pointers would be copied when I
> > reassigned things (guidance would be appreciated again). Bear in
> > mind that I keep pointers in the vectors, but omitted that for the
> > sake of clarity.
> >
> > Is there some other R type that would be suited to this, or a
> > general approach?
>
> Lists in R (LISTSXP aka pairlists) are suited to appending (since
> that is fast and trivial) and sequential processing. The only issue
> is that pairlists are slow for random access. If you only want to
> load the polygon
[Rd] Unexplicable difference between 2 R installations regarding reading numbers
Dear all,

A colleague of mine reported a problem that I fail to understand
completely. He has a number of .csv files that all look very
straightforward, and they all read in perfectly well using read.csv()
on both his and my computer.

When we try the exact same R version on the university server, however,
suddenly all numeric variables turn into factors. The problem is
resolved by deleting the last digits of every number in the .csv file.
Using as.numeric() on the values works as well.

Anybody a clue as to what might cause this problem? If needed, I can
send an example of a .csv file.

Example output on server:

> X <- read.csv("Originelen/Originelen/heavymetals.csv")
> levels(X[[2]])
 [1] "11.140969600635804" "11.548972671055257" "11.98554898321271"
 [4] "16.317868213178677" "17.179218967921898" "18.596573461949852"
 [7] "18.786014405762298" "18.87978032658098"  "23.604106448719225"
[10] "26.75482955698816"  "27.33829851044687"  "29.26619704952923"
[13] "33.07842352705811"  "39.296270581233884" "4.8696848424212105"
[16] "5.5751725517655295" "6.0256909109049195" "9.117975845892804"
[19] "9.26944194868723"
> str(X)
'data.frame': 19 obs. of 18 variables:
 $ ID   : int 1 2 3 4 5 6 7 8 9 10 ...
 $ Cd5  : Factor w/ 19 levels "11.140969600635804",..: 3 8 6 12 11 10 2 5 14 13 ...
 $ Cd20 : Factor w/ 19 levels "10.1604999",..: 2 8 10 12 5 6 18 9 11 4 ...
 $ Cr5  : Factor w/ 19 levels "118.43421710855425",..: 6 11 10 17 16 15 7 13 19 18 ...
 $ Cr20 : Factor w/ 19 levels "100.48101898101898",..: 9 15 14 17 13 11 6 16 18 12 ...
 $ Cu5  : Factor w/ 19 levels "101.8005401620486",..: 8 17 16 15 14 12 9 18 19 1 ...
 $ Cu20 : Factor w/ 19 levels "103.67346938775509",..: 11 18 19 2 16 17 14 3 4 1 ...
 $ Fe5  : Factor w/ 19 levels "17239.349496158833",..: 3 8 10 9 12 14 7 16 19 18 ...
 $ Fe20 : Factor w/ 19 levels "17701.77893264042",..: 3 14 16 18 10 15 6 17 19 13 ...
 $ Mn5  : Factor w/ 19 levels "440.37211163349",..: 10 14 4 5 3 17 2 7 18 6 ...
 $ Mn20 : Factor w/ 19 levels "375.19156134938805",..: 12 2 6 3 1 9 11 7 8 5 ...
 $ Ni5  : Factor w/ 19 levels "19.54255213010077",..: 4 12 8 10 11 16 6 14 19 18 ...
 $ Ni20 : Factor w/ 19 levels "21.295222866280234",..: 8 13 15 18 12 16 7 17 19 14 ...
 $ Pb5  : Factor w/ 19 levels "125.5616926977306",..: 1 11 14 9 13 8 5 12 15 16 ...
 $ Pb20 : Factor w/ 19 levels "106.96930306969303",..: 3 8 11 12 9 10 4 13 14 15 ...
 $ Zn5  : Factor w/ 19 levels "1024.909963985594",..: 17 4 7 5 8 3 18 6 9 10 ...
 $ Zn20 : Factor w/ 19 levels "1247.816195886593",..: 15 4 5 7 2 1 16 6 8 3 ...
 $ river: int 1 1 1 1 1 1 1 1 1 1 ...

Using as.numeric(levels(X[[2]])) works perfectly fine though...

Session info on both the server and my own computer:

> sessionInfo()
R version 3.1.0 (2014-04-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)

locale:
[1] LC_COLLATE=Dutch_Belgium.1252  LC_CTYPE=Dutch_Belgium.1252
[3] LC_MONETARY=Dutch_Belgium.1252 LC_NUMERIC=C
[5] LC_TIME=Dutch_Belgium.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

loaded via a namespace (and not attached):
[1] tools_3.1.0

--
Joris Meys
Statistical consultant

Ghent University
Faculty of Bioscience Engineering
Department of Mathematical Modelling, Statistics and Bio-Informatics

tel : +32 (0)9 264 61 79
joris.m...@ugent.be
---
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
Re: [Rd] Unexplicable difference between 2 R installations regarding reading numbers
R version.

NEWS for 3.1.0:

    type.convert() (and hence by default read.table()) returns a
    character vector or factor when representing a numeric input as a
    double would lose accuracy. Similarly for complex inputs.

NEWS for 3.1.1:

    type.convert(), read.table() and similar read.*() functions get a
    new numerals argument, specifying how numeric input is converted
    when its conversion to double precision loses accuracy. The default
    value, "allow.loss", allows accuracy loss, as in R versions before
    3.1.0.

On Nov 3, 2014, at 10:07 AM, Joris Meys wrote:

> Dear all,
>
> A colleague of mine reported a problem that I fail to understand
> completely. He has a number of .csv files that all look very
> straightforward, and they all read in perfectly well using read.csv()
> on both his and my computer.
>
> When we try the exact same R version on the university server,
> however, suddenly all numeric variables turn into factors. The
> problem is resolved by deleting the last digits of every number in
> the .csv file. Using as.numeric() on the values works as well.
>
> Anybody a clue as to what might cause this problem? If needed, I can
> send an example of a .csv file.
>
> Example output on server:
>
> > X <- read.csv("Originelen/Originelen/heavymetals.csv")
> > levels(X[[2]])
>  [1] "11.140969600635804" "11.548972671055257" "11.98554898321271"
>  [4] "16.317868213178677" "17.179218967921898" "18.596573461949852"
>  [7] "18.786014405762298" "18.87978032658098"  "23.604106448719225"
> [10] "26.75482955698816"  "27.33829851044687"  "29.26619704952923"
> [13] "33.07842352705811"  "39.296270581233884" "4.8696848424212105"
> [16] "5.5751725517655295" "6.0256909109049195" "9.117975845892804"
> [19] "9.26944194868723"
> > str(X)
> 'data.frame': 19 obs. of 18 variables:
>  $ ID   : int 1 2 3 4 5 6 7 8 9 10 ...
>  $ Cd5  : Factor w/ 19 levels "11.140969600635804",..: 3 8 6 12 11 10 2 5 14 13 ...
>  $ Cd20 : Factor w/ 19 levels "10.1604999",..: 2 8 10 12 5 6 18 9 11 4 ...
>  $ Cr5  : Factor w/ 19 levels "118.43421710855425",..: 6 11 10 17 16 15 7 13 19 18 ...
>  $ Cr20 : Factor w/ 19 levels "100.48101898101898",..: 9 15 14 17 13 11 6 16 18 12 ...
>  $ Cu5  : Factor w/ 19 levels "101.8005401620486",..: 8 17 16 15 14 12 9 18 19 1 ...
>  $ Cu20 : Factor w/ 19 levels "103.67346938775509",..: 11 18 19 2 16 17 14 3 4 1 ...
>  $ Fe5  : Factor w/ 19 levels "17239.349496158833",..: 3 8 10 9 12 14 7 16 19 18 ...
>  $ Fe20 : Factor w/ 19 levels "17701.77893264042",..: 3 14 16 18 10 15 6 17 19 13 ...
>  $ Mn5  : Factor w/ 19 levels "440.37211163349",..: 10 14 4 5 3 17 2 7 18 6 ...
>  $ Mn20 : Factor w/ 19 levels "375.19156134938805",..: 12 2 6 3 1 9 11 7 8 5 ...
>  $ Ni5  : Factor w/ 19 levels "19.54255213010077",..: 4 12 8 10 11 16 6 14 19 18 ...
>  $ Ni20 : Factor w/ 19 levels "21.295222866280234",..: 8 13 15 18 12 16 7 17 19 14 ...
>  $ Pb5  : Factor w/ 19 levels "125.5616926977306",..: 1 11 14 9 13 8 5 12 15 16 ...
>  $ Pb20 : Factor w/ 19 levels "106.96930306969303",..: 3 8 11 12 9 10 4 13 14 15 ...
>  $ Zn5  : Factor w/ 19 levels "1024.909963985594",..: 17 4 7 5 8 3 18 6 9 10 ...
>  $ Zn20 : Factor w/ 19 levels "1247.816195886593",..: 15 4 5 7 2 1 16 6 8 3 ...
>  $ river: int 1 1 1 1 1 1 1 1 1 1 ...
>
> Using as.numeric(levels(X[[2]])) works perfectly fine though...
>
> Session info on both the server and my own computer:
>
> > sessionInfo()
> R version 3.1.0 (2014-04-10)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
>
> locale:
> [1] LC_COLLATE=Dutch_Belgium.1252  LC_CTYPE=Dutch_Belgium.1252
> [3] LC_MONETARY=Dutch_Belgium.1252 LC_NUMERIC=C
> [5] LC_TIME=Dutch_Belgium.1252
>
> attached base packages:
> [1] stats graphics grDevices utils datasets methods base
>
> loaded via a namespace (and not attached):
> [1] tools_3.1.0
>
> --
> Joris Meys
> Statistical consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Mathematical Modelling, Statistics and Bio-Informatics
>
> tel : +32 (0)9 264 61 79
> joris.m...@ugent.be
> ---
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
Re: [Rd] Unexplicable difference between 2 R installations regarding reading numbers
...and apparently I have 3.1.1 installed here, instead of 3.1.0 like on
the server. That illustrates very nicely the lack of coffee I
experienced on this Monday. Thank you!

On Mon, Nov 3, 2014 at 4:41 PM, Simon Urbanek wrote:

> R version.
>
> NEWS for 3.1.0:
>
>     type.convert() (and hence by default read.table()) returns a
>     character vector or factor when representing a numeric input as a
>     double would lose accuracy. Similarly for complex inputs.
>
> NEWS for 3.1.1:
>
>     type.convert(), read.table() and similar read.*() functions get a
>     new numerals argument, specifying how numeric input is converted
>     when its conversion to double precision loses accuracy. The
>     default value, "allow.loss", allows accuracy loss, as in R
>     versions before 3.1.0.
>
> On Nov 3, 2014, at 10:07 AM, Joris Meys wrote:
>
> > Dear all,
> >
> > A colleague of mine reported a problem that I fail to understand
> > completely. He has a number of .csv files that all look very
> > straightforward, and they all read in perfectly well using
> > read.csv() on both his and my computer.
> >
> > When we try the exact same R version on the university server,
> > however, suddenly all numeric variables turn into factors. The
> > problem is resolved by deleting the last digits of every number in
> > the .csv file. Using as.numeric() on the values works as well.
> >
> > Anybody a clue as to what might cause this problem? If needed, I
> > can send an example of a .csv file.
> >
> > Example output on server:
> >
> > > X <- read.csv("Originelen/Originelen/heavymetals.csv")
> > > levels(X[[2]])
> >  [1] "11.140969600635804" "11.548972671055257" "11.98554898321271"
> >  [4] "16.317868213178677" "17.179218967921898" "18.596573461949852"
> >  [7] "18.786014405762298" "18.87978032658098"  "23.604106448719225"
> > [10] "26.75482955698816"  "27.33829851044687"  "29.26619704952923"
> > [13] "33.07842352705811"  "39.296270581233884" "4.8696848424212105"
> > [16] "5.5751725517655295" "6.0256909109049195" "9.117975845892804"
> > [19] "9.26944194868723"
> > > str(X)
> > 'data.frame': 19 obs. of 18 variables:
> >  $ ID   : int 1 2 3 4 5 6 7 8 9 10 ...
> >  $ Cd5  : Factor w/ 19 levels "11.140969600635804",..: 3 8 6 12 11 10 2 5 14 13 ...
> >  $ Cd20 : Factor w/ 19 levels "10.1604999",..: 2 8 10 12 5 6 18 9 11 4 ...
> >  $ Cr5  : Factor w/ 19 levels "118.43421710855425",..: 6 11 10 17 16 15 7 13 19 18 ...
> >  $ Cr20 : Factor w/ 19 levels "100.48101898101898",..: 9 15 14 17 13 11 6 16 18 12 ...
> >  $ Cu5  : Factor w/ 19 levels "101.8005401620486",..: 8 17 16 15 14 12 9 18 19 1 ...
> >  $ Cu20 : Factor w/ 19 levels "103.67346938775509",..: 11 18 19 2 16 17 14 3 4 1 ...
> >  $ Fe5  : Factor w/ 19 levels "17239.349496158833",..: 3 8 10 9 12 14 7 16 19 18 ...
> >  $ Fe20 : Factor w/ 19 levels "17701.77893264042",..: 3 14 16 18 10 15 6 17 19 13 ...
> >  $ Mn5  : Factor w/ 19 levels "440.37211163349",..: 10 14 4 5 3 17 2 7 18 6 ...
> >  $ Mn20 : Factor w/ 19 levels "375.19156134938805",..: 12 2 6 3 1 9 11 7 8 5 ...
> >  $ Ni5  : Factor w/ 19 levels "19.54255213010077",..: 4 12 8 10 11 16 6 14 19 18 ...
> >  $ Ni20 : Factor w/ 19 levels "21.295222866280234",..: 8 13 15 18 12 16 7 17 19 14 ...
> >  $ Pb5  : Factor w/ 19 levels "125.5616926977306",..: 1 11 14 9 13 8 5 12 15 16 ...
> >  $ Pb20 : Factor w/ 19 levels "106.96930306969303",..: 3 8 11 12 9 10 4 13 14 15 ...
> >  $ Zn5  : Factor w/ 19 levels "1024.909963985594",..: 17 4 7 5 8 3 18 6 9 10 ...
> >  $ Zn20 : Factor w/ 19 levels "1247.816195886593",..: 15 4 5 7 2 1 16 6 8 3 ...
> >  $ river: int 1 1 1 1 1 1 1 1 1 1 ...
> >
> > Using as.numeric(levels(X[[2]])) works perfectly fine though...
> >
> > Session info on both the server and my own computer:
> >
> > > sessionInfo()
> > R version 3.1.0 (2014-04-10)
> > Platform: x86_64-w64-mingw32/x64 (64-bit)
> >
> > locale:
> > [1] LC_COLLATE=Dutch_Belgium.1252  LC_CTYPE=Dutch_Belgium.1252
> > [3] LC_MONETARY=Dutch_Belgium.1252 LC_NUMERIC=C
> > [5] LC_TIME=Dutch_Belgium.1252
> >
> > attached base packages:
> > [1] stats graphics grDevices utils datasets methods base
> >
> > loaded via a namespace (and not attached):
> > [1] tools_3.1.0
> >
> > --
> > Joris Meys
> > Statistical consultant
> >
> > Ghent University
> > Faculty of Bioscience Engineering
> > Department of Mathematical Modelling, Statistics and Bio-Informatics
> >
> > tel : +32 (0)9 264 61 79
> > joris.m...@ugent.be
> > ---
> > Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php

--
Joris Meys
Statistical consultant

Ghent University
Faculty of Bioscience Engineering
Department of Mathematical Modelling, Statistics and Bio-Informatics

tel :
[Rd] Pkg creation: Sweave: multiple files vignette: Error in R CMD check
Hello R-developers!

I am creating a package (using devtools and RStudio) and I would like
to split my vignette into multiple Rnw files. As an example I tried the
code from https://support.rstudio.com/hc/en-us/articles/200486298
(--> Working with multiple Rnw files).

The Rnw files work fine with "Compile PDF" in RStudio as well as with
Sweave("Master.Rnw"). But if I try to check my package I get the
following error:

...
* creating vignettes ... ERROR
Error in texi2dvi(file = file, pdf = TRUE, clean = clean, quiet = quiet, :
  Running 'texi2dvi' on 'ChapterY.tex' failed.
LaTeX errors:
! LaTeX Error: Missing \begin{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...
! Emergency stop.
<*> ...13 \let~\normaltilde \input ./ChapterY.tex

*** (job aborted, no legal \end found)

! ==> Fatal error occurred, no output PDF file produced!
Calls: -> texi2pdf -> texi2dvi
Execution halted
Error: Command failed (1)
Execution halted
Exited with status 1.
###

So it seems that an attempt is made to turn my child Rnw file
"ChapterY.Rnw" into a tex file of its own, which of course is not
possible, because that file contains no preamble.

As a workaround I tried putting my child Rnw file in a subfolder
(ChapterY) and calling it via \SweaveInput{ChapterY/ChapterY.Rnw}.
Again, "Compile PDF" as well as Sweave("Master.Rnw") works fine, but
when checking the package I get the following error:

Error in SweaveReadFile(c(ifile, file), syntax, encoding = encoding) :
  no Sweave file with name ‘./ChapterY/ChapterY.Rnw’ found
ERROR: installing vignettes failed

By the way, I tried this on different (L)ubuntu machines (12.04, 14)
with the latest versions of RStudio and R, and I also tried it after
updating texlive to version 2012, always getting the same error.
Moreover, if I just use one Rnw file instead of multiple files,
checking the package finishes fine without errors!

I do not know what to do anymore, and I did not find any solution on
the web... Does anyone have an idea how to solve this problem?

Thank you very much in advance!

Greetings!
Roland

###
R version 3.1.2 (2014-10-31)
Platform: i686-pc-linux-gnu (32-bit)

locale:
 [1] LC_CTYPE=de_DE.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=de_DE.UTF-8        LC_COLLATE=de_DE.UTF-8
 [5] LC_MONETARY=de_DE.UTF-8    LC_MESSAGES=de_DE.UTF-8
 [7] LC_PAPER=de_DE.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] trajcoert01_0.1

loaded via a namespace (and not attached):
 [1] devtools_1.6       geosphere_1.3-8    grid_3.1.2
 [4] intervals_0.14.0   lattice_0.20-29    move_1.2.475
 [7] raster_2.2-31      rgdal_0.8-16       rgeos_0.3-4
[10] sp_1.0-15          spacetime_1.1-2    tools_3.1.2
[13] trajectories_0.1-1 xts_0.9-7          zoo_1.7-11
###
Re: [Rd] Holding a large number of SEXPs in C++
Thanks again Simon. I had realised that R_NilValue didn't need
protection... I just thought it a clean way to make my initial call to
PROTECT_WITH_INDEX (which I can see now was not required since I didn't
need the calls to REPROTECT)... and I had not thought of appending to
the tail.

One final question (and hopefully I don't get too badly burnt): I
cannot find R_PreserveObject/R_ReleaseObject or SETCDR mentioned in
"Writing R Extensions". Is there anywhere for a novice like myself to
find a 'complete' reference to R's useful macros and functions, or do I
just have to read more source?

Thanks again for being so awesome,
Simon

On Tue, Nov 4, 2014 at 12:47 AM, Simon Urbanek wrote:

> On Nov 2, 2014, at 10:55 PM, Simon Knapp wrote:
>
> > Thanks Simon and sorry for taking so long to give this a go. I had
> > thought of pair lists but got confused about how to protect the top
> > level object only, as it seems that appending requires creating a
> > new "top-level object". The following example seems to work (full
> > example at https://gist.github.com/Sleepingwell/8588c5ee844ce0242d05).
> > Is this the way you would do it (or at least 'a correct' way)?
>
> You can simply append to a pairlist, so you only need to protect the
> head. Also note that R_NilValue is a constant (in the R sense, not
> the C sense) so it doesn't need protection. I would write a generic
> pairlist builder something like this:
>
> SEXP head = R_NilValue, tail;
>
> void append(SEXP x) {
>     if (head == R_NilValue)
>         R_PreserveObject(head = tail = CONS(x, R_NilValue));
>     else
>         tail = SETCDR(tail, CONS(x, R_NilValue));
> }
>
> void destroy() {
>     if (head != R_NilValue)
>         R_ReleaseObject(head);
> }
>
> Cheers,
> Simon
>
> > struct PolyHolder {
> >     PolyHolder(void) {
> >         PROTECT_WITH_INDEX(currentRegion = R_NilValue, &icr);
> >         PROTECT_WITH_INDEX(regions = R_NilValue, &ir);
> >     }
> >
> >     ~PolyHolder(void) {
> >         UNPROTECT(2);
> >     }
> >
> >     void notifyEndRegion(void) {
> >         REPROTECT(regions = CONS(makePolygonsFromPairList(currentRegion),
> >             regions), ir);
> >         REPROTECT(currentRegion = R_NilValue, icr);
> >     }
> >
> >     template <typename Iter>
> >     void addSubPolygon(Iter b, Iter e) {
> >         REPROTECT(currentRegion = CONS(makePolygon(b, e),
> >             currentRegion), icr);
> >     }
> >
> >     SEXP getPolygons(void) {
> >         return regions;
> >     }
> >
> > private:
> >     PROTECT_INDEX ir, icr;
> >
> >     SEXP currentRegion, regions;
> > };
> >
> > Thanks again,
> > Simon Knapp
> >
> > CONS(newPoly, creates a new object
> >
> > On Sat, Oct 18, 2014 at 2:10 AM, Simon Urbanek wrote:
> >
> > On Oct 17, 2014, at 7:31 AM, Simon Knapp wrote:
> >
> > > Background:
> > > I have an algorithm which produces a large number of small
> > > polygons (of the spatial kind) which I would like to use within R
> > > using objects from sp. I can't predict the exact number of
> > > polygons a-priori, the polygons will be grouped into regions, and
> > > each region will be filled sequentially, so an appropriate C++
> > > 'framework' (for the point of illustration) might be:
> > >
> > > typedef std::pair<double, double> Point;
> > > typedef std::vector<Point> Polygon;
> > > typedef std::vector<Polygon> Polygons;
> > > typedef std::vector<Polygons> Regions;
> > >
> > > struct Holder {
> > >     void notifyNewRegion(void) {
> > >         regions.push_back(Polygons());
> > >     }
> > >
> > >     template <typename Iter>
> > >     void addSubPoly(Iter b, Iter e) {
> > >         regions.back().push_back(Polygon(b, e));
> > >     }
> > >
> > > private:
> > >     Regions regions;
> > > };
> > >
> > > where the reference_type of Iter is convertible to Point. In
> > > practice I use pointers in a couple of places to avoid resizing
> > > in push_back becoming too expensive.
> > >
> > > To construct the corresponding sp::Polygon, sp::Polygons and
> > > sp::SpatialPolygons at the end of the algorithm, I iterate over
> > > the result turning each Polygon into a two column matrix and
> > > calling the C functions corresponding to the 'constructors' for
> > > these objects.
> > >
> > > This is all working fine, but I could cut my memory consumption
> > > in half if I could construct the sp::Polygon objects in
> > > addSubPoly, and the sp::Polygons objects in notifyNewRegion. My
> > > vector typedefs would then all be:
> > >
> > > typedef std::vector<SEXP>
> > >
> > > Question:
> > > What I'm not sure about (and finally my question) is: I will have
> > > datasets where I have more than 10,000 SEXPs in the Polygon and
> > > Polygons objects for a single region, and possibly more than
> > > 10,000 regions, so how do I PROTECT all those SEXPs (noting that
> > > the protection stack is limited to 10,000 and bearing in mind
> > > that I don't know how many there will be before I start)?
> > >
> > > I am also interested in this just out of general curiosity.
Re: [Rd] Holding a large number of SEXPs in C++
On Nov 3, 2014, at 5:34 PM, Simon Knapp wrote:

> Thanks again Simon. I had realised that R_NilValue didn't need
> protection... I just thought it a clean way to make my initial call
> to PROTECT_WITH_INDEX (which I can see now was not required since I
> didn't need the calls to REPROTECT)... and I had not thought of
> appending to the tail.
>
> One final question (and hopefully I don't get too badly burnt): I
> cannot find R_PreserveObject/R_ReleaseObject or SETCDR mentioned in
> "Writing R Extensions". Is there anywhere for a novice like myself to
> find a 'complete' reference to R's useful macros and functions, or do
> I just have to read more source?
>

R_PreserveObject is mentioned in 5.9.1, but it's really just a fleeting
mention. It's not used in typical packages, but it is used heavily
whenever you're interfacing a foreign runtime system (language or
library). What is or is not in the API can vary slightly depending on
whom you ask, but the installed header files are essentially the
candidate set.

Cheers,
Simon

> Thanks again for being so awesome,
> Simon
>
> On Tue, Nov 4, 2014 at 12:47 AM, Simon Urbanek wrote:
>
> > On Nov 2, 2014, at 10:55 PM, Simon Knapp wrote:
> >
> > > Thanks Simon and sorry for taking so long to give this a go. I
> > > had thought of pair lists but got confused about how to protect
> > > the top level object only, as it seems that appending requires
> > > creating a new "top-level object". The following example seems to
> > > work (full example at
> > > https://gist.github.com/Sleepingwell/8588c5ee844ce0242d05). Is
> > > this the way you would do it (or at least 'a correct' way)?
> >
> > You can simply append to a pairlist, so you only need to protect
> > the head. Also note that R_NilValue is a constant (in the R sense,
> > not the C sense) so it doesn't need protection. I would write a
> > generic pairlist builder something like this:
> >
> > SEXP head = R_NilValue, tail;
> >
> > void append(SEXP x) {
> >     if (head == R_NilValue)
> >         R_PreserveObject(head = tail = CONS(x, R_NilValue));
> >     else
> >         tail = SETCDR(tail, CONS(x, R_NilValue));
> > }
> >
> > void destroy() {
> >     if (head != R_NilValue)
> >         R_ReleaseObject(head);
> > }
> >
> > Cheers,
> > Simon
> >
> > > struct PolyHolder {
> > >     PolyHolder(void) {
> > >         PROTECT_WITH_INDEX(currentRegion = R_NilValue, &icr);
> > >         PROTECT_WITH_INDEX(regions = R_NilValue, &ir);
> > >     }
> > >
> > >     ~PolyHolder(void) {
> > >         UNPROTECT(2);
> > >     }
> > >
> > >     void notifyEndRegion(void) {
> > >         REPROTECT(regions = CONS(makePolygonsFromPairList(currentRegion),
> > >             regions), ir);
> > >         REPROTECT(currentRegion = R_NilValue, icr);
> > >     }
> > >
> > >     template <typename Iter>
> > >     void addSubPolygon(Iter b, Iter e) {
> > >         REPROTECT(currentRegion = CONS(makePolygon(b, e),
> > >             currentRegion), icr);
> > >     }
> > >
> > >     SEXP getPolygons(void) {
> > >         return regions;
> > >     }
> > >
> > > private:
> > >     PROTECT_INDEX ir, icr;
> > >
> > >     SEXP currentRegion, regions;
> > > };
> > >
> > > Thanks again,
> > > Simon Knapp
> > >
> > > CONS(newPoly, creates a new object
> > >
> > > On Sat, Oct 18, 2014 at 2:10 AM, Simon Urbanek wrote:
> > >
> > > On Oct 17, 2014, at 7:31 AM, Simon Knapp wrote:
> > >
> > > > Background:
> > > > I have an algorithm which produces a large number of small
> > > > polygons (of the spatial kind) which I would like to use within
> > > > R using objects from sp. I can't predict the exact number of
> > > > polygons a-priori, the polygons will be grouped into regions,
> > > > and each region will be filled sequentially, so an appropriate
> > > > C++ 'framework' (for the point of illustration) might be:
> > > >
> > > > typedef std::pair<double, double> Point;
> > > > typedef std::vector<Point> Polygon;
> > > > typedef std::vector<Polygon> Polygons;
> > > > typedef std::vector<Polygons> Regions;
> > > >
> > > > struct Holder {
> > > >     void notifyNewRegion(void) {
> > > >         regions.push_back(Polygons());
> > > >     }
> > > >
> > > >     template <typename Iter>
> > > >     void addSubPoly(Iter b, Iter e) {
> > > >         regions.back().push_back(Polygon(b, e));
> > > >     }
> > > >
> > > > private:
> > > >     Regions regions;
> > > > };
> > > >
> > > > where the reference_type of Iter is convertible to Point. In
> > > > practice I use pointers in a couple of places to avoid resizing
> > > > in push_back becoming too expensive.
> > > >
> > > > To construct the corresponding sp::Polygon, sp::Polygons and
> > > > sp::SpatialPolygons at the end of the algorithm, I iterate over
> > > > the result turning each Polygon into a two column matrix and
> > > > calling the C functions corresponding to the 'constructors' for
> > > > these objects.
> > > >
> > > > This is all working fine, but I could cut my memory consumption
> > > > in half if I could construct the sp::Polygon objects in
> > > > addSubPoly, and the sp::Polygons objects in notifyNewRegion. My
> > > > vector typedefs would then all be:
> > > >
> > > > typedef std::vector<SEXP>
> > > >
> > > > Question:
> > > > Wh