, idvar = "Subject",
timevar = "time", v.names = "conc", sep= "_")
names(wide)
There are some obvious work-arounds and alternatives, but it would be nice
to have this sorted. Can anyone help?
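For reference, a minimal self-contained call of the shape being discussed (the data frame below is invented for illustration):

```r
# toy long-format data: two subjects measured at three times
long <- data.frame(Subject = rep(1:2, each = 3),
                   time    = rep(1:3, 2),
                   conc    = c(1.2, 2.1, 3.3, 0.9, 1.8, 2.7))

wide <- reshape(long, direction = "wide", idvar = "Subject",
                timevar = "time", v.names = "conc", sep = "_")
names(wide)
## [1] "Subject" "conc_1"  "conc_2"  "conc_3"
```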
Bob

--
Bob O'Hara
Institutt
Rui,
Many thanks for your reply and coding, I was not
expecting so much work was required. It worked perfectly.
The only thing I needed to do, was create a Temp file in the Documents folder.
Thanks again,
Bob
At 03:52 PM 7/26/2023, Rui Barradas wrote:
At 23:06 on 25/07/2023, Bob Green wrote:
> x = readtext("http://home.brisnet.org.au/~bgreen/Data/Dir/()")
Error in download_remote(file, ignore_missing, cache, verbosity) :
Remote URL does not end in known extension. Please download the
file manually.
Any suggestions are appreciated.
Bob
___
d_q=handle_url
So {httr} relies on libcurl's URL escaping (the de facto standard) for
all of its URL handling.
-boB
On Wed, Feb 19, 2020 at 10:36 AM Roy Mendelssohn - NOAA Federal via
R-help wrote:
>
> Thanks. Yes. I did that, it also has a verbose mode so that I cou
te and
debug the function on its own, and then use replicate() to run the
loop (there are also functions like vapply() and apply() if you need
to pass different arguments into the function for different
iterations).
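A tiny sketch of that pattern (the simulated task here is invented):

```r
# write and debug the single-iteration function on its own first ...
one_run <- function() mean(rnorm(10))

# ... then let replicate() handle the looping
results <- replicate(100, one_run())
length(results)
## [1] 100
```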
Bob
On Tue, 6 Aug 2019 at 11:28, Tolulope Adeagbo wrote:
>
> Thanks guys
top the repeat (something I never knew
existed in R!).
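For anyone else meeting `repeat` for the first time, a minimal sketch (the counter and stopping condition are invented):

```r
i <- 0
repeat {
  i <- i + 1
  if (i >= 5) break  # repeat has no condition of its own; break ends it
}
i
## [1] 5
```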
Bob
On Tue, 6 Aug 2019 at 10:54, Tolulope Adeagbo wrote:
>
> Hey guys,
>
> I'm trying to write a loop that will repeat an action for a stipulated
> number of times. I have written some code but i think i'm missin
irectly to
end users then I'd highly suggest seeking legal assistance from a firm that
specializes in reviewing licensing situations. They abound these days and it'll
ultimately be worth the expense (it shouldn't be too bad).
-Bob
> On Jul 24, 2019, at 6:07 PM, Andrew Robinso
ex and Maturity have the same effect in each population. If
you think their effects vary between populations, then you would want
lm (PAC ~ Season + Population*Sex + Population*Maturity)
which is the same as lm (PAC ~ Season + Population*(Sex + Maturity)).
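The equivalence of the two formulas is easy to check on toy data (the data frame below is invented; only the formula algebra matters):

```r
set.seed(1)
d <- data.frame(PAC        = rnorm(40),
                Season     = gl(2, 20),
                Population = gl(2, 10, 40),
                Sex        = gl(2, 5, 40),
                Maturity   = gl(2, 2, 40))

f1 <- lm(PAC ~ Season + Population*Sex + Population*Maturity, data = d)
f2 <- lm(PAC ~ Season + Population*(Sex + Maturity), data = d)

all.equal(coef(f1), coef(f2))  # both expand to the same terms
```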
Bob
On Tue, 2 Jul 2019 at 15:27, Elef
,K=K["k"])
fit <- auto.arima(data, xreg = cbind(z1, z2, z3), seasonal = FALSE)
fit$aicc
}
# smaller MaxOrders used so if you run it like this, it won't take 5 hours
MaxOrders <- expand.grid(i = 1:3, j = 1:7, k = 1:8)
AICc <- apply(MaxOrders, 1, FitModel, data = demand)
Bob
meric() around the right hand side. Or hope that R does type
conversion for you when you need it)
HTH
Bob
On 24 April 2018 at 09:30, Luca Meyer wrote:
> Hi,
>
> I am trying to debug the following code:
>
> for (i in 1:10){
> t <- paste("d0$V",i,sep="")
>
4000:6000 gives you 4000, 4001, ..., 6000. I suspect you want
population= c(seq(4000, 6000, length=5), seq(3500, 4300, length=5),
seq(3000, 3200, length=5))
Bob
On 20 September 2017 at 17:07, Shivi Bhatia wrote:
> Hi Team,
>
> I using the syntax as:
>
> data.df<- data.frame(
stions and even more thanks to the CRAN team for a
speedy onboarding.
-Bob
___
R-packages mailing list
r-packa...@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages
__
R-help@r-project.org mailing
: look
at the anova() function.
Bob
On 29 June 2017 at 11:13, Benoît PELE wrote:
> Hello,
>
> i am a newby on R and i am trying to make a backward selection on a
> binomial-logit glm on a large dataset (69000 lines for 145 predictors).
>
> After 3 days working, the stepAIC functi
ming, and put some checks in the code, to make sure your
factors have the right number of levels.
Bob
On 9 May 2017 at 13:36, wrote:
> Hi Bob,
>
> many thanks for your reply.
>
> I have read the documentation. In my current project I use "item
> batteries" for dimensions
act". It now doesn't
care that they were 1 and 0, because you've told it to change the
labels.
If you want to filter by the original values, then don't change the
labels (or at least not until after you've filtered by the original
labels), or convert the filter to the new
s and enhancement requests are most welcome as
GitHub issues.
-Bob
rs over time and among demographic groups. Their data is
usually updated monthly.
Code (with extended examples in the README) is at:
https://github.com/hrbrmstr/epidata
Issues, enhancements (etc) are — as always — welcome.
-Bob
Aye, but this:
some_dates <- as.POSIXct(c("2015-12-24", "2015-12-31", "2016-01-01",
"2016-01-08"))
(year_week <- format(some_dates, "%Y-%U"))
## [1] "2015-51" "2015-52" "2016-00" "2016-01"
(year_week_day <- sprintf("%s-1", year_week))
## [1] "2015-51-1" "2015-52-1" "2016-00-1" "2016-01
Perhaps https://cran.r-project.org/web/packages/bcrypt/index.html
might be of assistance.
If not, drop a note back to the list as it'll be trivial to expand on
that to give you an R alternative to Perl.
On Mon, Nov 7, 2016 at 5:47 PM, MacQueen, Don wrote:
> I have a file containing encrypted con
> quicker than sapply(), uses less memory, gives the right results
> when given a vector of length 0, and gives an error when FUN does
> not return the specified sort of result.
>
>
> Bill Dunlap
> TIBCO Software
> wdunlap tibco.com
>
> On Mon, Oct 31, 2016 at 7:3
which(purrr::map_dbl(buylist, slot, "reqstock") > 100)
or
which(sapply(buylist, slot, "reqstock") > 100)
ought to do the trick.
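As a self-contained sketch (the class definition mirrors the question; the objects and values in `buylist` are invented):

```r
library(methods)

setClass("buyer", representation(reqstock = "numeric", buyout = "numeric"))

# a toy list of buyer objects
buylist <- list(a = new("buyer", reqstock = 50,  buyout = 1),
                b = new("buyer", reqstock = 150, buyout = 2),
                c = new("buyer", reqstock = 200, buyout = 3))

# sapply() passes "reqstock" on to slot() for each object
which(sapply(buylist, slot, "reqstock") > 100)
## b c
## 2 3
```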
On Mon, Oct 31, 2016 at 10:09 AM, Thomas Chesney
wrote:
> I have the following object
>
> setClass("buyer",
> representation(
> reqstock="numeric",
> buyout="
Can you tell us where you got the file from and perhaps even send a
link to the file? I know of at least 11 types of files that use `.bin`
as an extension which are all different types of data with different
binary formats.
On Tue, Oct 25, 2016 at 5:40 PM, Bob Rudis wrote:
> I'm afra
I'm afraid we'll need more information than that since the answer from
many folks on the list to such a generic question is going to be a
generic "yes".
What's the source of the binary files? If you know the type, there may
even be an R package for it already.
On Tue, Oct 25, 2016 at 5:28 PM, lil
I ran traceroutes & BGP traces from Marseille & Paris routers to that
CRAN IPv4 address (it's 10hrs after your mail, tho) and there are no
network errors. You can use any CRAN mirror, though. You aren't
limited to that one.
On Mon, Oct 24, 2016 at 9:49 AM, Etienne Borocco
wrote:
> I still have the
I think your tool is a bit overzealous. VirusTotal -
https://virustotal.com/en/file/5fd1b2fc5c061c0836a70cbad620893a89a27d9251358a5c42c3e49113c9456c/analysis/
&
https://virustotal.com/en/file/e133ebf5001e1e991f1f6b425adcfbab170fe3c02656e3a697a5ebea961e909c/analysis/
- shows no sign of any malware
`stringi::stri_count()`
I know that the `stringr` pkg saves some typing (it wraps the
`stringi` pkg), but you should really just use the `stringi` package.
It has many more very useful functions with not too much more typing.
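For instance (the string and pattern here are invented; requires the stringi package):

```r
library(stringi)

# count non-overlapping occurrences of a fixed pattern
stri_count_fixed("banana", "an")
## [1] 2
```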
On Thu, Oct 20, 2016 at 5:47 PM, Jan Kacaba wrote:
> Hello dear R-help
purrr::map(paste0(letters, collapse=""), ~purrr::map2_chr(.,
1:nchar(.), ~substr(.x, 1, .y)))[[1]]
seems to crank really fast at least on my system
what did you try that was slow?
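For a single string, base R's vectorized `substring()` gives the same prefixes without any packages (a sketch):

```r
s <- paste0(letters, collapse = "")

# one prefix per stop position 1..nchar(s)
prefixes <- substring(s, 1, seq_len(nchar(s)))
head(prefixes, 3)
## [1] "a"   "ab"  "abc"
```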
On Wed, Oct 19, 2016 at 11:01 AM, Witold E Wolski wrote:
> Is there a build in function, which creates n suffixes o
If those are in "ndjson" files or are indeed single records, `ndjson`
functions will be a few orders of magnitude faster and will produce
perfectly "flat" data frames. It's not intended to be a replacement
for `jsonlite` (a.k.a. the quintessential JSON pkg for R) but it's
tailor made for making qui
You can do something like:
https://www.simple-talk.com/sql/performance/collecting-performance-data-into-a-sql-server-table/
and avoid the R step by letting the server log the perf data directly.
On Mon, Oct 17, 2016 at 6:03 AM, jim holtman wrote:
> within the VBS script you can easily access remote computers.
>
>
Having worked in big pharma for over 10 years, I'm _fairly_ certain
AstraZeneca can afford some paid R consulting.
On Fri, Oct 14, 2016 at 2:14 PM, David Winsemius wrote:
>
>> On Oct 14, 2016, at 12:05 AM, Vijayakumar, Sowmya
>> wrote:
>>
>> Hi R-Help team,
>>
>>
>> Greeting from AstraZeneca In
Ugly idea/option, but you could base64 encode the R script (solely to
avoid the need to do string quoting) and have that string in the
source of the R.net code, then pass it in to the eval portion or write
it out to a temp dir and pass that to the eval portion of the code.
That way the script is em
Thanks - strangely capabilities("ICU") is FALSE (I'm using ubuntu
16.04, and icu-devtools is installed). So I guess I'll conclude that
there's something odd, but I don't want to delve into these issues (a
new locale & new computer for me in a couple of months).
Yes, thanks. That seems to be it:
thing <- c("M1", "M2", "M.1", "M.2")
> sort(thing)
[1] "M1" "M.1" "M2" "M.2"
The only documentation I can find is from ?Comparison:
"Collation of non-letters (spaces, pu
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.1 LTS
locale:
 [1] LC_CTYPE=en_US.UTF-8    LC_NUMERIC=C
 [3] LC_TIME=en_GB.UTF-8     LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_GB.UTF-8    LC_NAME=C
 [9] LC_ADDRESS=C
Take a look at tidyr::separate()
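A quick sketch of what `separate()` does (the column name, values, and separator below are invented; requires the tidyr package):

```r
library(tidyr)

df <- data.frame(resp = c("a;b", "c;d"))

# split one character column into two at the separator
separate(df, resp, into = c("first", "second"), sep = ";")
```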
On Fri, Oct 7, 2016 at 12:57 PM, silvia giussani
wrote:
> Hi all,
>
>
>
> could you please tell me if you find a solution to this problem (in
> Subject)?
>
>
>
> June Kim wrote:
>
>> Hello,
>>
>> I use google docs' Forms to conduct surveys online. Multiple
st-mortem examination: he may be able to say
> what the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does not
> ensure that a reasonable answer can be extracted
(s/marplot/barplot)
On Wed, Oct 5, 2016 at 10:35 AM, Bob Rudis wrote:
> No need to bring in so many dependencies for a simple ggplot2 marplot:
>
> ds <- stack(ds)
> ggplot(ds[ds$values==1,], aes(ind)) + geom_bar()
>
> On Wed, Oct 5, 2016 at 10:17 AM, Thierry Onkel
errors and helped debug your issue, but it
went in flawlessly.
-Bob
On Tue, Oct 4, 2016 at 8:31 PM, Rolf Turner wrote:
> On 05/10/16 12:56, ProfJCNash wrote:
>
>> Can you build/install the source package? I had a problem once where my
>> libraries were "too recent"
efault plotting for this weather API.
https://cran.r-project.org/web/packages/darksky/index.html
Issues/enhancement requests are most welcome at each pkg's GH issues page.
-Bob
It's fairly straightforward with help from the purrr package:
library(purrr)
map_df(OB1, function(x) {
if (length(x) == 0) {
data.frame(id=NA_character_, nam=NA_character_, stringsAsFactors=FALSE)
} else {
data.frame(id=x[1], nam=names(x), stringsAsFactors=FALSE)
}
}, .id="V1")
O
The rvest/httr/curl trio can do the cookie management pretty well. Make the
initial connection via rvest::html_session() and then hopefully be able to
use other rvest function calls, but curl and httr calls will use the cached
in-memory handle info seamlessly. You'd need to store and retrieve cooki
You should probably pick one forum (here or SO:
http://stackoverflow.com/questions/39547398/faster-reading-of-binary-files-in-r)
rather than cross-posting to all of them.
On Sat, Sep 17, 2016 at 11:04 AM, Ismail SEZEN
wrote:
> I noticed same issue but didnt care much :)
>
> On Sat, Sep 17, 2016, 18:01 ji
Base:
Filter(Negate(is.na), sapply(regmatches(dimInfo, regexec("HS_(.{1})",
dimInfo)), "[", 2))
Modernverse:
library(stringi)
library(purrr)
stri_match_first_regex(dimInfo, "HS_(.{1})")[,2] %>%
discard(is.na)
They both use capture groups to find the matches and return ju
pretty sure you just missed the `{` at the beginning of the `function`
definition block.
On Sun, Sep 4, 2016 at 7:38 AM, Michael Dewey
wrote:
> A useful rule is to fix the first error you understand and hope that the
> others go away.
>
> On 04/09/2016 04:05, Tamar Michaeli wrote:
>
>> Any help
if the files are supposed to be "1r.xlsx", "2r.xlsx" (etc) then you need to
ensure there's a "/" before it.
It's better to use `file.path()` to, well, build file paths since it will
help account for differences between directory separators on the various
operating systems out there.
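For example (the directory name is invented):

```r
# file.path() inserts the platform file separator for you
file.path("out", paste0(1:3, "r.xlsx"))
## [1] "out/1r.xlsx" "out/2r.xlsx" "out/3r.xlsx"
```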
On Wed, Aug 3
Ulrik: you can absolutely read from a URL in read.csv() with that syntax.
The error `## Error in attach(survey): object 'survey' not found` suggests
that the OP mis-typed something in the `survey` name in the assignment from
`read.csv()`.
However, the OP has quite a bit more to be concerned about
uap-r since I'd eventually like to replace
this with that.
-Bob
https://cran.rstudio.com/web/packages/abbyyR/index.html
https://github.com/greenore/ocR
https://electricarchaeology.ca/2014/07/15/doing-ocr-within-r/
that was from a Google "r ocr" search. So, yes, there are options.
On Tue, Jul 26, 2016 at 6:43 PM, Achim Zeileis wrote:
> On Wed, 27 Jul 2016,
Valid parameters for the form would be super-helpful.
On Mon, Jul 25, 2016 at 3:52 PM, Ulrik Stervbo wrote:
> Hi Christofer,
>
> If you can load all the data into R you don't need to query the website -
> you simply filter the data by your dates.
>
> I think that's the easiest solution.
>
> Best
Ideally, you would use a more functional programming approach:
minimal <- function(rows, cols){
x <- matrix(NA_integer_, ncol = cols, nrow = 0)
for (i in seq_len(rows)){
x <- rbind(x, rep(i, 10))
}
x
}
minimaly <- function(rows, cols){
x <- matrix(NA_integer_, ncol = cols, nrow = 0)
use `gsub()` after the `as.character()` conversion to remove
everything but valid numeric components from the strings.
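A sketch of that idea (the sample strings are invented); the character class keeps digits, decimal points, and minus signs:

```r
x <- c("12.3a", "-4x", "0.0?")

# strip everything that is not part of a number, then convert
as.numeric(gsub("[^0-9.-]", "", x))
## [1] 12.3 -4.0  0.0
```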
On Wed, Jul 13, 2016 at 6:21 AM, roslinazairimah zakaria
wrote:
> Dear David,
>
> I got your point. How do I remove the data that contain "0.0?".
>
> I tried : balok <- cbind(ba
I'll dig into that (was hoping the small feature addition wld cause
enhanced feature requests :-)
On Tue, Jul 5, 2016 at 1:02 PM, John wrote:
> Thank you, David and Bert, for the info.
> Thank you, Bob, for this excellent function. Allow me to request a feature:
> You highlighted
I just added `docx_extract_all_cmnts()` (and a cpl other
comments-related things) to the dev version of `docxtractr`
(https://github.com/hrbrmstr/docxtractr). You can use
`devtools::install_github("hrbrmstr/docxtractr")` to install it.
There's an example in the help for that function.
Give it a go
you also don't need to do a merger if you use a base `geom_map()`
layer with the polygons and another using the fill (or points, lines,
etc).
On Fri, Jun 17, 2016 at 5:08 PM, MacQueen, Don wrote:
> And you can check what David and Jeff suggested like this:
>
> intersect( df$COUNTRY, world_map$reg
Did you try:
cor(mat, method="kendall", use="pairwise")
That only provides the matrix (so the equiv of the $r list component),
but that seems to be all you need.
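A minimal sketch (the matrix is invented):

```r
set.seed(42)
mat <- matrix(rnorm(20), ncol = 2)

# pairwise-complete Kendall correlation matrix
cor(mat, method = "kendall", use = "pairwise")
```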
On Fri, Jun 17, 2016 at 5:47 AM, Shane Carey wrote:
> Hi,
>
> I was hoping someone could help me. I was wondering are there any l
a few packages that also
help with reading "binary" data. But, without knowing more specifics,
that's about as much direction as any of us wld be able to give.
-Bob
On Sat, Jun 11, 2016 at 9:06 PM, Fahman Khan via R-help
wrote:
> Good Evening,
> Just started learning R and one
nsider binning data and using a discrete fill (IMO that's usually
a better choice for most choropleths).
-Bob
On Thu, Jun 2, 2016 at 5:37 AM, francesca Pancotto
wrote:
> Dear Users
> I am very new to the use of ggplot. I am supposed to make a plot of
> Italian provinces in which I ha
You can use gsub() instead of sub()
On Fri, May 27, 2016 at 11:10 AM, Jun Shen wrote:
> Dear list,
>
> Say I have a data frame
>
> test <- data.frame(C1=c('a,b,c,d'),C2=c('g,h,f'))
>
> I want to replace the commas with semicolons
>
> sub(',',';',test$C1) -> test$C1 will only replace the first com
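The difference in one line (the string is invented):

```r
x <- "a,b,c,d"
sub(",", ";", x)    # first match only
## [1] "a;b,c,d"
gsub(",", ";", x)   # all matches
## [1] "a;b;c;d"
```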
What you are doing wrong is both trying yourself and asking others to
violate Google's Terms of Service and (amongst other things) get your
IP banned along with anyone who aids you (or worse). Please don't.
Just because something can be done does not mean it should be done.
On Tue, May 24, 2016 at
stackoverflow.com/questions/27080920/how-to-check-if-page-finished-loading-in-rselenium>
would probably also be better (waiting for a full page load signal),
but I try to not use [R]Selenium at all if it can be helped.
-Bob
On Wed, May 11, 2016 at 2:00 PM, boB Rudis wrote:
> Hey David,
>
a on to a single page but requires a server call for the next 10).
I also keep firefox scarily out-of-date (back in the 33's rev) b/c I
only use it with RSelenium (not a big fan of the browser). Let me
update to the 46-series and see if I can replicate.
-Bob
On Wed, May 11, 2016 at 1:48 PM, D
}
ret
}) -> tabs
final_dat <- bind_rows(tabs)
final_dat <- final_dat[, c(1, 2, 5, 7, 8, 13, 14)] # the cols you want
final_dat <- final_dat[complete.cases(final_dat),] # take care of NAs
remDr$quit()
Prbly good ref code to have around, but you can gra
I don't fully remember, but I doubt httr::content() ever returned a
character vector without using the `as="text"` parameter. Try
switching that line to:
html <- content(r, as="text")
On Tue, May 10, 2016 at 3:27 AM, Luca Meyer wrote:
> Hi Jim,
>
> Thank you for your suggestion. I have act
Or grab https://cran.r-project.org/web/checks/check_results.rds and
read it w/o the need for scraping.
On Sat, Apr 23, 2016 at 10:43 AM, David Winsemius
wrote:
>
>> On Apr 23, 2016, at 6:56 AM, David Winsemius wrote:
>>
>>
>>> On Apr 22, 2016, at 11:51 AM, mylistt...@gmail.com wrote:
>>>
>>> Dea
erage" color determination.
-Bob
On Sat, Apr 16, 2016 at 12:03 PM, Duncan Murdoch
wrote:
> On 16/04/2016 8:47 AM, Atte Tenkanen wrote:
>>
>> Hi,
>>
>> How would you calculate the "mean colour" of several colours, for
>> example c("#FF7C00"
Yes. Yes. That info is on their site. That info is on their site. They
have paid support for their customers and
non-Microsoft-R-platform-dependent packages will (most likely) still
be answered by the community.
This is just a re-branding and expansion of what was Revolution R
which has been aroun
Hey Bob,
If you're interested, I'd be glad to see what I can do to make doing
UDP comms from R accessible across platforms without the need for a
`system()` call. Mind shooting me a private e-mail to see what your
needs are so I can try to generalize a solution from them?
-Bob
On
>'It's not unlikely that you will need a copy of "Writing R Extensions" at
>hand.'
+ a few bottles of Scotch.
It might be worth approaching rOpenSci https://ropensci.org/ to take
over resurrection/maintenance of this.
But, it seems others are in your predicament:
https://www.researchgate.net/p
Will you be able to fix the issues that crop up (or even notice the
issues) for these unsupported packages? (There _is_ a reason they
aren't in CRAN anymore.) That particular one (which is, indeed,
archived in CRAN) also depends on Rstem, which is also archived on
CRAN, and now (according to CRAN)
What would cause you to think this mailing list is a free code-writing
service? Perhaps post your question on Amazon's Mechanical Turk
service?
Alternatively: purchase a license for Shiny Server Pro.
On Tue, Feb 23, 2016 at 12:45 AM, Venky wrote:
> Hi R users,
>
> Please anyone help me how to cr
I believe you mean - "tcltk2" which refers to Tcl/Tk Additions -
https://cran.r-project.org/web/packages/tcltk2/index.html
Kind regards,
Bob
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Divakar Reddy
Sent: Thursday, February 18, 2016 3:0
rtest'
In addition: Warning message:
package 'caret' was built under R version 3.2.3
Error: package or namespace load failed for 'caret'
Any thoughts on to resolve this error are greatly appreciated.
Best,
Bob
Assuming that's qplot from ggplot2, it's trying to pass span to the
Point Geom which doesn't recognize it. I highly suggest moving away
from using qplot and working with the geom_* and stat_* functions
directly with ggplot().
On Sat, Jan 23, 2016 at 8:46 AM, Jeff Reichman wrote:
> R-Users
>
>
>
> Anyone se
houldn't be using readShapeSpatial anyway, as it has
>> a habit of not reading the coordinate system in the .prj file. I find
>> it much easier to use `raster::shapefile` which *does* read the
>> coordinate system *and* gives a more explicit error message for a
>> missing shap
Agreed with the others. After finding that shapefile and getting it to
work you are definitely not in the proper working directory.
On Thu, Jan 21, 2016 at 8:40 PM, David Winsemius wrote:
>
>> On Jan 21, 2016, at 4:39 PM, Amoy Yang via R-help
>> wrote:
>>
>> Any advice for the following errors?
Aye. You can make source/editor windows consume the entire area or
have them as separate windows and can define a consistent line-ending
vs platform native (I run RStudio Preview and [sometimes] dailies and
can confirm these are in there). The addition of full R
(C/C++/HTML/javascript/etc) code dia
Here you go Ista: https://atom.io/packages/repl (Atom rly isn't bad
for general purpose data sci needs, I still think RStudio is the best
environment for working with R projects).
On Thu, Jan 21, 2016 at 12:48 PM, Ista Zahn wrote:
> On Jan 21, 2016 12:01 PM, "Philippe Massicotte"
> wrote:
>>
>>
If you don't want to run RStudio, Sublime Text has both great R code
syntax highlighting/formatting and a REPL mode for an interactive
console in-editor.
Atom also has decent R support.
They both play well with "Dash" which is an alternative way (separate
app) to lookup R docs on OS X.
On Wed, J
Thanks, Thierry & Duncan. I'll go down the survival analysis route.
The data are for central American epiphytes, so not your usual
species. Visually there's definitely differences in the times of
germination, but not in eventual germination, so that's
straightforward.
Bob
On
e categories are interval
censored? Or is it easier to go straight to a full interval-censored
survival analysis?
Bob
--
Bob O'Hara
Biodiversity and Climate Research Centre
Senckenberganlage 25
D-60325 Frankfurt am Main,
Germany
Tel: +49 69 798 40226
Mobile: +49 1515 888 5440
WWW: http
getting
shoehorned into a data.frame. That happens more often than I'd like in
modern API calls (really complex/nested JSON being returned).
On Thu, Jan 14, 2016 at 3:34 AM, Martin Maechler
wrote:
>>>>>> boB Rudis
>>>>>> on Tue, 12 Jan 2016 13:51:50 -0500
I wonder if something like:
format.list <- function(x, ...) {
rep(class(x[[1]]), length(x))
}
would be sufficient? (prbly needs more 'if's though)
On Tue, Jan 12, 2016 at 12:15 PM, Jenny Bryan wrote:
> Is there a general problem with printing a data.frame when it has a
> list-column of S4 obj
I am working to understand the same issues with my datasets.
Adapting Dr Zeileis' posting I have written a humble function.
It creates a two-cluster beta mixture on a vector of data.
It seems to be working well on my datasets.
You are welcome to try on yours.
regards,
Bob
_
bi.modal
Do you have any code? Any more logs from the error? It's hard to help
when you've provided little more than an error message. What does the
output of:
library(tm)
docs <- c("This is a text.", "This another one.")
(vs <- VectorSource(docs))
generate?
On Wed, Dec 30, 2015 at 2:32 PM, Davi
As answered here: http://stackoverflow.com/a/1444/1457051
palette(c("red", "blue", "orange"))
par(lty=3)
plot(fit,ylim=c(0,1),xlim=c(0,2000))
though, as indicated in that post, you'll need to customize the
survrec:::plot.survfitr function to do more detailed customization.
On Sun
The figures should be saved somewhere. e.g. if you have x.Rmd, you
should have a X_files/ folder with subfolders for the figures (e.g.
X-html or X-latex). At least that's what I have.
Bob
On 20 October 2015 at 18:18, Witold E Wolski wrote:
> I am running r-markdown from r-studio and ca
I use it daily (hourly, really) on 10.11 (including the new betas). No issues.
On Mon, Oct 5, 2015 at 10:03 AM, R Martinez wrote:
> Has anyone tried to use R 3.2.2 on a Mac running OS 10.11 El Capitan? Did it
> work? Were any problems installing and running it?
>
> Thanks in advance,
>
> Raul Ma
The "
> observation_start="2015-09-01" observation_end="2015-09-01"
> units="lin" output_type="1" file_type="xml"
> order_by="observation_date" sort_order="asc" count="1" offset="0"
> limit="10">
> date="2015-09-01" value="0.46"/>
> '
>
> doc <- read_xml(txt)
> xml_attr(xml_find_all(doc, "//o
This is how (one way) in both the xml2 package and XML package:
library(xml2)
library(XML)
# minimal example document (the original XML was stripped by the archive)
txt <- '<observations><observation date="2015-09-01" value="0.46"/></observations>'
doc <- read_xml(txt)
xml_attr(xml_find_all(doc, "//observation"), "value")
doc1 <- xmlParse(txt)
xpathSApply(doc1, "//observation", xmlGetAttr, "value")
On Mon, Sep 21, 2015 at 2:01 PM,
Try increasing the memory for pandoc via knitr YAML options:
---
title: "TITLE"
output:
html_document:
pandoc_args: [
"+RTS", "-K64m",
"-RTS"
]
---
ref: http://stackoverflow.com/a/28015894/1457051
you can bump up those #'s IIRC, too, if they don't work at first.
On Thu, Aug
Looks like you can get what you need from
http://www.nseindia.com/homepage/Indices1.json on that page.
On Tue, Aug 25, 2015 at 2:23 PM, Bert Gunter wrote:
> This is not a simple question. The data are in an html-formatted web
> page. You must "scrape" the html for the data and read it into an R
>
Here's one way in base R:
df <- data.frame(id=c("A","A","B","B"),
first=c("BX",NA,NA,"LF"),
second=c(NA,"TD","BZ",NA),
third=c(NA,NA,"RB","BT"),
fourth=c("LG","QR",NA,NA))
new_df <- data.frame(do.call(rbind, by(df, df$id, functi
https://github.com/hadley/adv-r is how it was done.
On Thu, Aug 6, 2015 at 8:33 AM, Bert Gunter wrote:
> I would have thought that the first place to look was R Studio support
> site. You will find a lot of (Imo well done) docs there as well as links to
> Hadley's and Yihui's books and online doc
You should use ggmap::revgeocode (it calls google's api) and google
will rate-limit you. There are also packages to use HERE maps
geo/revgeo lookups
http://blog.corynissen.com/2014/10/making-r-package-to-use-here-geocode-api.html
and the geonames package has GNfindNearestAddress, so tons of options
You can go to the package directory:
cd /some/path/to/package
and do
R CMD INSTALL .
from a command-line there.
Many github-based packages are also made using RStudio and you can
just open the .Rproj file (i.e. load it into R studio) and build the
package there which will install it.
() first to see if it works. If it does then the problem is
elsewhere, so use find("niche.equivalency.test") to find out what
package that function is in, and email the maintainer of that package
for help.
Oh, and also read the posting guide:
<http://www.r-project.org/posting-guid
You might want to read Amazon's terms of service before crawling their
site:
http://www.amazon.in/gp/help/customer/display.html/ref=footer_cou/276-8549425-3823542?ie=UTF8&nodeId=200545940
On Tue, Jun 30, 2015 at 3:33 AM, Abhinaba Roy wrote:
> Hi R helpers,
>
> I want to crawl the amazon.in websi
You can do something like:
aaa <- function(data, w) {
  if (is.numeric(w)) {
    out <- mean(w)
  } else {
    out <- mean(data[, w])
  }
  return(out)
}
(there are some typos in your function you may want to double check, too)
On Tue, Jun 23, 2015 at 5:39 PM,
rid.arrange'-like
functionality for laying out multiple charts.
-Bob
This look similar to snow data I used last year:
https://github.com/hrbrmstr/snowfirst/blob/master/R/snowfirst.R
All the data worked pretty well.
On Tue, Jun 16, 2015 at 3:21 PM, jim holtman wrote:
> Here is an example of reading in the data. After that it is a data frame
> and should be able t