Yes, it also occurs with WriteXLS version 3.2.1.
This test on several computers always leads to the same error.
Hugo Varet
2013/8/17 Rainer Hurling
> On 13.08.2013 19:40, Hugo Varet wrote:
> > Dear R users,
> >
> > I've just updated the WriteXLS package (on R 3.0.1) and I now have an
> > error
Dear listers,
I am running some OLS regressions on multiply imputed data using Amelia.
I first imputed the data with Amelia.
Then I ran an OLS using Zelig to obtain a table of results accounting for
the multiply imputed data sets, and I'd like to do this for various models.
Finally, I want to output all the mo
At 04:13 17/08/2013, Eunice Chou wrote:
My outcome variable (y) has 3 categories. Is there anything wrong with using
the following code to get a parameter estimate for my bivariate model?
publicfit = glm(y ~ public, data=dataSPSS.vmj, family=binomial)
Have you tried it? What did R tell you?
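For a 3-level outcome, family=binomial silently treats the factor as its first level versus everything else, so a multinomial model is usually what is wanted. A minimal sketch on simulated stand-in data (the real variables are y and public from dataSPSS.vmj; nnet ships with standard R distributions):

```r
library(nnet)
set.seed(1)
## Simulated stand-in for dataSPSS.vmj
d <- data.frame(
  y      = factor(sample(c("low", "mid", "high"), 200, replace = TRUE)),
  public = rnorm(200)
)
fit <- multinom(y ~ public, data = d, trace = FALSE)
coef(fit)  # one row of coefficients per non-reference level of y
```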
Thanks for the input, but it looks like I found a simple solution.
Turns out that if you assign to lists by name, then R doesn't make extra
copies:
> x<-double(10^9)
> mylist<-list()
> system.time(mylist[[1]]<-x)
user system elapsed
2.992 3.352 6.364
> x<-double(10^9)
> mylist<-list(
Hi Christopher,
thanks for your reply. Unfortunately, that's not what I am looking for. I
would like to have a table with the results of the two models (lm.imputed1
and lm.imputed2) in two separate columns.
According to stargazer syntax I should type something like:
stargazer(lm.imputed1, lm.impute
Does this do what you want?
library(Amelia)
library(Zelig)
library(stargazer)
library(xtable)
data(africa)
m = 10
imp1 <- amelia(x = africa,cs="country",m=m)
imp2 <- amelia(x = africa,cs="country",m=m)
lm.imputed1 <- zelig(infl ~ trade + civlib, model="ls",data = imp1)
lm.imputed2 <- zelig(infl ~ trade + civlib, model="ls",data = imp2)
On Friday, 16 August 2013 at 19:35 -0700, Ajinkya Kale wrote:
> I am trying to use the text mining package ... I keep getting this error :
>
> rm(list=ls())
> library(tm)
> sourceDir <- "Z:\\projectk_viz\\docs_to_index"
> ovid <- Corpus(DirSource(sourceDir),readerControl = list(language = "lat"))
What do you mean by results? Do you want just the estimated parameters? And
are you looking for one big table with all the estimated parameters from
all imputation runs?
Chris
On Sat, Aug 17, 2013 at 11:18 AM, Francesco Sarracino wrote:
> Hi Christopher,
> thanks for your reply. Unfortunately,
Oh, and are you looking for just the summarized results over all the imputed
runs? I thought you wanted them from each iteration.
On Sat, Aug 17, 2013 at 11:38 AM, Christopher Desjardins <
cddesjard...@gmail.com> wrote:
> What do you mean by results? Do you want just the estimated parameters?
>
Bill I found a workaround:
f <- ff(formula, lab)
f <- as.formula(gsub("`", "", as.character(deparse(f))))
Thanks for your elegant solution.
Frank
--
Thanks Bill. The problem is that one of the results of convertName might be
'Heading("Age in Years")*age' (this is fo
It contains all text files which were converted from doc, docx, ppt etc.
using libreoffice.
Some of them are non-english text documents.
Sorry, I cannot share the corpus, but if someone can shed light on what
might cause this error then I can try to eliminate those documents if some
specific docs
Hi everyone,
I have encountered a problem while using ncdf to open nc files in R. I
found in the internet several comments in the past but no solution.
I could not find a direct solution, but I found the source of the
problem; in case anyone may know where the solution could be, and an
indirect
Hi Camilo,
you don't say what platform you are running on, but the version of the
underlying netcdf library installed on your machine must have been properly
compiled to enable access to files greater than 2GB. Otherwise, the R
interface to the netcdf library cannot work with such files. You migh
Funny, it works fine if I use VectorSource
ovid <- Corpus(VectorSource(list.files(sourceDir)[1:1253]), readerControl =
list(language = "lat"))
So I tried only executing > DirSource(sourceDir) and that fails with the
error I mentioned earlier. So it's not a problem with Corpus(), which I
thought initially.
I think I know why it works faster: because VectorSource in the above code only
takes the file names as a corpus and not the contents of the files :D duh!
Any suggestions to create a vector source out of contents of the txt files ?
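A base-R sketch of building the vector of file contents (demonstrated on a throwaway directory; substitute the real sourceDir, then hand the result to VectorSource as in the snippet above):

```r
## Throwaway demo directory standing in for the real sourceDir
sourceDir <- file.path(tempdir(), "docs_demo")
dir.create(sourceDir, showWarnings = FALSE)
writeLines("lorem ipsum", file.path(sourceDir, "a.txt"))

files    <- list.files(sourceDir, full.names = TRUE)
contents <- vapply(files,
                   function(f) paste(readLines(f, warn = FALSE), collapse = "\n"),
                   character(1))
## contents (not the file names) is what VectorSource should receive:
## ovid <- Corpus(VectorSource(contents), readerControl = list(language = "lat"))
```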
On Sat, Aug 17, 2013 at 1:59 PM, Ajinkya Kale wrote:
> Funny, it wo
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Has there been any systematic evaluation of which core R functions are safe for
use with multicore? Of current interest, I have tried calling read.table() via
mclapply() to more quickly read in hundreds of raw data files (I have a 24 core
system with 72 GB running Ubuntu, a perfect platform for
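read.table itself keeps no shared state, so calling it from forked workers is generally safe; the practical limit is usually the disk rather than the CPUs. A small self-contained sketch (the throwaway files and core count are illustrative, and mc.cores > 1 does not work on Windows):

```r
library(parallel)
## Throwaway files standing in for the hundreds of raw data files
dir <- file.path(tempdir(), "raw_demo")
dir.create(dir, showWarnings = FALSE)
for (i in 1:4)
  write.table(data.frame(id = i, value = i * 10),
              file.path(dir, paste0("f", i, ".txt")), row.names = FALSE)

files  <- list.files(dir, full.names = TRUE)
tables <- mclapply(files, read.table, header = TRUE, mc.cores = 2)
combined <- do.call(rbind, tables)  # one row per input file here
```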
Hi dear R-users,
I encountered an interesting pattern. Take, for example, the function
combn(): I copied and pasted the function definition and saved it as a new
function named combn2() (see the end of this email). As it turned out,
combn2() seems to be substantially slower than the original function
On 18.08.2013 01:05, Xiao He wrote:
> Hi dear R-users,
> I encountered an interesting pattern. Take for example the function
> combn(), I copied and pasted the function definition and saved it as a new
> function named combn2() (see the end of this email). As it turned out,
> combn2() seems to be substa
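One likely culprit, worth verifying: since R 2.14 the base and recommended packages ship byte-compiled, while a function pasted in at the prompt is plain interpreted code (R 3.0 did not yet JIT-compile by default). compiler::cmpfun byte-compiles the copy; a sketch with a toy loop-heavy function standing in for combn2():

```r
library(compiler)
f  <- function(x) { s <- 0; for (v in x) s <- s + v; s }  # interpreted copy
fc <- cmpfun(f)                                           # byte-compiled copy
system.time(f(1:1e6))
system.time(fc(1:1e6))  # typically faster on interpreter-only R versions
```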
In most threaded multitasking environments it is not safe to perform IO in
multiple threads. In general you will have difficulty performing IO in parallel
processing so it is best to let the master hand out data to worker tasks and
gather results from them for storage. Keep in mind that just bec
If you have properly installed the plantbreeding package, then each time
you start R, you need to type:
library(plantbreeding)
before you can access the data or the functions in the package.
Kevin
On Sat, Aug 17, 2013 at 12:18 AM, Waqas Shafqat wrote:
> -- Forwarded message --
Please read the details offered in the resources below.
On Aug 17, 2013, at 2:27 PM, Tony Paredes wrote:
Hello R users,
I have recently begun a project to analyze a large data set of approximately
1.5 million rows and 9 columns. My objective is to locate particular
subsets within this data, i.e., take all rows with the same value in column 9
and perform a function on that subset. It was sugg
It would be helpful if
- you give us some sample data:
dput( head( myData ) )
- tell us what kind of function you want to apply, or
what the result you want to achieve should look like
- show us what you have done so far,
and where you are stuck
On Saturday 17 August 2013 19:33:08 Dyl
Hi,
In addition to Rainer's suggestions (which are to give a small example
of what your input data look like and an example of what you want to
output), given the size of your input data, you might want to try the
data.table package instead of plyr::ddply -- especially while
you are exploring
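A minimal data.table sketch of the split-by-column-9-and-apply pattern (the toy data and column names V1/V9 are stand-ins for the real 1.5-million-row table, and sum() stands in for whatever function is applied per subset):

```r
library(data.table)
dt <- data.table(V1 = c(1, 2, 3, 4),
                 V9 = c("a", "a", "b", "b"))  # V9: the grouping column
res <- dt[, .(total = sum(V1)), by = V9]      # one result row per group
```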