Hello all,
I would like to add two or maybe three extra "for" loops: one for changing
beta_x over the values 0.00 and 0.20, and the others for changing phi_x and
phi_y over the values 0.00, 0.30 and 0.90.
Does anyone know how to implement this?
library(MASS)
library(forecast)
library(lmtest)
library("dyn")
lib
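Not knowing what the rest of the script looks like, one minimal sketch is to
wrap the existing simulation in a function (run_sim() below is a hypothetical
name standing in for the code you already have) and nest three for loops over
the parameter values:

beta_x_values <- c(0.00, 0.20)
phi_values    <- c(0.00, 0.30, 0.90)

results <- list()
for (beta_x in beta_x_values) {
  for (phi_x in phi_values) {
    for (phi_y in phi_values) {
      label <- sprintf("beta_x=%.2f phi_x=%.2f phi_y=%.2f", beta_x, phi_x, phi_y)
      # run_sim() is assumed to take the three parameters as arguments
      results[[label]] <- run_sim(beta_x = beta_x, phi_x = phi_x, phi_y = phi_y)
    }
  }
}

expand.grid(beta_x = beta_x_values, phi_x = phi_values, phi_y = phi_values)
gives the same set of combinations as a data frame, if looping over its rows
is more convenient.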
I would just move the row.names to a column, and use Reduce instead of
do.call. Like this:
mylist <- lapply(mylist, function(x) data.frame(row = rownames(x), x))
Reduce(function(x, y){merge(x, y, by = "row", all=TRUE)}, mylist)
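For example, with a made-up list of two data frames of different sizes:

mylist <- list(
  a = data.frame(v1 = 1:3, row.names = c("r1", "r2", "r3")),
  b = data.frame(v2 = c(10, 20), row.names = c("r2", "r4"))
)
mylist <- lapply(mylist, function(x) data.frame(row = rownames(x), x))
Reduce(function(x, y) merge(x, y, by = "row", all = TRUE), mylist)
#   row v1 v2
# 1  r1  1 NA
# 2  r2  2 10
# 3  r3  3 NA
# 4  r4 NA 20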
Best,
Ista
On Sat, Jan 17, 2015 at 11:37 AM, Remi Genevest wrote:
Hello,
I have a list of data frames with different numbers of rows, and I want to
bind columns by row names, filling with NAs where the row names do not match.
I was thinking about doing something like this:
do.call(merge,c(mylist,by="row.names",all.x=TRUE))
but I get the following error message:
/Err
On 24/10/2014, 11:31 AM, Dan Vatnik wrote:
Hi,
I have uploaded my first package to CRAN and have noticed a documentation
issue.
When I install the package from CRAN, it installs a Windows binary since I
am using a Windows machine.
I have a PDF file in my doc folder. Originally, it was in the inst/doc/
folder but when I load the binary, the
I was referring to the 3rd decimal place and beyond. Thanks, that did
the trick. I was trying to compare the two to make sure that I knew how
to do it by hand. Thanks for all of your help.
Stephen
On Wed 26 Oct 2011 02:23:02 PM CDT, Joshua Wiley wrote:
Hi Stephen,
Thanks for the disclosure. If you are referring to the difference in
the third decimal place between your calculated F value and what R
gives, yes, it is due to rounding. Try this:
## extract the mean squares from anova() and store in msq
msq <- anova(x.lm)[, "Mean Sq"]
mean(msq[4:
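As a separate, self-contained sketch of the same rounding effect (x.lm below
is a made-up one-factor fit, not your model), compare the F ratio computed
from the stored, full-precision mean squares with the ratio of the rounded
values you would copy from a printed table:

x.lm <- lm(mpg ~ factor(cyl), data = mtcars)   # hypothetical example fit
atab <- anova(x.lm)

## F from the full-precision mean squares stored in the anova table;
## this matches atab["factor(cyl)", "F value"]
ms_model    <- atab["factor(cyl)", "Mean Sq"]
ms_residual <- atab["Residuals",   "Mean Sq"]
ms_model / ms_residual

## the same ratio from mean squares rounded to two decimals (roughly what
## working "by hand" from a printed table gives) is close but not identical
round(ms_model, 2) / round(ms_residual, 2)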
# For full disclosure: I am working on a homework problem. However, my
question revolves around computer rounding, I think.
x <- (structure(list(y = c(0.222, 0.395, 0.422, 0.437, 0.428, 0.467,
0.444, 0.378, 0.494, 0.456, 0.452, 0.112, 0.432, 0.101, 0.232,
0.306, 0.0923, 0.116, 0.0764, 0.439, 0.
What are you going to do with the data? If it is just for presentation, then
keep it as character. If you are going to compute on the data, then keep it as
numeric. Since you are using floating point, FAQ 7.31 reminds you that the
data are kept as close to what was input as can be done with 53 bits of
precision.
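A quick illustration of that point, using values that have nothing to do with
your data:

x <- 0.1 + 0.2
x                          # prints 0.3 under the default 7 significant digits
print(x, digits = 17)      # 0.30000000000000004, the double actually stored
x == 0.3                   # FALSE; compare with a tolerance instead
isTRUE(all.equal(x, 0.3))  # TRUE
sprintf("%.2f", x)         # "0.30", a character value, fine for presentation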
Thanks for the quick response.
I read the FAQ. If I want to keep the values in R the same as when they were
input, should I be converting the data to a different type, i.e. not numeric?
Sent from my iPhone
On Oct 11, 2011, at 4:46 AM, Jim Holtman wrote:
FAQ 7.31
Sent from my iPad
On Oct 11, 2011, at 1:07, Mark Harrison wrote:
I am having a problem with extra digits being added to my data, which I think
is a result of how I am converting my data.frame to xts.
I see the same issue in R v2.13.1 and RStudio version 0.94.106.
I am loading historical foreign exchange data in via CSV files or from a SQL
Server database.
Any help would be appreciated.
Thanks,
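For what it is worth, a small sketch with made-up rates (assuming the xts
package is installed): the digits are already present in the stored double, as
FAQ 7.31 describes; how many of them you see is only a question of printing
and rounding.

library(xts)   # also loads zoo

rates <- data.frame(date   = as.Date("2011-10-03") + 0:2,
                    eurusd = c(1.3455, 1.3321, 1.3276))   # made-up values

x <- xts(rates$eurusd, order.by = rates$date)

print(coredata(x), digits = 17)   # the full stored doubles, trailing digits and all
print(x)                          # default printing rounds for display
round(x, 4)                       # or round explicitly if four decimals are wanted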
> -Original Message-
> From: Steve Lianoglou [mailto:mailinglist.honey...@gmail.com]
> Sent: Friday, August 21, 2009 9:02 AM
> To: William Dunlap
> Cc: kfcnhl; r-help@r-project.org
> Subject: Re: [R] extra .
>
> Hi,
>
> This is somehow unrelated, but yo
Sent: Thursday, August 20, 2009 7:34 PM
To: r-help@r-project.org
Subject: [R] extra .
sigma0 <- sqrt((6. * var(maxima))/pi)
What does the '.' do here?
In R it does nothing: both '6' and '6.' parse as "numerics"
(in C, double precision numbers). In SV4 and S
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of kfcnhl
> Sent: Thursday, August 20, 2009 7:34 PM
> To: r-help@r-project.org
> Subject: [R] extra .
>
>
> sigma0 <- sqrt((6. * var(maxima))/pi)
It has no effect. Both 6 and 6. represent the number six
as a double:
> identical(6, 6.)
[1] TRUE
> typeof(6.)
[1] "double"
On Thu, Aug 20, 2009 at 10:34 PM, kfcnhl wrote:
>
> sigma0 <- sqrt((6. * var(maxima))/pi)
>
> What does the '.' do here?
My guess is that 6. comes from 6.0, a habit carried over from programming
languages where 6 represents six as an integer while 6. (or 6.0) represents
it as a floating-point number.
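In R itself the integer/floating-point distinction is written with the L
suffix rather than with a trailing dot:

typeof(6)          # "double": a plain numeric literal is already a double
typeof(6.)         # "double": the trailing dot changes nothing
typeof(6L)         # "integer": the L suffix marks an integer literal
identical(6, 6.)   # TRUE
identical(6, 6L)   # FALSE, different storage modes (although 6 == 6L is TRUE)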
--- On Fri, 21/8/09, kfcnhl wrote:
> From: kfcnhl
> Subject: [R] extra .
> To: r-help@r-project.org
sigma0 <- sqrt((6. * var(maxima))/pi)
What does the '.' do here?
'The R Inferno' page 87 talks about getting
extra columns from data derived from spreadsheets.
It happens because the spreadsheet program
thinks for some reason that the extra cells are
used -- a cell was probably clicked on.
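A common cleanup after read.csv, sketched here with a hypothetical file name,
is to drop the columns that contain nothing but NA or empty strings:

dat <- read.csv("mydata.csv")   # hypothetical file with stray empty columns

## a column is "empty" if every entry is NA or the empty string
is_empty <- sapply(dat, function(col) all(is.na(col) | col == ""))
dat <- dat[, !is_empty, drop = FALSE]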
Patrick Burns
patr...@burns-stat.com
+44 (0)20 8525 0696
http://www.bur
Hello all: I'm hoping you can help me determine the source of this problem.
I've just used read.csv to bring a small (581 rows, 9 vars) dataset into R
(2.7.0, Mac OS 10.5.5). The dataset was created in Excel 2008 from a
data dump from an Oracle database. I've done this many times before and had
no