Hi,
when trying your code I got:
pval = summary(model)$coeff[2,4]
Error in summary(model)$coeff[2, 4] : subscript out of bounds
> str(summary(model)$coeff)
num [1, 1:4] 1.73e-17 7.07e-01 2.44e-17 1.00
- attr(*, "dimnames")=List of 2
..$ : chr "(Intercept)"
..$ : chr [1:4] "Estimate" "Std. Er
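The str() output shows why: the coefficient table has a single row, "(Intercept)",
so the model was fit without a slope term and [2, 4] is out of bounds. A defensive
extraction might look like this sketch (the check, not the model, is the point):
ctab <- coef(summary(model))            # the coefficient matrix shown above
if (nrow(ctab) >= 2) {
  pval <- ctab[2, 4]                    # p-value of the first non-intercept term
} else {
  warning("model contains only an intercept; check the fitting formula")
}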
I suppose I could calculate the eigenvectors directly and not worry about
centering the time-series, since they have essentially the same range to begin with:
vec <- eigen(cor(cbind(d1, d2, d3, d4)))$vectors  # the eigen() component is "vectors"
cp  <- cbind(d1, d2, d3, d4) %*% vec              # project the series onto the eigenvectors
cp1 <- cp[, 1]                                    # first principal component
I guess there is no way to reconstruct the original
R Help -
I'd like to identify each correlation value in the dataframe below that is
above .3 or below -.3, in order to graph the original data points. I've started
with the call below to identify each value by its row and column. I'd like
to form a data object that identifies each set of variables that meet
Hello,
I'm new to R. I'm trying to run a power analysis in a for loop to find an
ideal sample size. The situation is I am doing counts of fish at 24
different sites and rating those sites as 1 or 2 for good or poor habitat.
So the counts are continuous but the habitat rating isn't. When I try to
ru
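A loop over candidate sample sizes might look like the sketch below; power.t.test()
is standard, but the effect size (delta) and sd are placeholders to replace with
values estimated from your fish counts:
sizes <- 5:50                                   # candidate sites per habitat group
pow <- sapply(sizes, function(n)
  power.t.test(n = n, delta = 10, sd = 15,      # hypothetical effect size and sd
               sig.level = 0.05)$power)
plot(sizes, pow, type = "l", xlab = "sites per group", ylab = "power")
abline(h = 0.8, lty = 2)                        # conventional 80% target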
Well, at least the immediate cause is clear:
>> list(log(ArKm00,10)=xx)
is invalid syntax. If you want a list element _named_ log(ArKm00,10), you'll
need to quote the name. However, it's not going to work anyway, because that
isn't what predict() expects. You don't supply logged variables, you
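A sketch of the usual approach, assuming the model was fit with a formula like
power <- glm(y ~ log(ArKm00, 10), ...): supply the predictor on its original
scale and let the formula do the logging.
xx <- seq(-2, 0.5, by = 0.1)                    # values on the log10 scale
nd <- data.frame(ArKm00 = 10^xx)                # back-transform for predict()
lines(xx, predict(power, newdata = nd, type = "response"))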
I think you want to convert your principal component to the same scale as d1,
d2, d3, and d4. But the "original space" is a 4-dimensional space in which d1,
d2, d3, and d4 are the axes, each with its own mean and standard deviation.
Here are a couple of possibilities
# plot original values for
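A sketch of one such possibility, assuming the goal is a rank-1 reconstruction
on the original scales; prcomp() stores the centering and scaling it applied,
so both can be undone:
X   <- cbind(d1, d2, d3, d4)
pca <- prcomp(X, center = TRUE, scale. = TRUE)
# keep only PC1, then undo the scaling and centering
rec <- pca$x[, 1, drop = FALSE] %*% t(pca$rotation[, 1, drop = FALSE])
rec <- sweep(sweep(rec, 2, pca$scale, "*"), 2, pca$center, "+")
matplot(rec, type = "l")   # four "cleaned" series on their original scales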
You don't detail how you detect that 'there are no duplicates', and
also provide no proof of this. It's not the usual sense of
duplicated(); what is required is that there are no duplicate date-times within a
burst (a single animal's trip/journey/trajectory). Can you try this on
your datetime and burst id and rep
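An untested check along those lines, assuming vectors datetime (POSIXct) and
burst (one id per fix):
dups <- tapply(datetime, burst, function(x) any(duplicated(x)))
which(dups)   # bursts that contain duplicate timestamps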
On Oct 2, 2014, at 2:29 PM, Jonathan Thayn wrote:
> Hi Don. I would like to "de-rotate" the first component back to its original
> state so that it aligns with the original time-series. My goal is to create a
> "cleaned", or "model", time-series from which noise has been removed.
Please cc
Thanks, Dagmar.
So, shouldn't row 3 with a time of 09:51:01 be "low" and not "high"?
Jean
On Thu, Oct 2, 2014 at 4:25 PM, Dagmar wrote:
> Dear Jean and all,
>
> I want all lines to be "low", but between 9:55 and 10:05 a.m. (i.e. a
> timespan of 10 min) I want them to be "high".
> In my real
Dear Jean and all,
I want all lines to be "low", but between 9:55 and 10:05 a.m. (i.e. a
timespan of 10 min) I want them to be "high".
In my real data "low" and "high" refer to "lowtide" and "hightide" in
the waddensea and I want to assign the location of my animal at the time
it was taken to
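A sketch of the time-window assignment, assuming a data frame mydata with a
Timestamp column in day.month.year format as in the example data:
mydata$time <- as.POSIXct(mydata$Timestamp, format = "%d.%m.%Y %H:%M:%S")
hhmm <- format(mydata$time, "%H:%M")            # time of day as "HH:MM"
mydata$Event <- ifelse(hhmm >= "09:55" & hhmm <= "10:05", "high", "low")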
You will have better luck getting replies to your post if you provide code
that we can run. In other words, provide some small example data instead
of referring to your data frame that we have no access to. You can use the
output from dput() to provide a subset of your data frame to the list.
dp
Dagmar,
Can you explain more fully why rows 1, 2, and 5 in your result are "low"
and rows 3 and 4 are "high"? It is not clear to me from the information
you have provided.
> result[c(1, 2, 5), ]
Timestamp location Event
1 24.09.2012 09:05:011 low
2 24.09.2012 09:49:50
On 03/10/14 03:54, eliza botto wrote:
Dear UseRs,
I obtained the following results from the Anderson-Darling goodness-of-fit test.
dput(EB)
structure(c(2.911, 0.9329, 0.818, 1.539, 0.604, 0.5142, 0.4344, 0.801, 0.963, 0.9925, 0.933, 0.956, 0.883, 0.572), .Dim = c(7L, 2L), .Dimnames = list(c("EXP",
"GU
On Oct 2, 2014, at 12:18 PM, Jonathan Thayn wrote:
> I have four time-series of similar data. I would like to combine these into
> a single, clean time-series. I could simply find the mean of each time
> period, but I think that using principal components analysis should extract
> the most s
I have four time-series of similar data. I would like to combine these into a
single, clean time-series. I could simply find the mean of each time period,
but I think that using principal components analysis should extract the most
salient pattern and ignore some of the noise. I can compute com
Jim, Thanks for the comment about else!
Hello R Help Group:
I have been struggling to create
an object of class ltraj with the function as.ltraj (adehabitatLT) with my bird
data. Regarding my data structure, I
have GPS for 10 birds that transmit three times/day, over the course of a year
(with missing data). I have a L10.csv
file with
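A minimal sketch of the usual construction; the file and column names here are
hypothetical stand-ins for your own:
library(adehabitatLT)
dat <- read.csv("birds.csv")                      # hypothetical file/column names
dat$date <- as.POSIXct(dat$datetime, tz = "UTC")
dat <- dat[!duplicated(dat[c("id", "date")]), ]   # as.ltraj requires unique date-times per burst
tr <- as.ltraj(xy = dat[, c("x", "y")], date = dat$date, id = dat$id)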
On 02/10/2014 16:40, Alexsandro Cândido de Oliveira Silva wrote:
I have a list (temp.data) with many raster data and some computations are in
parallel. n is the number of raster data and target is the mask. I'd like to
use a progress bar. It is created but while the loop is running the progress
i
It depends quite a bit on where you are coming from.
If you come from the mathematical side of things, i.e. you're not scared at the
thought of multiplying matrices, know a good deal about statistics and have
some experience with programming languages, the document by Venables and Smith
"An In
Thank you Duncan!
Dan
-Original Message-
From: Duncan Murdoch [mailto:murdoch.dun...@gmail.com]
Sent: Monday, September 29, 2014 6:23 PM
To: Lopez, Dan; R help (r-help@r-project.org)
Subject: Re: [R] Custom Function Not Completely working
On 29/09/2014, 9:07 PM, Lopez, Dan wrote:
> Hi
Hi,
Is there a way to order clusters in heatmap.2?
Best Regards,
Asmaa
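One possibility (a sketch, not tested against your data): pass heatmap.2 a
pre-built dendrogram so you control the ordering; x stands for your numeric matrix.
library(gplots)
hc <- hclust(dist(x))
dd <- reorder(as.dendrogram(hc), rowMeans(x))   # order clusters by row means
heatmap.2(x, Rowv = dd, dendrogram = "row", trace = "none")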
Hi,
I am trying to do maximum likelihood estimation on a univariate structural
model with diffuse components in dlm.
The package already has an MLE function, but I would like to implement two
enhancements, both of which are discussed in Harvey's Forecasting structural
time series models and
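For orientation, the stock MLE route looks like the sketch below (a local level
model; the parameterisation is illustrative, not Harvey's diffuse treatment):
library(dlm)
build <- function(p) dlmModPoly(order = 1, dV = exp(p[1]), dW = exp(p[2]))
fit <- dlmMLE(y, parm = c(0, 0), build = build)   # y: your univariate series
mod <- build(fit$par)                             # model at the ML estimates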
Hello,
I am plotting glms with logged predictors. I would like to define the
logged variables "on the fly" rather than hard-coding them into my
dataframe.
This code (with hard-coded logged variables) works just fine:
xx<-seq(-2,.5,by=0.1); lines(xx,predict(power,list(LogArKm00=xx),type=
"respons
I have a list (temp.data) with many raster data and some computations are in
parallel. n is the number of raster data and target is the mask. I'd like to
use a progress bar. It is created, but while the loop is running the progress
is not shown. The loop ends and the progress bar is closed. I've tr
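For reference, a sequential sketch with txtProgressBar; note that parallel
workers generally cannot update the master R session's console, which is
consistent with the behaviour described:
pb <- txtProgressBar(min = 0, max = n, style = 3)
for (i in seq_len(n)) {
  # ... process temp.data[[i]] against target ...
  setTxtProgressBar(pb, i)
  flush.console()          # force the update on some GUIs
}
close(pb)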
Andre
For the last time: there is NO simple rational approximation to the
quantiles of the t-distribution.
From the R help page, qt (R's counterpart of TINV) is based on
Hill, G. W. (1970) Algorithm 396: Student's t-quantiles. Communications of
the ACM, 13(10), 619–620. And
Hill, G. W. (1981) Remark on Algorithm 396, AC
Dear UseRs,
I obtained the following results from the Anderson-Darling goodness-of-fit test.
> dput(EB)
structure(c(2.911, 0.9329, 0.818, 1.539, 0.604, 0.5142, 0.4344, 0.801, 0.963,
0.9925, 0.933, 0.956, 0.883, 0.572), .Dim = c(7L, 2L), .Dimnames =
list(c("EXP", "GUM", "GENLOG", "GENPARETO", "GEV", "LN",
Folks,
I have the following data:
mdf<-structure(list(a = 1:3, b = c(10, 20, 30)), .Names = c("a", "b"
), row.names = c(NA, -3L), class = "data.frame")
And function:
defCurveBreak <- function(x, y) {
  # split each step of x into y equal increments and accumulate
  cumsum(rep(diff(c(0, x)), each = y)/y)
}
lapply'ing to get the result "foo"
foo<-data.frame(l
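To make the function's behaviour concrete, here is what it returns for the data
above with y = 4 (each step in b is split into four equal increments):
defCurveBreak(mdf$b, 4)
# [1]  2.5  5.0  7.5 10.0 12.5 15.0 17.5 20.0 22.5 25.0 27.5 30.0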
Dear all,
I am trying to create an accumulation curve for kernel density estimation (KDE)
of home range size, with kernel density on the y-axis and the number of telemetry
fixes on the x-axis. I am using the "kernelUD" and "getverticeshr" functions from
adehabitatHR package.
However, I have a problem:
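A generic accumulation-curve sketch (untested; assumes xy is a two-column
matrix of relocations):
library(adehabitatHR)                    # also loads sp for SpatialPoints
ns <- seq(10, nrow(xy), by = 10)         # increasing numbers of fixes
areas <- sapply(ns, function(n) {
  ud <- kernelUD(SpatialPoints(xy[1:n, ]))
  getverticeshr(ud, percent = 95)$area   # 95% home-range size
})
plot(ns, areas, type = "b", xlab = "number of fixes", ylab = "95% KDE area")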
Keith Jewell said:
> ... from reading ?all.equal I would have expected
> scale = 1 and the default scale = NULL to give identical results for the
> length
> one numerics being passed to all.equal.
>
> Can anyone explain?
Inspecting the code in all.equal.numeric, I find
xy <- mean((if (cplx)
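The practical consequence, as a quick check (the exact message text varies by
R version):
all.equal(1e8, 1e8 + 1)             # TRUE: relative difference 1e-8 is within tolerance
all.equal(1e8, 1e8 + 1, scale = 1)  # not TRUE: the difference is now judged absolutely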
Hello! I hope someone can help me. It would save me days of work. Thanks in
advance!
I have two dataframes which look like these:
myframe <- data.frame(Timestamp = c("24.09.2012 09:00:00",
                                    "24.09.2012 10:00:00",
                                    "24.09.2012 11:00:00"),
                      Event = c("low", "high", "low"))
myframe
mydata <- data.frame
On 01/10/2014 23:54, Peter Alspach wrote:
Tena koe Kate
If kateDF is a data.frame with your data, then
apply(kateDF, 1, function(x) isTRUE(all.equal(x[2], x[1], check.attributes =
FALSE, tolerance=0.1)))
comes close to (what I think) you want (but not to what you have illustrated in
your 'ev
There is an error, Jean, I apologize... I made changes to the vectors and did
not correct the bottom line... this is the correct run:
a <- c(0,1,1,0,1,0,0,0,0)
b <- c(0,0,0,1,0,0,0,0,0)
c <- c(1,0,1,0,1,1,0,0,0)
d <- c(0,1,0,1,0,1,0,0,0)
df <- rbind(a, b, c, d)
df <- cbind(df, h = c(sum(a)*8, sum(b)*8, sum(c)*8, s
Andras,
Is there an error in your post or am I missing something?
df[, 9] is made up of the last (9th) element of each of a, b, c, and d.
The minimum value for sum(df[, 9]) is 0.
Given your conditions, there are many, many ways to get this result.
Here is just one example:
a <-c(1,1,1,1,1,0,0,0,0
> By a "manual" formula I mean: how does one calculate, by hand, the value of
> TINV(0.408831, 1221)? The result is 4.0891672.
This is not really an R help question - you're specifically asking for
something that _doesn't_ use R - but if you want to know how R does it the full
C and R code for
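For completeness, Excel's TINV is two-tailed, so its R counterpart is
qt(1 - p/2, df); the specific numbers quoted in the question are not
reproduced here:
p  <- 0.408831
nu <- 1221
qt(1 - p/2, nu)   # R equivalent of Excel's two-tailed TINV(p, nu)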
Dear All,
By a "manual" formula I mean: how does one calculate, by hand, the value of
TINV(0.408831, 1221)? The result is 4.0891672.
Appreciate your help in advance.
Cheers!
On Wed, Oct 1, 2014 at 9:15 PM, wrote:
> What do you mean by a "manual" formula?
>
>
> Andre
>
> 09/30/2014 11:5
> > Is there an easy way to check whether a variable is within +/- 10%
> > range of another variable in R?
You could use
2*abs(A-B)/(A+B) < 0.1
which avoids an apply().
I've assumed you meant "differs by under 10% of the mean of the two", hence the
2/(A+B); if you meant 10% of something else, s
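A quick illustration of that test on made-up vectors:
A <- c(100, 100, 100)
B <- c( 95, 109, 112)
2 * abs(A - B) / (A + B) < 0.1   # TRUE TRUE FALSE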
OK I get it, all options work well now.
Thank you!
-Original Message-
From: William Dunlap [mailto:wdun...@tibco.com]
Sent: 01 October 2014 19:05
To: Ingrid Charvet
Cc: r-help@r-project.org
Subject: Re: [R] Print list to text file with list elements names
You want to put
lapply(myList
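One standard way to get named list elements into a text file (a sketch, not
necessarily the solution the reply above was building towards):
capture.output(print(myList), file = "myList.txt")   # print() keeps element names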
On 02-10-2014, at 11:01, r...@openmailbox.org wrote:
> Subscribers,
>
> What is the correct syntax to apply the 'if else' conditional statement to
> vector objects?
>
> Example:
>
> vectorx<-c(50,50,20,70)
> vectory<-c(50,50,20,20)
> vectorz<-function () {
> if (vectorx>vectory)
>
Hi
in this case the correct syntax is not to use if at all:
vectorx*(vectorx>vectory)
[1]  0  0  0 70
if you insist you can use
?ifelse
which is "vectorised" if
Regards
Petr
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of r..
Subscribers,
What is the correct syntax to apply the 'if else' conditional statement
to vector objects?
Example:
vectorx<-c(50,50,20,70)
vectory<-c(50,50,20,20)
vectorz<-function () {
if (vectorx>vectory)
vectorx
else vectorx<-0
}
vectorz()
Warning message:
In if (vec
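Putting the two suggestions from the thread together, a corrected vectorised
version:
vectorx <- c(50, 50, 20, 70)
vectory <- c(50, 50, 20, 20)
ifelse(vectorx > vectory, vectorx, 0)   # [1]  0  0  0 70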