> "k" == kate <[EMAIL PROTECTED]>
> on Thu, 8 May 2008 10:45:04 -0500 writes:
k> In my data, sample mean = -0.3 and the histogram looks like t distribution;
k> therefore, I thought non-central t distribution may be a good fit. Anyway, I
k> try t distribution to get MLE
Thanks a lot Deepayan. Could you please tell me what update you are
referring to, and give me some very vague sense of when it might happen (within
weeks, months, or years)?
Many thanks
Ola
2008/5/8 Deepayan Sarkar <[EMAIL PROTECTED]>:
> On 5/8/08, Ola Caster <[EMAIL PROTECTED]> wrote:
> > Dear help list,
Hi all,
I have a data management question. I am using a panel dataset read into
R as a dataframe, call it "ex". The variables in "ex" are: id year x
id: a character string which identifies the unit
year: identifies the time period
x: the variable of interest (which might contain NAs).
Here
Interesting request... I'm looking forward to the replies.
All I could come up with is putting it in two lines:
pr<-array(0,c(dim(x)[2],dim(x)[2]));
for (i in 1:dim(x)[2]) for (j in 1:dim(x)[2])
pr[i,j]<-cor.test(x[,i],x[,j])$p.val;
y
Monica Pisica wrote:
>
>
> Hi everybody,
>
> I would lik
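Extending that idea, a rough sketch that collects everything into a list of matrices (untested; x here is stand-in data):
x <- matrix(rnorm(100), ncol = 5)
n <- ncol(x)
out <- list(p.value = matrix(NA, n, n), estimate = matrix(NA, n, n),
            lower = matrix(NA, n, n), upper = matrix(NA, n, n))
for (i in 1:n) for (j in 1:n) {
  if (i == j) next                           # skip the degenerate self-test
  ct <- cor.test(x[, i], x[, j])
  out$p.value[i, j]  <- ct$p.value
  out$estimate[i, j] <- ct$estimate
  out$lower[i, j]    <- ct$conf.int[1]
  out$upper[i, j]    <- ct$conf.int[2]
}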
On Thu, 8 May at 17:20, Giuseppe Paleologo wrote:
I am struggling with R code optimization, a recurrent topic on this
list.
I have three arrays, say A, B and C, all having the same number of
columns.
I need to compute an array D whose generic element is
D[i, j, k] <- sum_n A[i, n]*B[j, n]*C[k, n]
Ahh, thanks, that helped! Is the standard error being calculated as 1/sqrt(N-3),
though? I ask because, visually inspecting the plots of the 6 CCFs I've done,
only 4 have a C.I. line that looks "about right" according to my own
calculations using this formula. The other 2 are a little below my calc
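As far as I can tell, the default line drawn by plot() for an acf/ccf object is at +/- qnorm((1 + ci)/2)/sqrt(n.used) with ci = 0.95 (for ci.type = "white"), not 1/sqrt(N-3), and it can be recomputed from the returned object (sketch with made-up series):
set.seed(1)
a <- rnorm(200); b <- rnorm(200)
cc <- ccf(a, b, plot = FALSE)
qnorm((1 + 0.95) / 2) / sqrt(cc$n.used)      # value of the dashed threshold line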
Hello,
I have some photosynthesis data that is grouped by SPECIES and REPLICATION
I have tried to develop a grouped data set with the following:
photo.gd <- groupedData(Photo~PARi|Species,
data = photo.frm,
labels = list(x = "PPFD (expression(paste)µ mol photons m^-2 s^-1)",
After a bit more searching, I've discovered that this chart is a variant of
the treemap or map of the market. I'll play around with the sample code
posted here https://stat.ethz.ch/pipermail/r-sig-finance/2006q2/000880.html,
but if anyone's taken that further, I'd be keen to know. I'm happy to use
On May 8, 2008, at 9:11 PM, Sean Carmody wrote:
Does anyone have any ideas about how you could use R to produce a
fancy area
plot like this one in the NY Times? http://tinyurl.com/6rr22g
I certainly hope not; I wouldn't want my favorite statistics program
to produce an area graph where the
Does anyone have any ideas about how you could use R to produce a fancy area
plot like this one in the NY Times? http://tinyurl.com/6rr22g
Regards,
Sean,
Hi all,
Why not simply:
set.seed(54321)
X <- 5*rnorm(500)
hist(X,label=TRUE,ylim=c(0,200))
Thanks,
Jorge
On Thu, May 8, 2008 at 8:26 PM, Ted Harding <[EMAIL PROTECTED]>
wrote:
> On 09-May-08 00:12:46, David Scott wrote:
> > On Thu, 8 May 2008, Roslina Zakaria wrote:
> >
> >> Dear R-expert,
>
Hi Jack,
Is this what you want?
barplot(table( female_familar$gender,
female_familar$familar),beside=TRUE,col=c(4,5),
ylim=c(0,20),xlab="Familiarity", ylab="Participants in the survey")
legend("topleft",c('Females','Males'),col=c(4,5),pch=15)
or
barplot(table( female_familar$gender,
female_fami
On 09-May-08 00:12:46, David Scott wrote:
> On Thu, 8 May 2008, Roslina Zakaria wrote:
>
>> Dear R-expert,
>> For histogram function, can we get the table of bin and
>> frequency like in excel, together with the histogram?
>> Therefore, we can check the number of data included.
>> Thank you so much for your attention and help.
On Thu, 8 May 2008, Roslina Zakaria wrote:
Dear R-expert,
For histogram function, can we get the table of bin and frequency like in
excel, together with the histogram?
Therefore, we can check the number of data included.
Thank you so much for your attention and help.
Easy one: just use the a
On 9/05/2008, at 12:04 PM, Roslina Zakaria wrote:
Dear R-expert,
For histogram function, can we get the table of bin and frequency
like in excel, together with the histogram?
Therefore, we can check the number of data included.
Thank you so much for your attention and help.
?hist
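In particular, hist() returns the breaks and counts, so the Excel-style bin/frequency table can be built from the returned object (a quick sketch; x is stand-in data):
x <- rnorm(100)
h <- hist(x)                                 # draws the histogram as usual
data.frame(lower = head(h$breaks, -1),
           upper = tail(h$breaks, -1),
           count = h$counts)
sum(h$counts)                                # check how many values were binned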
Dear R-expert,
For histogram function, can we get the table of bin and frequency like in
excel, together with the histogram?
Therefore, we can check the number of data included.
Thank you so much for your attention and help.
I'd like to thank Paul Johnson and Achim Zeileis heartily
for their thorough and accurate responses to my query.
I think that the details of how to use the procedure, and
of its variants, which they have sent to the list should
be definitive -- and very helpfully usable -- for folks
like myself wh
Hello,
I need some help with the simulatedSNPs function from scrime package.
I am trying to simulate some genotypes for a case/control disease locus. The
allele frequencies for cases/controls are:
Sample   cases   controls
2000     .5      .10
1500     .6      .40
In each of th
delpacho wrote:
Hi everybody,
my goal is to display symbols on the x-axis of a barplot.
I read some mathematics strings in a file and convert them to an expression
as follows:
tt<- scan(file = fstr ,'what' ='character', sep = "");
for (iaa in 1:length(tt)) {
tt[iaa]<-do.call(expression, lappl
Hi everybody,
my goal is to display symbols on the x-axis of a barplot.
I read some mathematics strings in a file and convert them to an expression
as follows:
tt<- scan(file = fstr ,'what' ='character', sep = "");
for (iaa in 1:length(tt)) {
tt[iaa]<-do.call(expression, lapply(tt[iaa], as.nam
Hello,
I have a question regarding the boot function. I am non-parametrically
bootstrapping a function that I've written to estimate survival for
animals following some closed form estimators (i.e. no optimization
needed). I'm using the boot function for this and it performs well when
using inpu
QRMlib has routines for fitting t distributions. Have a look at that
package. Also sn has routines for skew-t distributions
David Scott
On Thu, 8 May 2008, kate wrote:
I have a data with 236 observations. After plotting the histogram, I found that
it looks like non-central t distribution
Once again, Paul, many thanks for your thorough examination
of this question! And for spelling out your approach!!!
It certainly looks as though you're very close to target
(or even spot-on).
I've only one comment -- see at end.
On 08-May-08 20:35:38, Paul Johnson wrote:
> Ted Harding said:
>> I
Paul & Ted:
> > I can get the estimated RRs from
>
> > RRs <- exp(summary(GLM)$coef[,1])
>
> > but do not see how to implement confidence intervals based
> > on "robust error variances" using the output in GLM.
>
>
> Thanks for the link to the data. Here's my best guess. If you use
> the follow
Disclaimer: I don't use R on Windows much myself. But: I can't help
noticing such threads occurring once in a while on r-help.
Why not simply define some folder to host one's personal library
*outside* of the main R library? As explained in ?Startup, one can put
something like
R_L
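Presumably that setting is R_LIBS (or R_LIBS_USER); a sketch, with the folder name being only an example:
##   R_LIBS=C:/R/site-library     # a line like this in ~/.Renviron (see ?Startup)
## then, after restarting R,
.libPaths()                       # should list that folder first
## and packages installed there are left untouched when R itself is upgraded.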
Peter,
Are you running the NWS server on the same machine as the R session (ie
the machine running 'twistd -y /etc/nws.tac')?
Pat
Peter Tait wrote:
Hi,
I am using caretNWS on a RHEL x86_64 system and I am getting an error
message that is nearly identical to the one occurring in
http://www.r-p
Is it possible that regsubsets divides the subset dataset into training and
testing sets? If so, when it calculates R^2 it's not the same as what you'd get
with lm. You probably need to look into the source code to know whether this is
true or not.
Edwards, David J wrote:
>
> Hi,
>
> I'm new to R and
I am struggling with R code optimization, a recurrent topic on this list.
I have three arrays, say A, B and C, all having the same number of columns.
I need to compute an array D whose generic element is
D[i, j, k] <- sum_n A[i, n]*B[j, n]*C[k, n]
Cycling over the three indices and subsetting th
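One way to avoid the triple loop (untested sketch; the arrays below are stand-ins): for each k, D[ , , k] is A %*% diag(C[k, ]) %*% t(B).
A <- matrix(rnorm(4 * 6), 4, 6)              # stand-in arrays with 6 common columns
B <- matrix(rnorm(5 * 6), 5, 6)
C <- matrix(rnorm(3 * 6), 3, 6)
D <- array(NA, c(nrow(A), nrow(B), nrow(C)))
for (k in seq_len(nrow(C)))
  D[, , k] <- A %*% (t(B) * C[k, ])          # same as A %*% diag(C[k, ]) %*% t(B)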
From the upper menu, go to Edit / GUI preferences;
then you can increase the font size. I use Windows, so I'm not sure if this
works on a Mac.
Thanks
juanita choo wrote:
>
> Hi,
>
> I am having a situation where I cannot change the output size of the R
> console. I have played around with the font
Ted Harding said:
> I can get the estimated RRs from
> RRs <- exp(summary(GLM)$coef[,1])
> but do not see how to implement confidence intervals based
> on "robust error variances" using the output in GLM.
Thanks for the link to the data. Here's my best guess. If you use
the following approac
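I don't know exactly which approach Paul was about to spell out, but one common route is the sandwich/lmtest combination (untested sketch; the commented model line is only a guess at the setup, GLM being the fitted Poisson model from earlier in the thread):
library(sandwich)
library(lmtest)
## GLM <- glm(lenses ~ carrot, family = poisson(link = "log"), data = eyestudy)   # guess
rob <- coeftest(GLM, vcov. = vcovHC(GLM, type = "HC0"))
est <- rob[, "Estimate"]
se  <- rob[, "Std. Error"]
exp(cbind(RR = est, lower = est - 1.96 * se, upper = est + 1.96 * se))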
On 9/05/2008, at 6:51 AM, Jill E. List wrote:
Hi R users:
Does anyone know about an R library to deal with
***PACKAGE*** not ``library''
intervention/impact analysis in time series (eg. Box-Tiao et. al.
theory?).
##
On 5/8/08, Ola Caster <[EMAIL PROTECTED]> wrote:
> Dear help list,
>
> Is it possible to draw lattice histograms (i.e. use the histogram() function
> and not the hist() function) with objects of class "Date"?
Sort of. The default calculation of 'breaks' doesn't work, so
histogram(~date, data=my
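One workaround (not necessarily what Deepayan was about to suggest; untested sketch with made-up data) is to plot the numeric representation and label the axis with the dates:
library(lattice)
my.data <- data.frame(date = as.Date("2008-01-01") + sample(0:364, 200, replace = TRUE))
br <- seq(as.Date("2008-01-01"), as.Date("2009-01-01"), by = "month")
histogram(~ as.numeric(date), data = my.data, breaks = as.numeric(br),
          xlab = "date",
          scales = list(x = list(at = as.numeric(br), labels = format(br, "%b"))))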
Hey,
I'm trying to generate a heat map of 30,000 fragments from probably 5-10
samples. Windows complains about memory shortage. Should I resort to a Unix
system?
Also, if I only plot 1000 fragments, it finishes rather fast;
5000 would take more than 10 minutes. I don't know what to expect
Hello,
I have subjects in 4 groups: X1, X2, X3, X4. There are 33 subjects in
group X1, 35 in X2, 31 in X3, and 46 in group X4. I have 7 continuous
response variables (actually integers, approximately normal) measured
for each subject: Y1 to Y7, and two continuous covariates C1, C2 (they
are b
Hi R users:
Does anyone know about an R library to deal with
intervention/impact analysis in time series (eg. Box-Tiao et. al. theory?).
Thank you for your help.
Jill List
(408)892-5742
To make your example reproducible you have to provide the data somehow;
I am pretty sure nprint doesn't affect the solution, but if it does, this
would be a bug and I would appreciate a reproducible report.
The example in nls.lm is a little complicated in order to show how to use
an analytical expr
Hi everybody,
I would like to apply cor.test to a matrix with m rows and n columns and get
the results in a list of matrices, one matrix for p.val, one for the
statistic, one for the correlation, and 2 for upper and lower confidence
intervals, something analogous to cor() applied to a matrix.
When I upgrade in Windows from, say, 2.6.2 to 2.7.0 I do the following:
1. Install 2.7.0 in a new directory.
2. Rename the library subdirectory in the new version from library to library2.
3. Copy the library subdirectory in 2.6.2 to 2.7.0.
4. Copy the contents of library2 to the transferred library. T
Hi,
I am using caretNWS on a RHEL x86_64 system and I am getting an error
message that is nearly identical to the one occurring in
http://www.r-project.org/nosvn/R.check/r-release-macosx-ix86/caretNWS-00check.txt
Error in socketConnection(serverHost, port = port, open = "a+b", blocking =
TRUE)
On 08-May-08 18:23:27, E C wrote:
> Hi everyone,
>
> When the CCF between two series of observations is plotted in R, a line
> indicating (presumably) the significance threshold appears across the
> plot. Does anyone know how this threshold is determined (it is
> different for each set of series)
Ravi,
if you have a large data.frame you might want to have a look at the count.rows
function I collected from older threads and put into the wiki
(http://wiki.r-project.org/rwiki/doku.php?id=tips:data-frames:count_and_extract_unique_rows)
With table I run into memory trouble - just as with ag
Will this do it for you:
> x <- readLines(textConnection("1
+ Pietje
+ I1 I2 Value
+ 1 1 0.11
+ 1 2 0.12
+ 2 1 0.21
+
+ 2
+ Jantje
+ I1 I2 I3 Value
+ 1 1 1 0.111
+ 3 3 3 0.333"))
> closeAllConnections()
> start <- grep("^[[:digit:]]+$", x)
> mark <- vector('integer', length(x))
> mark[
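The rest of that approach might look roughly like this (untested sketch continuing from 'start' above): give every line the id of the table it belongs to, split on it, and read each chunk with read.table, skipping the number and name lines.
mark[start] <- 1
grp <- cumsum(mark)                          # table id for every line
chunks <- split(x, grp)
tabs <- lapply(chunks, function(ch) {
  ch <- ch[nzchar(ch)]                       # drop blank lines
  read.table(textConnection(ch[-(1:2)]), header = TRUE)
})
names(tabs) <- sapply(chunks, `[`, 2)        # "Pietje", "Jantje", ...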
Hi everyone,
When the CCF between two series of observations is plotted in R, a line
indicating (presumably) the significance threshold appears across the plot.
Does anyone know how this threshold is determined (it is different for each set
of series) and how its value can be extracted from R?
on 05/08/2008 12:33 PM statmobile wrote:
Hey All,
I was wondering if I could solicit a little input on what I'm trying
to do here. I have a list of matrices, and I want to set their
dimnames, but all I can come up with is this:
x <- matrix(1:4,2)
y <- matrix(5:8,2)
z <- list(x,y)
nm <- c("a",
I think what you want is this -- you have to return 'x' from the lapply:
x <- matrix(1:4,2)
y <- matrix(5:8,2)
z <- list(x,y)
nm <- c("a","b")
nms <- list(nm,nm)
z <- lapply(z,function(x){
dimnames(x)<-nms
x
})
On Thu, May 8, 2008 at 1:33 PM, statmobile <[EMAIL PROTECTED]> wrote:
> Hey
On Thu, 8 May 2008, Kittler, Richard wrote:
> Is it possible to use some form of robust regression with the
> breakpoints routine so that it is less sensitive to outliers?
Conceptually, it is possible to use the underlying dynamic programming
algorithm for other objective functions than the resid
Another way is using straightforward indexing:
> x <- cbind(trips=c(1,3,2), y=1:3, z=4:6)
> x
trips y z
[1,] 1 1 4
[2,] 3 2 5
[3,] 2 3 6
> # generate row indices with the appropriate
> # number of repeats
> ii <- rep(seq(len=nrow(x)), x[,1])
[1] 1 2 2 2 3 3
> # use these indices
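Presumably the step that got cut off is just to subscript with them, i.e.
> x[ii, ]
which gives each row repeated 'trips' times.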
Hi,
I am having a situation where I cannot change the output size of the R
console. I have played around with the font format menu but the changes are
only reflected in the script that I type in but not in the output. Every time
I run a script, I have to go back to font format to increase the outp
Hey All,
I was wondering if I could solicit a little input on what I'm trying to
do here. I have a list of matrices, and I want to set their dimnames,
but all I can come up with is this:
x <- matrix(1:4,2)
y <- matrix(5:8,2)
z <- list(x,y)
nm <- c("a","b")
nms <- list(nm,nm)
z <- lapply(z,
Is it possible to use some form of robust regression with the
breakpoints routine so that it is less sensitive to outliers?
--Rich
Richard Kittler
Advanced Micro Devices, Inc.
Sunnyvale, CA
I solved the problem arising from using certain quantile values simply by
printing the iterations with the argument nprint. This gave me correct
estimates. I have no idea why.
- Original Message
From: elnano <[EMAIL PROTECTED]>
To: r-help@r-project.org
Sent: Thursday, May 8, 2008 5:43:3
It should be ok
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Irene Mantzouni
Sent: Thursday, May 08, 2008 5:45 PM
To: [EMAIL PROTECTED]
Subject: [R] acf function
Dear all,
I have an annual time-series of population numbers and I would like to
esti
This is exactly the problem: apps launched through the Finder do not
go through the usual shell initialization process, where the PATH is
typically set up. The two solutions would be to either use the full
path to the command, or else start R.app from the Terminal, via the
command:
open -
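In R code the equivalent workarounds look something like this (untested sketch; /usr/texbin is only an example location, it depends on the TeX installation):
system("/usr/texbin/latex mysource.tex")               # call latex by its full path
## or extend PATH for this session and call it as before:
Sys.setenv(PATH = paste(Sys.getenv("PATH"), "/usr/texbin", sep = ":"))
system("latex mysource.tex")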
Hi everyone,
I am confused about how to specify some nesting and interaction terms with lme().
I have a dataset where some flies were selected for accessory gland size, made
to mate in presence/absence of another male, and the level of some protein
measured. Now the complex stuff.
The selection
Hi,
Thanks for the help. I have now solved the problem of installing the old
packages in R2.7. A preliminary check showed that they seemed to work.
But I had the following problem with updating :
> update.packages(checkBuilt=TRUE, ask=FALSE)
--- Please select a CRAN mirror for use in this session
In my data, sample mean = -0.3 and the histogram looks like t distribution;
therefore, I thought non-central t distribution may be a good fit. Anyway, I
try t distribution to get MLE. I found some warnings as follows; besides, I
got three parameter estimates: m=0.23, s=4.04, df=1.66. I want to
I've basically solved the problem using the nls.lm function from the
minpack.lm package (thanks Katharine), with some modifications for ignoring residuals
above a given percentile. This is to avoid the strong influence of points
which push my modeled vs. measured values away from the 1:1 line.
I based it o
On Thu, May 8, 2008 at 8:38 AM, Ted Harding
<[EMAIL PROTECTED]> wrote:
> The below is an old thread:
>
> On 02-Jun-04 10:52:29, Lutz Ph. Breitling wrote:
> > Dear all,
> >
> > i am trying to redo the 'eyestudy' analysis presented on the site
> > http://www.ats.ucla.edu/stat/stata/faq/relative_
Are you running R from the shell, or R.app?
I don't own or use a Mac, but I've seen something like this happen to
people running R through ESS on some Emacs on a Mac.
Apologies for lack of precision here in terminology, but it had
something to do with the PATH getting set through a shell init
Thanks for your quick reply.
I tried the command as follows:
library(stats4) ## loading package stats4
ll <- function(change, ncp, df) {-sum(dt(x, ncp=ncp, df=df,
log=TRUE))}#-log-likelihood function
est<-mle(minuslog=ll, start=list(ncp=-0.3,df=2))
But the warnings appear as follows:
invalid
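For what it's worth, here is a sketch of a call along those lines that should at least run (the data x are simulated stand-ins, and the L-BFGS-B bound on df is an arbitrary choice):
library(stats4)
set.seed(1)
x <- rt(236, df = 2, ncp = -0.3)             # stand-in data
nll <- function(ncp, df) -sum(dt(x, df = df, ncp = ncp, log = TRUE))
fit <- mle(minuslogl = nll, start = list(ncp = -0.3, df = 2),
           method = "L-BFGS-B", lower = c(-Inf, 0.05))
summary(fit)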
On Thu, 8 May 2008, kate wrote:
I have a data with 236 observations. After plotting the histogram, I
found that it looks like non-central t distribution. I would like to get
MLE for mu and df.
So you mean 'non-central'? See ?dt.
I found an example to find MLE for gamma distribution from "f
That's not what movedir.bat and copydir.bat in batchfiles do. They
will not overwrite any files.
On Thu, May 8, 2008 at 10:40 AM, ravi <[EMAIL PROTECTED]> wrote:
> Hi,
> Ouch! That really hurt. But I get the point.
> Here's what I did now. I copied all the package folders from R2.6, except
>
On 5/8/2008 10:34 AM, kate wrote:
I have a data with 236 observations. After plotting the histogram, I found that it looks like non-central t distribution. I would like to get MLE for mu and df.
I found an example to find MLE for gamma distribution from "fitting distributions
with R":
library
On 5/8/2008 10:40 AM, ravi wrote:
Hi,
Ouch! That really hurt. But I get the point.
Here's what I did now. I copied all the package folders from R2.6, except for the R.css file, and copied them into the R2.7 folder.
That's your problem. You've hosed the 2.7 libraries.
You need to reinstall 2.
Hi,
Ouch! That really hurt. But I get the point.
Here's what I did now. I copied all the package folders from R2.6, except for
the R.css file, and copied them into the R2.7 folder.
In the process, I overwrote the common files that came with the installation of
R2.7.
Here's the output that I obta
"Qiang Li (Jonathan)" <[EMAIL PROTECTED]> wrote in message
news:<[EMAIL PROTECTED]>...
> Hi friends on R list,
>
> Have people tried to implement a hashmap in R? What is the generic way to
> implement a lookup table in R?
Does this help?
> x <- rnorm(4)
> names(x) <- c("a", "b", "c", "d")
> x
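If something closer to a real hash map is needed, an environment works too (a quick sketch):
h <- new.env(hash = TRUE)
h[["apple"]] <- 1
h[["banana"]] <- 2
h[["apple"]]                                   # look up a key
exists("cherry", envir = h, inherits = FALSE)  # FALSE: key not present
ls(h)                                          # all keys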
I have a data with 236 observations. After plotting the histogram, I found that
it looks like non-central t distribution. I would like to get MLE for mu and
df.
I found an example to find MLE for gamma distribution from "fitting
distributions with R":
library(stats4) ## loading package stats4
On 8 May 2008 at 14:58, Creighton, Sean wrote:
| Hello
|
| I have a string which contains microseconds, can anyone help on
| constructing this in to a time object, with the microseconds, that I can
| take to a ZOO file?
Easy, just read the docs:
i) you need %OS instead of %S to parse sub-seco
Dear help list,
Is it possible to draw lattice histograms (i.e. use the histogram() function
and not the hist() function) with objects of class "Date"?
I've tried solutions like
histogram(~date, data=my.data, breaks="months")
but it doesn't seem to work.
Any suggestions are welcome.
Many than
I don't see microseconds here, only milliseconds. See ?strptime for how
to handle this via %OS.
On Thu, 8 May 2008, Creighton, Sean wrote:
Hello
I have a string which contains microseconds, can anyone help on
constructing this in to a time object, with the microseconds, that I can
take to a
Hello
I have a string which contains microseconds, can anyone help on
constructing this in to a time object, with the microseconds, that I can
take to a ZOO file?
Thanks
Sean
> UK[1,3]
[1] "17:09:53.824"
> UK[1,1]
[1] "2007-12-11 00:00:00"
> mydates <- paste( substr(UK[,1], 1, 10), UK[,3])
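Following the %OS hint in the replies, something like this should do it (untested sketch; assumes the zoo package is installed, and note the times shown above only carry milliseconds):
options(digits.secs = 6)                     # so fractional seconds are printed
s <- paste("2007-12-11", "17:09:53.824")     # i.e. the mydates built above
tm <- as.POSIXct(strptime(s, "%Y-%m-%d %H:%M:%OS"))
format(tm, "%H:%M:%OS6")
library(zoo)
z <- zoo(1, order.by = tm)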
I have dissolved oxygen traces that are continuous (fifteen minutes) for
two years (save for a couple of days, weeks, or minutes there depending on
the prerogative of the river). These traces are spaced out by river mile. I
have figured out how to prepare data as to the sunspot example, but I can
Christoph Heibl wrote:
> Dear list,
>
> I want to run latex from an R script:
>
> system("latex mysource.tex")
>
> or:
>
> texi2dvi("mysource.tex", pdf = TRUE, clean = FALSE, quiet
> = TRUE, texi2dvi = latex)
>
> but latex does not seem to be on the search path:
>
> /bin/sh: lin
Use blzpack, it could work it out.
Aimin
At 02:44 AM 5/8/2008, Uwe Ligges wrote:
>kayj wrote:
>>Hi,
>>
>>I tried to run SVD on a 500,000 * 500,000 matrix and I get a message that it
>>cannot allocate a vector of length 270 Mb
>
>
>Well, you will obviously need >> 1Tera(!)bytes of RAM just in
Dear all,
I have an annual time-series of population numbers and I would like to
estimate the auto-correlation. Can I use acf() function and judge
whether auto-correlation is significant by the plots? The acf array
produced by this functions gives the auto-correlation at lags 1, 2
Is that
quote the variable name or index the results from anova as a dataframe
using [. Someone (Prof. Ripley IIRC, apologies if I got this wrong) once
told me that backticks ` are the preferred, portable way of doing this,
but in this case " quotes work as well.
> example(anova.lm) ## produces fit
> tmp
The below is an old thread:
On 02-Jun-04 10:52:29, Lutz Ph. Breitling wrote:
> Dear all,
>
> i am trying to redo the 'eyestudy' analysis presented on the site
> http://www.ats.ucla.edu/stat/stata/faq/relative_risk.htm
> with R (1.9.0), with special interest in the section on "relative
> risk esti
Hi Patricia,
Perhaps ?scale is what you are looking for:
X = matrix(1:9, ncol = 3)
X
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
scale(X)
     [,1] [,2] [,3]
[1,]   -1   -1   -1
[2,]    0    0    0
[3,]    1    1    1
attr(,"scaled:center")
[1] 2 5 8
attr(,"scaled:scale")
[1] 1 1 1
I agree with you that I'm trying to estimate structure that is not there; it's
just an example. Anyway, I mean a function like ar {stats}, not arima0(). Function
ar() is very simple, with an automatic selection criterion (AIC etc.), but it misses
moving average, integration, seasonality and a predict method.
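For what it's worth, a crude sketch of automatic order selection over a small seasonal ARIMA grid by AIC (untested; the grid and d = 1 are arbitrary choices, and it is slow for long series):
x <- ts(cumsum(rnorm(120)), frequency = 12)   # stand-in monthly series
best <- NULL
for (p in 0:2) for (q in 0:2) for (P in 0:1) for (Q in 0:1) {
  fit <- try(arima(x, order = c(p, 1, q),
                   seasonal = list(order = c(P, 0, Q), period = 12)),
             silent = TRUE)
  if (!inherits(fit, "try-error") &&
      (is.null(best) || AIC(fit) < AIC(best))) best <- fit
}
best
predict(best, n.ahead = 12)$pred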
--- begin included message -
I am using coxph with weights to represent sampling fraction of subjects.
Our simulation results show that the robust SE of beta systematically
under-estimate the empirical SD of beta.
Does anyone know how the robust SE are estimated in coxph using weights?
Is t
Daniel Brewer wrote:
I would like to do some power estimations for a log-rank two sample test
and cpower seems to fit the bill. I am getting confused though by the
man page and what the arguments actually mean. I am also not sure
whether cpower takes into account censoring or not.
Could anyone
On 5/8/2008 8:30 AM, ravi wrote:
> I know that it would be best if I reproduced the exact error messages, but I have tried so many different things now
> and have lost track of the exact error messages at each stage.
This is not a reasonable request. Rather than trying those two methods
one mor
You want to set the "names" attribute of your results vector. You can do
this with the names() function (see ?names). Specifically, you might use
something like this:
results <- c(se, upper, lower, cv)
names(results) <- c("se", "upper", "lower", "cv")
Good luck,
Ian
Stropharia wr
You can just make a small change in the function, as:
results <- cbind(se, upper, lower, cv)
OR
results <- data.frame(se, upper, lower, cv)
-S-
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Stropharia
Sent: Thursday, May 08, 2008 5:51 PM
To: r-help@r-p
On Thu, 8 May 2008, Daniele Amberti wrote:
Here is my problem:
Autoregressive models are very interesting in forecasting consumption (e.g.
water, gas, etc.).
Generally time series of this type have a long history with relatively simple
patterns and can be useful to add external regressors for ca
Hi,
There is a lot of information at the link shown below. But it is not easy to
know where to look. Also, not easy to interpret correctly the directions.
I followed the following two methods, both of which failed for me.
(1) I first installed R2.7. I then followed the directions in, "What's the
Dear R Users,
I have written a function that returns 4 variables. I would like to have the
variables returned with their variable names, is this possible?
- R Code -
mc.error <- function(T, p=0.05){
se <- sqrt((p)*(1-(p))/T) # standard
Dear all,
I have an annual time-series of population numbers and I would like to
estimate the auto-correlation. Can I use acf() function and judge
whether auto-correlation is significant by the plots? The acf array, eg:
Autocorrelations of series 'x$log.s.r', by lag
0 1
Hi R,
I was just checking ?ar, the autoregressive model function. It's great that
R can give the order of the autoregressive model. Suppose I had 2000
observations to fit an AR model. Then, if I am correct, R builds 33+1
autoregressive models (10*log10(2000) = 33) and selects the order at which
the
Hello.
I'm trying to compare a value in a 2-D matrix with its 4 neighbours.
I think I could use row() and col() and use x[row(x)] and x[row(x)-1] etc...,
but I can't see how it would work.
Also, any way of having a matrix fill itself with its own coordinates would
also work.
Any ideas please
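One possibility (untested sketch): pad the matrix with a border of NAs and shift it, so every cell can be compared element-wise with its four neighbours.
x <- matrix(1:20, 4, 5)                       # stand-in data
pad <- function(m) rbind(NA, cbind(NA, m, NA), NA)
xp <- pad(x)
i <- 2:(nrow(x) + 1); j <- 2:(ncol(x) + 1)
up    <- xp[i - 1, j]                         # neighbour above each cell
down  <- xp[i + 1, j]
left  <- xp[i, j - 1]
right <- xp[i, j + 1]
x > pmax(up, down, left, right, na.rm = TRUE) # e.g. cells larger than all 4 neighbours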
Dear R-users,
I have output files having a variable number of tables
in the following format:
-
1
Pietje
I1 I2 Value
1 1 0.11
1 2 0.12
2 1 0.21
2
Jantje
I1 I2 I3 Value
1 1 1 0.111
3 3 3 0.333
...
-
Would there be an easy way
of turning this into (a list of) da
Dear list,
I want to run latex from an R script:
system("latex mysource.tex")
or:
texi2dvi("mysource.tex", pdf = TRUE, clean = FALSE, quiet = TRUE,
texi2dvi = latex)
but latex does not seem to be on the search path:
/bin/sh: line 1: latex: command not found.
Although '
Hi,
There are many ways to do that.
An example:
require(tcltk)
tt <- tktoplevel()
te <- tkentry(tt)
tl <- tklabel(tt)
tb <- tkbutton(tt)
tkconfigure(tl, text = 'Enter text')
tkconfigure(tb, text = 'Show', command = function()
{cat(as.character(tkget(te)))})
tkgrid(tl, row = 0, column = 0, st
Hi everyone,
Is there any function to standardize a matrix? There must be one, but I can't
find it. By standardize, I just mean making the mean zero and the standard
deviation one. It is also called a z-score.
Thanks in advance
Hi,
I created some bar charts. My first one is concerned with males, and my
second concerned with females.
Is there a way I can put the charts into one chart? There are 2 different
columns in each file. Here is my new file containing males and females:
gender,familar
Female,Yes
Female,Yes
Female,
Here is my problem:
Autoregressive models are very interesting in forecasting consumption (e.g.
water, gas, etc.).
Generally, time series of this type have a long history with relatively simple
patterns, and it can be useful to add external regressors for calendar events
(holidays, vacations, etc.).
ari
I would like to do some power estimations for a log-rank two sample test
and cpower seems to fit the bill. I am getting confused though by the
man page and what the arguments actually mean. I am also not sure
whether cpower takes into account censoring or not.
Could anyone provide a simple example?