Hi
r-help-boun...@r-project.org wrote on 10.11.2009 01:01:06:
> Hello,
>
>
>
> I am new to R. I often collect data at multiple sites and need to
> create separate graphs (such as scatterplots or histograms) of specific
> variables for each site. I have tried to do this by splitting the dat
Hi
You could probably use one of the aggregation functions (by, tapply, aggregate), e.g.
aggregate(some.columns.of.dataframe, list(SLUNCH, ETHNIC, RACE,
DIVISION), function(x) x/sum(x))
Untested on your data.
Regards
Petr
r-help-boun...@r-project.org wrote on 10.11.2009 03:51:55:
>
> Sorry, I've bee
Hello,
I am interested in passing a command or two to R on the command
line. The desired behavior is for R to run these commands first, and then
begin an interactive session. For example:
$ R -e 'foo <- read.csv("/tmp/foo.csv")'
...which would launch R and execute that command, so when I
Dear all,
Could you tell me how to compute the autocorrelation function using fft()?
Thanks!
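One standard route (a sketch, not necessarily the answer given on the list) is the Wiener-Khinchin approach: demean, zero-pad, take the squared modulus of the FFT, invert, and normalise; here checked against acf() on simulated data:
set.seed(1)
x <- arima.sim(list(ar = 0.6), n = 200)
n <- length(x)
xc <- x - mean(x)
f <- fft(c(xc, rep(0, n)))                 # zero-pad so the circular convolution is linear
acov <- Re(fft(Mod(f)^2, inverse = TRUE)) / (2 * n) / n
rho <- acov[1:21] / acov[1]                # autocorrelations at lags 0..20
all.equal(rho, as.vector(acf(x, lag.max = 20, plot = FALSE)$acf))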
Dear all,
Could you tell me how to create a time series object from the following?
time=c('2009-10-1','2009-10-3','2009-10-4'...)
data=c(124,231,240...)
Thanks!
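The dates above are irregular (October 2 is missing), so one option is a zoo series indexed by Date; a minimal sketch with the values shown:
library(zoo)
time <- as.Date(c("2009-10-1", "2009-10-3", "2009-10-4"))
data <- c(124, 231, 240)
z <- zoo(data, order.by = time)
z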
Hi -
Not a mission critical issue, but still highly annoying. I just upgraded R
to 2.10.0 (the binary for Ubuntu karmic) and the tab completion facility now
inserts a space after every completed term (something it didn't do in 2.9.0
or 2.9.2). It wouldn't be an issue so much if it weren't for th
Thanks as always for a very helpful response. I'm now loading a few million
rows in only a few seconds.
Cordially,
Adam Kramer
On Mon, 9 Nov 2009, Prof Brian Ripley wrote:
The R 'save' format (as used for the saved workspace .RData) is described in
the 'R Internals' manual (section 1.8). It i
Thank you very much for all your help. This helped a lot. Very
constructive input.
Sincerely,
Sergios Charntikov (Sergey), MA
Behavioral Neuropharmacology Lab
Department of Psychology
University of Nebraska-Lincoln
Lincoln, NE 68588-0308 USA
On Mon, Nov 9, 2009 at 9:53 PM, Ista Zahn wrot
On Nov 9, 2009, at 9:51 PM, agm. wrote:
Sorry, I've been trying to work around this and just got back to
check my
email.
dput wasn't working too well for me because the data set also has 450
variables and I needed more time to figure out how to properly show
you all
what you needed to k
Yes, reshaping data is straightforward in R. No need to copy/paste in
a spreadsheet.
See ?reshape and/or the melt/cast functions in the reshape package.
-Ista
On Mon, Nov 9, 2009 at 9:20 PM, Sergios (Sergey) Charntikov
wrote:
> Thank you very much. Finally got it to work. However, I had to rec
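For the long-to-wide step described elsewhere in this thread (a subject/treatment/DV column layout spread out by day), a minimal reshape() sketch with made-up data:
long <- data.frame(subject = rep(1:2, each = 3),
                   treatment = rep(c("a", "b"), each = 3),
                   day = rep(1:3, 2),
                   DV = rnorm(6))
wide <- reshape(long, idvar = c("subject", "treatment"),
                timevar = "day", direction = "wide")
wide   # one row per subject, with columns DV.1, DV.2, DV.3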
I have been trying to estimate confidence interval for the residual sigma in
a two-way nested ANOVA, using the lmer and mcmcsamp functions. However, the
MCMC confidence interval for sigma not only does not include the point
estimate for sigma estimated by lmer but is far away.
My model is an inte
Sorry, I've been trying to work around this and just got back to check my
email.
dput wasn't working too well for me because the data set also has 450
variables and I needed more time to figure out how to properly show you all
what you needed to know.
But to show you the idea, a very simple data
Why are the contrast matrices different for ordered and unordered factors?
On Mon, Nov 9, 2009 at 1:12 PM, Greg Snow wrote:
> Mostly it is a conceptual difference. An unordered factor is one where there
> is no inherent order to the levels, examples:
>
> Color of car
> Race
> Nationality
> Sex
>
On Nov 9, 2009, at 8:43 PM, Peng Yu wrote:
Chambers' book Statistical Models in S mentioned 'column.prods()'. But
I don't find it in R. I'm wondering if there is an equivalent in R?
??rowProds
??colProds
(They are in both fUtilities and timeSeries.)
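In base R, without loading an extra package, the same result comes from apply():
m <- matrix(1:6, nrow = 2)
apply(m, 2, prod)   # column products
apply(m, 1, prod)   # row products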
--
David Winsemius, MD
Heritage Laborat
I'm trying to understand how to use the multicore package. In
particular, I'm trying to work out what is covered where it says
this in the help file for the parallel function:
expr: expression to evaluate (do _not_ use any on-screen devices or
GUI elements in this code)
Can a funct
Thank you very much. Finally got it to work. However, I had to recode it from:
columns: subject/treatment/DV (where all my response data was in one
DV column) to columns: subject/treatment/day1/day2/day3/ (where my
response data is now in three different columns).
Is there a way to do that witho
Grzes wrote:
Hello,
I'm looking for a manual about using Java with R for beginners. Do you know of any?
Take a look at Rserve, rJava, etc..
-cj
Chambers' book Statistical Models in S mentioned 'column.prods()'. But
I don't find it in R. I'm wondering if there is an equivalent in R?
On Sun, Nov 8, 2009 at 7:32 PM, John Fox wrote:
> Dear Peng,
>
> I'm tempted to try to get an entry in the fortunes package but will instead
> try to answer your questions directly:
I cannot install 'fortunes'. What is the fortunes package about?
> install.packages("fortunes", repos="http://R-
Dear Sergios,
Why don't you try what I suggested originally? Adapted to this data set,
mod <- lm(cbind(day1, day2, day3) ~ Treatment, data=Dataset)
idata <- data.frame(Day=factor(1:3))
summary(Anova(mod, idata=idata, idesign=~Day))
Peter Dalgaard also pointed toward an article that describes how
try this:
Data <- read.table(textConnection(" Alpha Beta Gamma Delta
A .1 .2 .3 .4
B .2 .3 .4 .5
C .8 .9 .43 .13
D .13 .34 .34 .3"), header=TRUE)
closeAllConnections()
par(mfrow=c(2,2))
for (i in colnames(Data)) {
  plot(Data[[i]], main = i)
}
Dear all,
I am learning the subselect package in R. I want to use the genetic algorithm (GA)
to select some potentially useful variables, but a few questions puzzle me.
What I want to solve is this: I have one dependent column y and 219
independent columns x, with a total of 72 observations in the
dataset. I want t
Since you did not follow the posting guide and provide data, here is a way
that you can split by race & region and perform operations on each
subset of the data frame:
> # test data
> n <- 4500
> x <- data.frame(race=sample(c('a','b','c'), n, TRUE),
+ region=sample(1:9, n, TRUE), values=
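The preview cuts the example off; a self-contained sketch of the same split-and-apply idea (the values column is invented here):
n <- 4500
x <- data.frame(race   = sample(c('a', 'b', 'c'), n, TRUE),
                region = sample(1:9, n, TRUE),
                values = rnorm(n))                    # assumed numeric column
by_group <- split(x, list(x$race, x$region))          # one data frame per race/region
sapply(by_group, function(d) mean(d$values))          # e.g. a group mean for each subset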
On 09/11/2009 4:20 PM, Yihui Xie wrote:
Hi all,
It is recommended in ?'if' that we use 'else' right after '}' instead
of starting a new line, but I noticed deparse() will separate '}' and
'else' when the 'if...else' clause is used inside {...} (e.g. function
body). Here is an example:
## if/els
Hello Alfredo
?merge
HTH ...
Peter Alspach
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Alfredo
> Alessandrini
> Sent: Tuesday, 10 November 2009 1:16 p.m.
> To: r-help@r-project.org
> Subject: [R] aggregate data.fra
hist(distance_cm, breaks = 10)
is the **output** of a function and is not itself a function.
On the other hand,
function(distance_cm) hist(distance_cm, breaks = 10)
is a function.
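So one way to get one histogram per site is to pass such a function to lapply() over a split of the data; a self-contained sketch (the column names site and distance_cm are only illustrative):
dat <- data.frame(site = rep(c("A", "B"), each = 50),
                  distance_cm = rnorm(100, mean = 30, sd = 5))
par(mfrow = c(1, 2))
invisible(lapply(split(dat$distance_cm, dat$site),
                 function(distance_cm) hist(distance_cm, breaks = 10)))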
On Mon, Nov 9, 2009 at 7:01 PM, Johnston, Danielle
wrote:
> Hello,
>
>
>
> I am new to R. I often collect da
If you can manage to write out your data in separate binary files, one for each
column, then another possibility is using package ff. You can link those binary
columns into R by defining an ffdf dataframe: columns are memory mapped and you
can access those parts you need - without initially impo
Hi Mike,
I tried to run my data in SPSS and it works fine without any problems,
plug in my levels, plug in my covariate (since it is all within) and
get my Mauchly Tests.
I tried to rearrange the data so it looks like this
subj/treatment/day1/day2/day3
subject treatment day1 day2 da
Hi,
I have two data frames: ind_comp and dati_area
> ind_comp
   INDEX    indice
1      1 0.3081856
2      2 0.1368007
3      3 0.1290952
4      4 0.2905484
5      5 0.2686706
6      6 0.1122784
7      7 0.4493264
8      8 0.1932665
9      9 0.1982783
10    11 0.3724666
> dati_area
X_COORD
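The dati_area listing is cut off; assuming it also carries an INDEX column, ?merge (as suggested in the reply above) joins the two data frames on it. A sketch with invented coordinate values:
ind_comp <- data.frame(INDEX = c(1:9, 11),
                       indice = c(0.3081856, 0.1368007, 0.1290952, 0.2905484,
                                  0.2686706, 0.1122784, 0.4493264, 0.1932665,
                                  0.1982783, 0.3724666))
dati_area <- data.frame(INDEX = 1:11,                         # structure assumed
                        X_COORD = runif(11), Y_COORD = runif(11))
merge(ind_comp, dati_area, by = "INDEX")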
Hello,
I am new to R. I often collect data at multiple sites and need to
create separate graphs (such as scatterplots or histograms) of specific
variables for each site. I have tried to do this by splitting the data
frame and then using lapply, but it seems that the graphing commands
cannot b
Thank you all very much!
Dimitris Rizopoulos-4 wrote:
>
> yet another solution is:
>
> vec <- c(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE,
> FALSE)
>
> seq_len(rle(vec)$lengths[1])
>
>
> I hope it helps.
>
> Best,
> Dimitris
>
>
> Grzes wrote:
>> Hi !
>> I
Hello,
I'm looking for a manual about using Java with R for beginners. Do you know of any?
I have three matrices with the same row and column names, but different data.
e.g.
Data
  Alpha  Beta  Gamma  Delta
A   .1    .2    .3     .4
B   .2    .3    .4     .5
C   .8    .9    .43    .13
D   .13   .34   .34    .3
For each column, I would like to create a separate pl
Thank you all very much!
baptiste auguie-5 wrote:
>
> Hi,
>
> One way would be,
>
> vec[ cumsum(!vec)==0 ]
>
> HTH,
>
> baptiste
>
> 2009/11/9 Grzes :
>>
>> Hi !
>> I have a vector:
>> vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
>> and I'm looking for
I think that we probably need a sample of your original data.
A few lines of the dataset would probably be enough, as long as they are fairly
representative of the overall data set. See ?dput for a way of conveniently
supplying a sample data set.
Otherwise, off the top of my head, I would t
No luck as in...? What error did you encounter?
In your example data set, you only have 2 levels of each within-Ss
factor, in which case you shouldn't expect to obtain tests of
sphericity; as far as I understand it, sphericity necessarily holds
for repeated measures with only 2 levels, and tes
On Nov 9, 2009, at 5:24 PM, Hongwei Dong wrote:
Hi, R users,
I'm trying to transform a matrix A into B (see below). Does anyone know
how to do it in R? Thanks.
Matrix A (zone to zone travel time)
zone  z1   z2   z3
z1     0  2.9  4.3
z2   2.9    0  2.5
z3   4.3  2.5    0
> ztz <- read.table(textConnection(" z1
Hi all,
I hope that there might be a statistician out there who can help me with a
possible explanation for the following simple question.
Y1 <- lm(y ~ t1 + t2 + t3 + t4 + t5, data = temp)   # ordinary linear model
library(gam)
Y2 <- gam(y ~ lo(t1) + lo(t2) + lo(t3) + lo(t4) + lo(t5), data = temp)   # additive model
Hi,
When I tried to merge two datasets (a many-to-many merge), I ran into the
problem of how to stop a possible loop in the sampling arguments.
### My code is as follows ###
data1 <- matrix(data=c(1,1.2,1.3,"3/23/2004",1,1.5,2.3,"3/22/2004",2,0.2,3.3,"4/23/2004",3,1.5,1.3,"5/22/2004"), nrow=4, ncol=4)
This is not an answer to your question, but I have used SparseM
package to represent large travel time matrices efficiently.
?as.matrix.ssr
if the traveltime matrix is symmetric.
On 9 Nov 2009, at 5:24PM, Hongwei Dong wrote:
Hi, R users,
I'm trying to transform a matrix A into B (see belo
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Hongwei Dong
> Sent: Monday, November 09, 2009 2:24 PM
> To: R-help Forum
> Subject: [R] How to transform the Matrix into the way I want it ???
>
> Hi, R users,
>
> I'm trying
Dear Daniel,
Thanks for your reply.
Elasticity (what I am looking for) is defined as: dln(x)/dln(y) = dx/dy *
y/x (in words, the derivative of ln(x) in ln(y), which is equal to the
derivative of x in y, times the ratio between y and x)
(http://en.wikipedia.org/wiki/Elasticity_(economics)). I t
Hi, R users,
I'm trying to transform a matrix A into B (see below). Does anyone know how to
do it in R? Thanks.
Matrix A (zone to zone travel time)
zone  z1   z2   z3
z1     0  2.9  4.3
z2   2.9    0  2.5
z3   4.3  2.5    0
B:
from to time
z1   z1  0
z1   z2  2.9
z1   z3  4.3
z2   z1  2.9
z2   z2  0
z2   z3  2.5
z3   z1  4.3
z3   z2  2.5
z3   z3  0
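One way to get from A to B (a sketch; the replies in this thread are truncated in this preview) is to give A dimnames and stack it with as.data.frame() on a table:
A <- matrix(c(0,   2.9, 4.3,
              2.9, 0,   2.5,
              4.3, 2.5, 0), nrow = 3, byrow = TRUE,
            dimnames = list(from = c("z1", "z2", "z3"),
                            to   = c("z1", "z2", "z3")))
B <- as.data.frame(as.table(A), responseName = "time")
B   # columns from, to, time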
Because print.foo is not defined if you only include the function "g"
in your namespace.
remko
-
Remko Duursma
Post-Doctoral Fellow
Centre for Plants and the Environment
University of Western Sydney
Hawkesbury Campus
Richmond NSW 2753
Dept of B
My R process has been killed a few times, although the system
administrator did not do it. It happened when R attempted to allocate
a lot of memory. I'm wondering whether R would spontaneously kill
itself if it cannot allocate enough memory?
Sergios (Sergey) Charntikov wrote:
Based on what you suggested I did the following:
1. Dataset$Sessn <- as.factor(Dataset$Sessn)
2. mod <- lm(cbind(Sessn=="1", Sessn=="2") ~ Trtmt, data=Dataset)
3. idata <- data.frame(Sessn=factor(1:2))
4. Anova(mod, idata=idata, idesign=~Sessn)
ERROR: The er
Some variation of the following might be what you want:
df=data.frame(sex=sample(1:2,100,replace=T),snp.1=rnorm(100),snp.15=runif(100))
df$snp.1[df$snp.1>1.0]<-NA; #put some missing values into the data
x=grep('^snp',names(df)); x #which columns that begin with 'snp'
apply(df[,x],2,summary)
#or
a
Hi Joe,
You are right about the Behrens-Fisher problem. I was merely referring to
situations where the distribution of error terms is - assumed to be - known,
and not necessarily equal for all observations.
Thanks for pointing this out.
Best wishes,
Guido
--- On Mon, 9/11/09, jlu...@ria
On 5/11/2009, at 6:49 PM, Deepayan Sarkar wrote:
On Tue, Nov 3, 2009 at 3:57 PM, Rolf Turner
wrote:
(1) Is there a (simple) way of getting cloud() to do *both*
type="p" and type="h"? I.e. of getting it to plot the points
as points *and* drop a perpendicular line to the underlying plane?
Tried ezANOVA; no luck with my particular dataset.
Sincerely,
Sergios Charntikov (Sergey), MA
Behavioral Neuropharmacology Lab
Department of Psychology
University of Nebraska-Lincoln
Lincoln, NE 68588-0308 USA
On Mon, Nov 9, 2009 at 2:25 PM, Mike Lawrence wrote:
> Have you tried ezANOVA
You're looking for the assign() function.
See the first example in the help page for assign()
Something like
assign( paste( j,'.cd',i,'es.wash',sep='') , 1 )
instead of
names.cd[i].es.wash <- 1
paste() assembles the name as a character string, and then assign()
assigns a value to a variab
Hi all,
It is recommended in ?'if' that we use 'else' right after '}' instead
of starting a new line, but I noticed deparse() will separate '}' and
'else' when the 'if...else' clause is used inside {...} (e.g. function
body). Here is an example:
## if/else inside {}
> cat(deparse(parse(text='func
I've built a package that contains only two functions for a test run. They are:
g <- function(x) {
    x <- x^2
    class(x) <- "foo"
    x
}
print.foo <- function(x, ...) {
    cat("This is a test:\n")
    cat(x, "\n")
    invisible(x)
}
Simply testing these functi
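As the reply elsewhere in this digest points out, the S3 print method also has to be registered in the package's NAMESPACE file; a minimal sketch of the relevant directives:
# NAMESPACE
export(g)
S3method(print, foo)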
Hello R Forum users,
I was hoping someone could help me with the following problem. Consider the
following "toy" dataset:
AccessionSNP_CRY2SNP_FLCPhenotype
1NAA0.783143079
2BQA0.881714811
3BQA0.886619488
4AQB0.416893034
5AQB
Hi,
I am trying to overlay a dendrogram on top of an image plot, but I run into
the problem of the nodes at the root of the dendrogram not aligning properly
with the columns on my image. A simple solution to do this is to use the
function heatmap which automatically plots the tree on the top bu
Indeed, that's the solution. I normally take great care to follow the Windows
guidelines for package builds. There are many nuances, and so your continued
support on this is much appreciated.
> -Original Message-
> From: Duncan Murdoch [mailto:murd...@stats.uwo.ca]
> Sent: Monday, November 09,
On 11/9/2009 1:00 PM, STEFFEN Julie wrote:
Hello,
I have a question about persp function:
I made my classical matrix with x, y and z variables and I don't know why I
obtain a 3D image with overestimated heights.
How can you tell it overestimates heights? There's no scale given.
Duncan Murdoch
On 11/9/2009 3:13 PM, Doran, Harold wrote:
I've run into a problem building a package with R-2.10.0 that I haven't
encountered with prior versions. Build and check work fine. However, I
encounter an error at the install phase indicating it cannot open a perl
script, which is below. For complet
I've looked through ?split and run all of the code, but I am not sure that I
can use it in such a way to make it do what I need. Another suggestion was
using "lists", but again, I am sure that the process can do what I need, but
I am not sure it would work with so many observations.
I might have
Hi,
I have a dataset which has 10 numerical and 2 categorical variables (which I
code using indicator functions as we usually do... one has 3 levels, the other has
2 levels). I was wondering how I could use the bctrans function available in
library(alr3) to get a desired transformation on my model which
Thanks for your ideas. They are really helpful for me to think about my
question.
Cheers,
2009/11/9 David Winsemius
>
> On Nov 9, 2009, at 8:45 AM, rusers.sh wrote:
>
> Hi Johann,
>> Excellent. That is what i really want. A little problem is why the "c.n"
>> does not exist. Should the "c.n" in th
Have you tried ezANOVA from the ez package? It attempts to provide a
simple user interface to car's ANOVA (and when that fails, aov).
On Mon, Nov 9, 2009 at 1:44 PM, Sergios (Sergey) Charntikov
wrote:
> Hello everyone,
>
> I am trying to do within subjects repeated measures anova followed by the
On Mon, 9 Nov 2009, Achim Zeileis wrote:
On Mon, 9 Nov 2009, Johann Hibschman wrote:
I'm using R 2.10.0, with zoo 1.5-8. The release notes for zoo 1.5-8
claim a bug with unique for yearmon objects has been fixed, but I'm
still having problems.
1. Please report such problems (also) to the mai
Based on what you suggested I did the following:
1. Dataset$Sessn <- as.factor(Dataset$Sessn)
2. mod <- lm(cbind(Sessn=="1", Sessn=="2") ~ Trtmt, data=Dataset)
3. idata <- data.frame(Sessn=factor(1:2))
4. Anova(mod, idata=idata, idesign=~Sessn)
ERROR: The error SSP matrix is apparently of deficient rank
yet another solution is:
vec <- c(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE,
FALSE)
seq_len(rle(vec)$lengths[1])
I hope it helps.
Best,
Dimitris
Grzes wrote:
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking f
Try this:
head(vec, sum(cumprod(vec)))
The positions:
which(head(vec, sum(cumprod(vec))))
On Mon, Nov 9, 2009 at 4:44 PM, Grzes wrote:
>
> Hi !
> I have a vector:
> vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
> and I'm looking for a method which let me get only the fi
I've run into a problem building a package with R-2.10.0 that I haven't
encountered with prior versions. Build and check work fine. However, I
encounter an error at the install phase indicating it cannot open a perl
script, which is below. For completeness, I've copied my path as well showing
t
I should heed my own words: the 1% effect based on the marginal effect would
be
0.01 * abs(x) * margeff
I omitted the abs(x) in the last paragraph of my last email. Based on the
marginal effect, the expected change in probability would be 0.01*0.69*0.02,
which is 0.000138. This is not all too far
Hi all,
I'm creating a jpg file with width=1500, height=1000.
It is a graph showing 24 boxplots horizontally.
The x coordinates in the graph were not displayed in the jpg file, but they
ARE displayed in a pdf file.
Does anyone know what setting I should pay attention to in order to have
the x coo
Hi,
One way would be,
vec[ cumsum(!vec)==0 ]
HTH,
baptiste
2009/11/9 Grzes :
>
> Hi !
> I have a vector:
> vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
> and I'm looking for a method which let me get only the first values equal
> TRUE from this vector. It means that I
Use which()
vec_out <- which(vec == T)
-
Justin Montemarano
Graduate Student
Kent State University - Biological Sciences
http://www.montegraphia.com
How about
vec[1:min(which(vec==FALSE))-1]
This will return a zero-length (logical(0)) vector if vec[1] is FALSE
Nikhil
On 9 Nov 2009, at 2:38PM, David Winsemius wrote:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
On Nov 9, 2009, at 1:54 PM, Roberto Patuelli wrote:
Dear Daniel,
Thanks for your prompt reply.
Indeed I was aware of the possibility of computing at mean(x) or
doing the mean afterwards.
But what you suggest is marginal effects, right?
They might be called "marginal effects" by some.
Is
Yes, it is the marginal effect. The marginal effect (dy/dx) is the slope of
the gradient at x. It is thus NOT for a 1 unit increase in x, but for a
marginal change in x. Remember that, for nonlinear functions, the marginal
effect is more accurate in predicting a change in y the smaller (!) the
chan
On Nov 9, 2009, at 1:44 PM, Grzes wrote:
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking for a method which let me get only the first values
equal
TRUE from this vector. It means that I want to get a vector:
vec_out = TRUE TRUE
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking for a method which let me get only the first values equal
TRUE from this vector. It means that I want to get a vector:
vec_out = TRUE TRUE TRUE TRUE
or the positions of the values = TRUE: vec_ou
Hello -
I am trying to figure out R's transformation for interaction terms in a
linear regression.
My simple background understanding is that interaction terms are
generally calculated by multiplying the centred (0-mean) variables with
each other and then doing the regression. However, in this r
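For what it is worth, with two numeric predictors the x1:x2 term in R is the raw (uncentred) product of the columns, which is easy to check on simulated data:
set.seed(1)
d <- data.frame(x1 = rnorm(20), x2 = rnorm(20))
d$y <- 1 + d$x1 - d$x2 + 0.5 * d$x1 * d$x2 + rnorm(20)
coef(lm(y ~ x1 * x2, data = d))
coef(lm(y ~ x1 + x2 + I(x1 * x2), data = d))   # same coefficients: the interaction
                                               # column is just the raw product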
The output of summary prcomp displays the cumulative amount of variance explained
relative to the total variance explained by the principal components PRESENT in the
object. So, it is always guaranteed to be at 100% for the last principal component
present. You can see this from the code in s
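A small illustration with a built-in data set (USArrests): with a large tol only the first component survives, and, as described above, the proportions reported by summary() are computed from the components that are present in the object:
p_all <- prcomp(USArrests, scale. = TRUE)
p_tol <- prcomp(USArrests, scale. = TRUE, tol = 0.7)  # drops components with small sdev
summary(p_all)
summary(p_tol)   # only PC1 retained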
I don't know if it's "the fastest" way, but you can get there with
as.character(factor(exData$Condition, levels=c("c20", "c10", "c9",
"c5"), labels=c("AA", "BB", "CC", "DD")))
-Ista
On Mon, Nov 9, 2009 at 2:06 PM, phoebe kong wrote:
> Hi All,
>
> I have a dataset with a column named "Condition"
Mostly it is a conceptual difference. An unordered factor is one where there
is no inherent order to the levels, examples:
Color of car
Race
Nationality
Sex
State/Country of birth
Etc.
In the above, the order of the levels could be changed without it really
changing the meaning (think of the o
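The conceptual difference also shows up directly in the default contrasts R assigns (treatment/dummy coding for unordered factors, orthogonal polynomials for ordered ones):
f  <- factor(c("low", "medium", "high"),  levels = c("low", "medium", "high"))
of <- ordered(c("low", "medium", "high"), levels = c("low", "medium", "high"))
contrasts(f)    # contr.treatment: dummy codes against the first level
contrasts(of)   # contr.poly: linear (.L) and quadratic (.Q) trend contrasts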
Hi All,
I have a dataset with a column named "Condition",
Sample Condition
1    c20
2    c20
3    c10
4    c10
5    c9
6    c9
7    c5
8    c5
9    c20
10   c10
Could you let me know the fastest way to change c20->AA, c10->BB, c9->CC, c5->DD?
Dear Daniel,
Thanks for your prompt reply.
Indeed I was aware of the possibility of computing at mean(x) or doing the
mean afterwards.
But what you suggest is marginal effects, right? Isn't that the effect on y
of a 1-unit increase in x (what I was not interested in)? I'm interested in
the eff
Somebody might have done this, but in fact it's not difficult to compute the
marginal effects yourself (which is the beauty of R). For a univariate
logistic regression, I illustrate two ways to compute the marginal effects
(one corresponds to the mfx, the other one to the margeff command in Stata).
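The illustration itself is cut off in this preview; a self-contained sketch of the two quantities on simulated data (the marginal effect at the mean, as mfx reports it, versus the average of the observation-level effects, as margeff reports it):
set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-0.5 + 0.8 * x))
fit <- glm(y ~ x, family = binomial)
b <- coef(fit)
dlogis(b[1] + b[2] * mean(x)) * b[2]   # marginal effect at the mean (mfx-style)
mean(dlogis(b[1] + b[2] * x) * b[2])   # average marginal effect (margeff-style)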
Dear Sergios,
For repeated-measures designs, the Anova() function requires a multivariate
linear model fit to the "wide" version of the data set, in which each of the
repeated measures appears as a separate variable. It is necessary that you
have the same occasions observed for all subjects. For yo
Hello all:
I would like to test whether there are treatment effects on decomposition
rate, and I would like to inquire about the best, most appropriate means
using R.
I have plant decomposition data that is generally considered to follow an
exponential decay model as follows:
Wt = Wi * exp(-k * t)
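One natural starting point is to fit that model directly with nls(); a sketch on made-up data (treatment effects would then be tested on top of this, e.g. by letting k differ between treatments):
set.seed(1)
dec <- data.frame(t = rep(c(0, 30, 60, 120, 180), each = 4))
dec$Wt <- 50 * exp(-0.01 * dec$t) * exp(rnorm(nrow(dec), sd = 0.05))
fit <- nls(Wt ~ Wi * exp(-k * t), data = dec,
           start = list(Wi = 50, k = 0.01))
summary(fit)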
Hi: I'm not familiar with prcomp, but with the principal components function
in Bill Revelle's psych package one can specify the number of components
one wants to use to build the "closest" covariance matrix. I don't know
what tol is doing in your example, but it's not doing that.
[corrected dataset below]
Hello everyone,
I am trying to do within subjects repeated measures anova followed by the
test of sphericity (sample dataset below).
I am able to get either mixed model or linear model anova and TukeyHSD, but
have no luck with Repeated-Measures Assuming Sphericity or Se
On Nov 9, 2009, at 12:59 PM, David Winsemius wrote:
On Nov 9, 2009, at 12:30 PM, Dobrozemsky Georg wrote:
Hi!
When checking validity of a model for a large number
of experimental data I thought it to be interesting
to check the information provided by
the summary method programmatically.
S
All 8 variables are still in the analysis; I am just reducing the number
of components being estimated, I thought.
Example: 1 component, 8 variables; there is no way 1 component explains
100% of the variance of the 8-variable data set.
> princ = prcomp(df[,-1],rotate="varimax",scale=TRUE,tol=.9
On Nov 9, 2009, at 12:30 PM, Dobrozemsky Georg wrote:
Hi!
When checking validity of a model for a large number
of experimental data I thought it to be interesting
to check the information provided by
the summary method programmatically.
Still I could not find out which method to
use to get to
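The reply is truncated here; in general a summary() object is just a list whose pieces can be pulled out programmatically, e.g. for a linear model (a generic sketch, not specific to the poster's model):
fit <- lm(dist ~ speed, data = cars)
s <- summary(fit)
names(s)        # which components are available
coef(s)         # the coefficient table (estimate, std. error, t value, p value)
s$r.squared
s$sigma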
In the first PCA you ask how much variance of the EIGHT (!) variables is
captured by the first, second, ..., eighth principal component.
In the second PCA you ask how much variance of the THREE (!) variables is
captured by the first, second, and third principal component.
Of course you need only as
Look at it linearly?
On Mon, Nov 9, 2009 at 11:45 AM, zubin wrote:
> okay, an extreme case, only 1 component, explains 100%, something weird
> going on..
>
> > princ = prcomp(df[,-1],rotate="varimax",scale=TRUE,tol=.95)
> > summary(princ)
> Importance of components:
> PC1
Hello everyone,
I am trying to do within subjects repeated measures anova followed by the
test of sphericity (sample dataset below).
I am able to get either mixed model or linear model anova and TukeyHSD, but
have no luck with Repeated-Measures Assuming Sphericity or Separate
Sphericity Tests.
I a
okay, an extreme case, only 1 component, explains 100%, something weird
going on..
> princ = prcomp(df[,-1],rotate="varimax",scale=TRUE,tol=.95)
> summary(princ)
Importance of components:
                        PC1
Standard deviation     1.38
Proportion of Variance 1.00
Cumulative Proportion  1.00
Principal components analysis is a data reduction technique. It looks like
you have three axes that account for 100%. Make this reproducible.
On Mon, Nov 9, 2009 at 11:37 AM, zubin wrote:
> Hello, not understanding the output of prcomp, I reduce the number of
> components and the output continues to sh
Also,
formatC(3,width=3,flag='0')
formatC and sprintf are both referenced in the "See Also" part of the
format help page.
-Don
At 9:42 AM -0600 11/9/09, Marc Schwartz wrote:
On Nov 9, 2009, at 9:34 AM, anna freni sterrantino wrote:
Hi !
I'd like to create a vector that has this kind of
Hello,
I have installed R version 2.9.2, and everything
works fine, but when attempting to install version 2.10.0
I get:
running code in 'datasets.R' ... OK
comparing 'datasets.Rout' to './datasets.Rout.save' ... OK
make[4]: Leaving directory
`/home/csoliver/SAT-Algorithmen/OKplatform/ExternalSo