Hi, I wanted to make sure you were all aware of these upcoming events. There
is a seminar in Predictive Analytics on Sept 24-25 in Chicago, and two
following in D.C. (Oct.) and San Francisco (Nov.). This is intensive
training for managers, marketers, and IT people who need to make sense of
custom
On Fri, 22 Aug 2008, Fiona Giannini wrote:
Hi all,
Our section at work is looking at buying a high-powered computer (32 GB
RAM, 64-bit) to be able to run models on large datasets and have more
processing power for the software we commonly use. We mostly use R to
fit GLMs and GAMs. Our departme
As my post showed, it is a scaling issue. The function has so small a
peak that it is effectively 0 -- when scaled sensibly, integrate works
out of the box.
On Thu, 21 Aug 2008, Thomas Lumley wrote:
On Thu, 21 Aug 2008, Moshe Olshansky wrote:
The phenomenon is most likely caused by nume
On Fri, 22 Aug 2008, Bingxiang Miao wrote:
Hi all,
I have a binary file of 8*100 bytes. The structure of the file is as
follows: every eight bytes represent two values: the first four bytes are a
little-endian integer, the next four a little-endian 32-bit float. The struc
Thank you so much!
--
View this message in context:
http://www.nabble.com/-help--simulation-of-a-simple-Marcov-Stochastic-process-for-population-genetics-tp19085705p19101425.html
Sent from the R help mailing list archive at Nabble.com.
Wow, one of those binary file formats likely dumped straight from memory
by a C/C++ program. ...anyway, here is how I deal with
interleaved data types:
# Read all of the file as raw (bytes) values
fileSize <- file.info(pathname)$size;
raw <- readBin(pathname, what="raw", n=fileSize);
# Sanity c
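A self-contained sketch of the raw-bytes approach described above, applied to the poster's layout of 100 interleaved little-endian (int32, float32) pairs (the example file is written here just for illustration):

```r
# Write a small example file: 100 pairs of (int32, float32), little-endian
pathname <- tempfile()
con <- file(pathname, "wb")
for (i in 1:100) {
  writeBin(as.integer(i), con, size = 4, endian = "little")   # int32
  writeBin(as.numeric(i / 2), con, size = 4, endian = "little") # float32
}
close(con)

# Read all of the file as raw (bytes) values
fileSize <- file.info(pathname)$size
raw <- readBin(pathname, what = "raw", n = fileSize)

# Bytes 1-4, 9-12, ... hold the integers; bytes 5-8, 13-16, ... the floats
isInt <- rep(c(TRUE, FALSE), each = 4, times = 100)
ints   <- readBin(raw[isInt],  "integer", n = 100, size = 4, endian = "little")
floats <- readBin(raw[!isInt], "numeric", n = 100, size = 4, endian = "little")
```

Parsing the same raw vector twice with a logical mask avoids any per-record looping.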
Dear all,
Have you ever been stuck in an airport because your flight was delayed
or cancelled and wondered if you could have predicted it if you'd had
more data? This is your chance to find out.
I'd like to let you know about the American Statistical Association's
Data Expo 09. This year's chall
Hi,
I have genotype data for both cases and controls and would like to calculate
the HW p-value. However, since the count of one genotype is 0, I get a weird
result. Would someone help me figure it out, or confirm it is right?
Thanks a lot.
> library( "genetics" )
NOTE: THIS
Hi Miao,
I can write a function which takes an integer and produces a float whose
binary representation equals that of the integer, but this would be an
awkward solution.
So if nobody suggests anything better I will write such a function for you, but
let's wait for a better solution.
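One way to sketch the conversion just described without any explicit bit twiddling: round-trip the integer through a raw vector, then read those four bytes back as a single-precision float.

```r
# Reinterpret an integer's bytes as an IEEE-754 single-precision float.
# writeBin to a raw vector returns the bytes; readBin parses them as float32.
int_bits_to_float <- function(i) {
  readBin(writeBin(as.integer(i), raw(), size = 4, endian = "little"),
          what = "numeric", size = 4, n = 1, endian = "little")
}

int_bits_to_float(1065353216L)  # 0x3F800000 is the bit pattern of 1.0
```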
Hi all,
Our section at work is looking at buying a high-powered computer (32 Gb RAM,
64-bit) to be able run models on large datasets and have more processing power
for the software we commonly use. We mostly use R to fit GLMs and GAMs. Our
department uses Windows as the standard OS (we are up
So I am hoping this solution is simple, which I believe it is. I would like
to look up a value in one column of a data set and display the corresponding
value in the second column. For example:
TAZ  VACANT ACRES
100 45
101 62
102 23
103 64
104 101
105 280
So if
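A minimal sketch of the lookup, assuming the table above is a data frame `dat` with columns `TAZ` and `VACANT_ACRES` (names invented from the example header):

```r
# Rebuild the example table as a data frame
dat <- data.frame(TAZ = 100:105,
                  VACANT_ACRES = c(45, 62, 23, 64, 101, 280))

# match() finds the row whose TAZ equals the lookup key
dat$VACANT_ACRES[match(103, dat$TAZ)]  # looks up TAZ 103 -> 64
```

match() also vectorizes, so several keys can be looked up at once.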
The K-S test has Ho: dist(A) = dist(B) and Ha: dist(A) <> dist(B).
Rejecting Ho means that maintaining that dist(A) does not differ in any
way from dist(B) is untenable, because of the low p-value encountered.
If you wish to test a hypothesis, you must be able to calculate the
probability of seeing
Hi,
I've tried to figure this out using Intro to R and help(), to no avail
- I am new at this.
I'm trying to write a script that will read multiple files from a
directory and then merge them into a single new data frame.
The original data are in a tree-ring specific format, and so I've first
Thanks,
would it be possible to give an example of how I
can have more specific null hypothesis in R?
I am not aware of how to specify it for the K-S test in R.
And repeating my second question, what is a good way to measure the
difference between
observed and expected samples? Is the D statistic
Hi,
This may be an old question:
can anyone tell me how to call Perl from R?
thanks
Y.
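One common approach is to shell out with system2() and capture the output (guarded here in case perl is not installed; the one-line script is a throwaway example):

```r
# Call an external perl script from R and capture its stdout
if (nzchar(Sys.which("perl"))) {
  f <- tempfile(fileext = ".pl")
  writeLines('print "hello from perl\\n";', f)
  out <- system2("perl", f, stdout = TRUE)
}
```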
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting
Hi all,
I have a binary file of 8*100 bytes. The structure of the file is as
follows: every eight bytes represent two values: the first four bytes are a
little-endian integer, the next four a little-endian 32-bit float. The
structure of the following bytes is the same as
Hi Nitin,
I believe that you cannot take the null hypothesis to be that A and B come
from different distributions.
Asymptotically (as both sample sizes go to infinity) KS test has power 1, i.e.
it will reject H0:A=B for any case where A and B have different distributions.
To work with a finite samp
On Thu, 21 Aug 2008, [EMAIL PROTECTED] wrote:
Hi all,
I have a matrix of about 100,000 x 4 that I need to classify using the
euclidean metric. For that I am using the dist or daisy functions, but I
am afraid that the message: Error in vector("double", length) : vector
size specified is too large, means
Gary Collins wrote:
Any thoughts on the following I'd be most grateful - I'm sure there is
an easy and quick way to do this but I'm having a mental block this
evening. Essentially, I'm trying to replace missing data in my dataset
with reference values based on age and sex.
So an example datas
Hello,
We would like to perform a cross validation on a linear mixed model (lme)
and wonder if anyone has found something analogous to cv.glm for such
models?
Thanks, Mark
We are attempting to use nlme to fit a linear mixed model to explain bird
abundance as a function of habitat:
lme(abundance~habitat-1,data=data,method="ML",random=~1|sampleunit)
The data consist of repeated counts of birds in sample units across multiple
years, and we have two questions:
1)
Hello,
I have a question. Suppose that I have a function to estimate with gam (in the
mgcv package),
y=s(x1)+s(x2)+XB
where X is a vector of exogenous variables and x1 and x2 are explanatory
variables assumed parametric linear functions of X and other exogenous
variables Z. Is there a way to
Perform rollapply over the index rather than the series itself. The
result, sin.mx, is a zoo series with three columns: the original
series, the series of maxima, and their index into the original series.
library(zoo)
library(chron)
t1 <- chron("1/1/2006", "00:00:00")
t2 <- chron("1/31/2006", "23:
Hi,
I had a question about specifying the Null hypothesis in a significance
test.
Advance apologies if this has already been asked previously or is a naive
question.
I have two samples A and B, and I want to test whether A and B come from
the same distribution. The default Null hypothesis would be
library(zoo)
library(chron)
t1 <- chron("1/1/2006", "00:00:00")
t2 <- chron("1/31/2006", "23:45:00")
deltat <- times("00:15:00")
tt <- seq(t1, t2, by = times("00:15:00"))
d <- sample(33:700, 2976, replace=TRUE)
sin.zoo <- zoo(d,tt)
# there are ninety-six readings in a day
d.max <- rollapply(sin.zoo,
On Thu, 21 Aug 2008, Moshe Olshansky wrote:
The phenomenon is most likely caused by numerical errors. I do not know
how 'integrate' works, but numerical integration over a very long
interval does not look like a good idea to me.
Numerical integration over a long (or infinite) interval is fine. The
Anyone who can help me with the following question?
How can I add weight to [x,y] coordinates on a graph/scatterplot?
Background:
Monte Carlo simulation generated 730,000 [x,y] coordinates with a weight
attached (from 0-0.5).
Both x and y are rounded and fit on a raster with x-axis 0-170 mo
Altaweel, Mark R. wrote:
Hi,
I am trying to do a kruskal wallis test on two lists, fVisited and cSN:
fVisited[[1]]
[1] 0.17097623 0.30376141 0.17577266 0.14951855 0.03959753 0.08096217
0.05744888 0.02196257
cSN[[1]]
[1] 0.08557303 0.36477791 0.19601252 0.12981040 0.05351320 0.1038554
On Tue, 19 Aug 2008, Farley, Robert wrote:
While I'm trying to catch up on the statistical basis of my task, could
someone point me to how I should fix my R error?
The variables in the formula in rake() need to be the raw variables in the
design object, not summary tables.
-thomas
You can get point estimates by supplying the sampling weights as weights
to the quantile regression functions in Roger Koenker's quantreg package.
This is useful for smoothing (with the rqss() function); it is not clear
how useful it is for straight-line regression.
You should get valid inte
Hi all,
I have a matrix of about 100,000 x 4 that I need to classify using the
euclidean metric. For that I am using the dist or daisy functions, but I
am afraid that the message: Error in vector("double", length) : vector
size specified is too large, means too many rows.
Can anyone suggest how I shou
merge() has by.x and by.y arguments. If you use them, you can merge
data frames that have different column names. You can specify columns
by name or by number. This is mentioned in the help for merge.
Try
merge(Data1, Data2, by.x=1, by.y=2)
which will keep all of the columns in Data2, or
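A self-contained illustration of the by.x/by.y merge (Data1/Data2 invented for the example, with the key column in a different position and under a different name in each frame):

```r
# Two frames whose key columns differ in name and position
Data1 <- data.frame(id = 1:3, a = c("x", "y", "z"))
Data2 <- data.frame(b = 7:9, key = 3:1)

# Join column 1 of Data1 ("id") to column 2 of Data2 ("key")
m <- merge(Data1, Data2, by.x = 1, by.y = 2)
```

The merged frame keeps one copy of the key (named after by.x) plus the remaining columns of both inputs.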
Hi,
I would like to skip a preset number of values when reading in a binary file
(readBin). Is there any way to do this?
kind regards,
and thanks in advance for the help.
Pieter Beck
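One sketch of skipping bytes before readBin: open a binary connection, seek() past the unwanted prefix, then read as usual (alternatively, readBin the skipped part into a throwaway vector). The example file here is invented for illustration:

```r
# Write ten int32 values, then skip the first three when reading back
f <- tempfile()
writeBin(1:10, f, size = 4, endian = "little")

con <- file(f, "rb")
seek(con, where = 12)  # skip the first three int32 values (3 * 4 bytes)
vals <- readBin(con, "integer", n = 7, size = 4, endian = "little")
close(con)
```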
Hi, there.
I am looking for a package to fit the following binomial-normal model
Y_ij ~ Binomial (N_ij, P_ij)
Logit(P_ij) = \beta_0i+\beta_1*X+e_ij
\beta_0i ~ N(\beta_0,\sigma_b^2)
e_ij ~ N(0,\sigma^2)
This model has two variance components, one random effect \beta_0i and
one error e
Hi,
I am not sure it is best to use a binomial distribution for a continuous
bounded variable. A beta distribution would be more appropriate, although I
don't know how to define one for the gam() function. On the other hand, the
beta distribution is closely linked to the gamma distribution, so m
Hi,
I am trying to do a kruskal wallis test on two lists, fVisited and cSN:
fVisited[[1]]
[1] 0.17097623 0.30376141 0.17577266 0.14951855 0.03959753 0.08096217
0.05744888 0.02196257
cSN[[1]]
[1] 0.08557303 0.36477791 0.19601252 0.12981040 0.05351320 0.10385542
0.03539577 0.03106175...
Any thoughts on the following I'd be most grateful - I'm sure there is
an easy and quick way to do this but I'm having a mental block this
evening. Essentially, I'm trying to replace missing data in my dataset
with reference values based on age and sex.
So an example dataset is
set.seed(1)
X =
Hi,
[EMAIL PROTECTED] wrote:
I have tried
mean(Incubation)
and
mean(as.numeric(Incubation))
what about
mean(Incubation, na.rm=TRUE)
?
I get the following result:
> mean(Incubation, na.rm=TRUE)
Time difference of 4.295455 hours
>
but I think that, since there are so many NA values in Incu
Thanks for all the help. I'll do some more reading.
Brian
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> [EMAIL PROTECTED]
> Sent: Thursday, August 21, 2008 12:15 PM
> To: r-help@r-project.org
> Subject: [R] mean for vector with NA
>
> I am trying to find the mean for the elements in the vector
>
> In
I am trying to find the mean for the elements in the vector
Incubation=as.POSIXlt(OnsetTime)-as.POSIXlt(MealTime)
where
OnsetTime=c(NA,"1940-04-19 00:30","1940-04-19 00:30","1940-04-19
00:30",NA,"1940-04-18 22:30","1940-04-18 22:30","1940-04-19
02:00","1940-04-19 01:00","1940-04-18 23:00",N
Hi
Avram Aelony wrote:
Dear R community,
I find R fantastic and use R whenever I can for my data analytic needs.
Certain data sets, however, are so large that other tools seem to be
needed to pre-process data such that it can be brought into R for
further analysis.
Questions I have for t
Dear R list members,
How to produce barplots anchored to the x-axis (not floating
above the x-axis) with a box around?
With both following codes, the lower horizontal line of the
box is below the y = 0 line:
# first code
x <- c(1,2,3,4)
barplot(x,yaxs='i')
box()
# second code
x <- c(1,2,3,4)
The RSQLite package can read files into an SQLite database without the data
going through R. The sqldf package provides a front end that makes it
particularly easy to
use - basically you need only a couple of lines of code. Other databases have
similar facilities. See:
http://sqldf.googlecode.com
On Th
Dear R community,
I find R fantastic and use R whenever I can for my data analytic
needs. Certain data sets, however, are so large that other tools
seem to be needed to pre-process data such that it can be brought
into R for further analysis.
Questions I have for the many expert contrib
Thanks to everyone who responded. It was all very helpful.
AA.
A.Ajakh wrote:
>
> Hi All,
>
> Imagine that we have a function defined as follows:
> foo <-
> # comments line 1
> # comments line 2
> # etc..
> function(x){
> x <- 2
> }
> I can use args(foo) to get the arguments for foo. Is there
Hi,
I have this message:
Warning message:
In mer_finalize(ans, verbose) :
iteration limit reached without convergence (9)
How do I increase this limit?
I tried control=list(maxit=1000) and maxIter, but neither works.
Any idea
Thanks
Ronaldo
--
Opportunities are usually disguised as hard work, s
Hello,
I've been struggling for more than a few hours with a very simple problem
having to do with axis labeling and haven't had any success searching in the
online resources for the proper syntax. This is the basic task I'm trying
to accomplish:
I want to label the tick marks on a contour grap
Hi,
I am working on calculating X^2 for some matrix (most of them have either two
rows or 2 columns) by using chisq.test in R. However when there are 0s in the
matrix, chisq.test does not work. For example:
> elements <- matrix( c( 0, 0, 9, 5, 71, 168), nr = 2 )
> element
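One hedged workaround for the example above: a column (or row) that sums to zero contributes nothing to the table, so it can be dropped before testing; simulate.p.value = TRUE is another option when expected counts are small.

```r
# The poster's table: the first column is all zeros
elements <- matrix(c(0, 0, 9, 5, 71, 168), nrow = 2)

# Keep only columns with positive totals, then test the reduced table
keep <- colSums(elements) > 0
res <- chisq.test(elements[, keep])
```

(chisq.test may still warn that the approximation is doubtful when expected counts are small; simulate.p.value = TRUE sidesteps that.)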
No, a higher p-value does not mean the variances are more likely to be 0, this
is a common misconception. If the variances are equal (the null hypothesis is
true), then for an accurate test the p-value will follow a uniform(0,1)
distribution, so p-values of 0.01, 0.049, 0.051, 0.5, 0.99, and 1
On Thu, 21 Aug 2008, Luke Tierney wrote:
On Wed, 20 Aug 2008, Eric Rupley wrote:
(1) ...need to speed up a monte-carlo sampling...any suggestions about how
I can get R to use all 8 cores of a mac pro would be most useful and very
appreciated...
Using something like the snow package for
On Wed, 20 Aug 2008, Eric Rupley wrote:
(1) ...need to speed up a monte-carlo sampling...any suggestions about how I
can get R to use all 8 cores of a mac pro would be most useful and very
appreciated...
Using something like the snow package for explicitly parallelizing
your computations
Dan,
The real problem is the use of csv files. csv files don't handle missing
values
("#VALUE" is most likely from Excel), dates, or other complications very
well.
Read your Excel file directly into
R with one of the packages designed specifically for that purpose. I
recommend
RExcel (Windows o
On Thu, Aug 21, 2008 at 04:20:57PM +0100, Williams, Robin wrote:
> Hi Dan,
> Thanks for the reply, yes, I am using read.csv on the attached file.
OK, so how about using the colClasses argument. Your problem is that
some malfunctioning software has inserted the value "#VALUE!" into
some of your
On 08/21/08 17:48, Mario Maiworm wrote:
> >>> Two comments. First, it isn't clear to me why you want the upper
> >>> bound to differ from 1. Apparently you have some theoretical reason
> >>> for using a cumulative gaussian. Wouldn't the same theory tell you
> >>> that the upper bound should be 1
>>> Two comments. First, it isn't clear to me why you want the upper
>>> bound to differ from 1. Apparently you have some theoretical reason
>>> for using a cumulative gaussian. Wouldn't the same theory tell you
>>> that the upper bound should be 1?
the reason why I use cumulative gaussians is
Dear all,
I need your advice, since I am looking for an implementation of the parabolic
cylinder function in R. I found implementations of the hypergeometric
functions (the Whittaker and the confluent hypergeometric functions) in the
package fAsianOptions, but the parabolic cylinder function was unfor
> is it possible to sort a file by the row names?
I presume you mean a data frame rather than a file. In which case the
answer is yes:
df <- data.frame(a=1:10, b=rnorm(10), row.names=sample(letters[1:10]))
df[order(rownames(df)),]
Regards,
Richie.
Mathematical Sciences Unit
HSL
-
On Thu, 21 Aug 2008, someone without a real name wrote:
is it possible to sort a file by the row names?
No, as files do not have row names. But data frames do, so perhaps
DF[sort.list(row.names(DF)), ]
is what you are looking for.
--
Brian D. Ripley, [EMAIL PROTECTED]
P
I have two sets of data, Data1 and Data2. Data1 has a header and Data2 does
not. I would like to merge the two data sets after removing some columns
from Data2.
I am having a problem merging, so I had to write and read the final data and
specify "header=F" so the merge can be done by "V1". Is the
is it possible to sort a file by the row names?
Dear Members,
I am working on the elastic net and using an R package for that. I
have two matrices. My response is a matrix of size 50x50 and my predictor is
also the same
size. I want to extract only columns from the matrix, do the elastic net
analysis, and then store them as a matrix.
library(ela
There is a nice paper by Yssaad-Fesselier and Knoblauch on "Modelling
Psychometric Functions in R".
http://hal.archives-ouvertes.fr/docs/00/13/17/99/PDF/B125.pdf
You might also be interested in this:
http://www.journalofvision.org/5/5/8/article.aspx
which comes from the same group as the psignifi
Hi Robin,
You haven't said where you're getting the data from. But if the answer
is that you're using read.table, read.csv or similar to read the data
into R, then I advise you to go back to that stage and get it right
from the outset. It's very, very common to see people who are
relatively new to
This seems to be FAQ Q7.10
On Thu, 21 Aug 2008, Williams, Robin wrote:
Hi all,
I am very confused with class.
I am looking at some weather data which I want to use as explanatory
variables in an lm. R has treated these variables as factors (i.e. with
different levels), whereas I want them tre
Williams, Robin metoffice.gov.uk> writes:
>
> Hi all,
> I am very confused with class.
> I am looking at some weather data which I want to use as explanatory
> variables in an lm. R has treated these variables as factors (i.e. with
> different levels), whereas I want them treated as discrete
Hi Mario,
in many applications there is not much difference between logistic and Gaussian
distributions (just as logit and probit models often produce similar fits)...
Moreover, it's possible to fit sigmoidal curves using models such as the
(log-)logistic,
where the lower and/or upper limits are esti
> > >>> > -Original Message-
> > >>> > From: [EMAIL PROTECTED]
> > >>> > [mailto:[EMAIL PROTECTED] On Behalf Of Mario Maiworm
> > >>> > Sent: Thursday, August 21, 2008 7:05 AM
> > >>> > To: r-help@r-project.org
> > >>> > Subject: [R] psychometric functions
> > >>> >
> > >>> > Hi,
> > >>> >
Hi all,
I am very confused with class.
I am looking at some weather data which I want to use as explanatory
variables in an lm. R has treated these variables as factors (i.e. with
different levels), whereas I want them treated as discretely measured
continuous variables. So I need to reassign t
Bernhard and Irina,
While tailoring the Rcmdr menu directly does work, I strongly recommend
against it. It will lead to mass confusion, most likely for the author,
several
months from now when it is installed on a different machine or when R or
Rcmdr is updated.
It is much cleaner to have a sepa
I don't think you would be too far from being able to use R
for this. I haven't thought about this model, but if you could
write out the likelihood, you *might* be able to use the MML procedures
that are used for similar psychometric functions in ltm, which I think
uses optim()
z3mao gmail.com> writes:
>
>
> Hi, this is my first time using R. I want to simulate the following process:
> "in a population of size N, there are i individuals bearing genotype A, the
> number of those bearing A is j in the next generation, which following a
> binominal distribution (choose j
Thank you, Harold. Hmm, that's bad news. I will have a look at the ltm package,
but right now I feel like I should lean back and use Matlab, and then get
the fit results into R for further analyses...
mario
__
Mario Maiworm
Biological P
On Thu, Aug 21, 2008 at 03:00:51AM -0700, z3mao wrote:
>
> Hi, this is my first time using R. I want to simulate the following process:
> "in a population of size N, there are i individuals bearing genotype A, the
> number of those bearing A is j in the next generation, which following a
> binomin
Yes, indeed, the ltm package has the function tpm() that can fit
Birnbaum's three parameter model that allows for a guessing parameter
(i.e., when the ability levels \theta -> -Inf, the probability for
correct response is allowed to be nonzero) but it does not have a
function to incorporate
Hi Yasir,
Try the following reference:
A heuristic approach for the generation of multivariate random samples
with specific marginal distributions and correlation matrix, Dimos C.
Charmpis and Panayiota L. Panteli, Computational Statistics 19, 283-300,
2004.
I have the R code, please let me know
I am pretty certain a function for this model does not exist. Jan
Deleeuw or Dimitris Rizopolous may suggest otherwise. There is a package
for a model that would allow for the lower asymptote of the function to
be > 0; it does not however, allow the upper asymptote to vary from 1
(well, it tends to
Hi Ben,
Try the following reference:
Implementing Statistical Criteria To Select Return Forecasting Models:
What do We Learn? By Peter Bossaerts and Pierre Hillion, Review of
Financial Studies, Vol. 12, No. 2.
I have created an R function which implements Bossaerts and Hillion's
methodologies. I
Thanks for the nice solutions; I'll look into them because they look much
prettier than my current solution. Quick and dirty, I replaced some values in
the boxplot's stats (z$stats) with the following:
#replace quantile
for (i in 1:12) {
z$stats[1,i] <- max(reeks$VALUE * match(reeks$MONTH, i), na.rm = TRUE)
Joris Meijerink wrote:
Hi,
I'm new to the whole R thing as a replacement for Matlab; not disappointed
so far ;)
I found out how to make nice-looking boxplots, but I would also like to make a
boxplot with 5% and 95% instead of the standard 25% and 75% quantiles.
My csv input looks something li
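A hedged alternative to patching boxplot()'s output after the fact: compute the five statistics (5%, 25%, 50%, 75%, 95%) yourself and hand them to bxp(), which draws a box plot from a precomputed stats matrix (the data here are simulated for illustration):

```r
# Simulated data standing in for the poster's csv values
set.seed(1)
x <- rnorm(200)

# One column per box: the five statistics bxp() expects, bottom to top
st <- matrix(quantile(x, c(0.05, 0.25, 0.50, 0.75, 0.95)), ncol = 1)
z <- list(stats = st, n = length(x), conf = NULL,
          out = numeric(0), names = "x")
bxp(z)  # whiskers now sit at the 5% and 95% quantiles
```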
Hi,
I want to fit some psychophysical data with cumulative Gaussians. There is
quite a convenient toolbox for Matlab called 'psignifit' (formerly known as
'psychofit'). It allows the lower bound of the sigmoid to vary slightly from
zero, as well as the upper bound to vary from one. With these two fr
Joris,
I found this (http://ceae.colorado.edu/~balajir/r-session-files/) on the web.
It will do exactly what you want. Get the files:
myboxplot-stats.r
myboxplot.r
Leesferry-mon-data.txt <= example data
The usage is:
#Boxplots
#Source the ‘myboxplot’ codes from Balaji’s directory.
source("
On Thu, 21 Aug 2008, [EMAIL PROTECTED] wrote:
Hi
I have a question (which may be an obvious one). It is about an idiom
which I have seen quite often:
o <- order(x); x <- x[o]
vs. the alternative
x <- sort(x)
I am just wondering as to the rationale behind the order/reindex idiom vs
sorting.
Hi
I have a question (which may be an obvious one). It is about an idiom which I
have seen quite often:
o <- order(x); x <- x[o]
vs. the alternative
x <- sort(x)
I am just wondering as to the rationale behind the order/reindex idiom vs
sorting. Especially as there seems to be a marked performa
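Where the order/reindex idiom earns its keep: one order() permutation can reindex several parallel vectors consistently, which sort() alone cannot do.

```r
# Two parallel vectors that must stay aligned
x <- c(3, 1, 2)
y <- c("c", "a", "b")

# order() returns the permutation; apply it to both vectors
o <- order(x)
x2 <- x[o]  # 1 2 3
y2 <- y[o]  # "a" "b" "c", still aligned with x2
```

sort(x) would reorder x alone and lose the correspondence with y.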
> Ill[Ill != "No"]
My entry for the obfuscated-R contest:
IlI<-'No';Ill[Ill!=IlI]
SCNR
Philipp
--
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bioinformatik
Technische Universität München
Wissenschaftszentrum Weihenstephan
85350 Freising, Germany
http://mips.gsf.de/staff/pagel
_
Hi All,
I am trying to estimate a functional form in terms of cumulants /
moments from the results of a MC simulation. My first idea was to use an
Edgeworth expansion, but this looks pretty difficult to implement.
Before I go ahead and write some code to do this, could anyone tell me
if this has al
Hi, this is my first time using R. I want to simulate the following process:
"in a population of size N, there are i individuals bearing genotype A; the
number bearing A in the next generation, j, follows a binomial distribution
(j drawn from 2*N trials with p = i/(2*N))"; to plot the
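A minimal sketch of the process described above (a Wright-Fisher-style step): the allele count in the next generation is drawn as Binomial(2N, i/(2N)). Starting count and population size are invented for the example.

```r
# Simulate a trajectory of allele counts over ngen generations
set.seed(1)
N <- 100          # population size (2N gene copies)
ngen <- 50
traj <- integer(ngen)
traj[1] <- 20     # start with 20 copies of A among 2N
for (g in 2:ngen)
  traj[g] <- rbinom(1, size = 2 * N, prob = traj[g - 1] / (2 * N))

plot(traj, type = "l", xlab = "generation", ylab = "copies of A")
```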
Dear Irina,
Though you asked explicitly about writing an Rcmdr plugin package, I
just wanted to add that the former approach of tailor-made menus in
the Commander still works. That is, just copy your R file with the
tcl/tk functions into the /etc directory of the Rcmdr and include
your m
On Thu, 21 Aug 2008, Christoph Scherber wrote:
Dear all,
Thanks to Brian Ripley for pointing this out. If I understand it correctly,
this would mean that looking at the parameter estimates, standard errors and
P-values in summary.lme only makes sense if no interaction terms are present?
You
Christoph Scherber wrote:
> Dear all,
>
> Thanks to Brian Ripley for pointing this out. If I understand it
> correctly, this would mean that looking at the parameter estimates,
> standard errors and P-values in summary.lme only makes sense if no
> interaction terms are present?
Yes and no. What it
Dear all,
Thanks to Brian Ripley for pointing this out. If I understand it correctly, this would mean that
looking at the parameter estimates, standard errors and P-values in summary.lme only makes sense if
no interaction terms are present?
My conclusion would then be that it is better to rel
Please read the help for anova.lme, and note the 'type' argument. You are
comparing apples and oranges here (exactly as if you did this for a linear
model fit).
Because you have a three-way interaction in your model, looking at the
(marginal) t-tests for any other coefficient than the third-o
To all:
Thank you for your suggestions for help with this. I've not yet had a
chance to investigate these things yet, but will do so soon!
Again, thanks for the suggestions.
Nicky Chorley
Dear all,
When analyzing data from a climate change experiment using linear mixed-effects
models, I recently
came across a situation where:
- the summary(model) showed a significant difference between the levels of a
two-level factor,
- while the anova(model) showed no significance for that fa
Hi Daren,
>> Small progress, ...
m4 <- list(m1=m1, m2=m2, m3=m3)
boxplot(m4)
It's always a good idea to have a look at your data first (assuming you
haven't). This shows that the reliable instrument is m2.
HTH, Mark.
Daren Tan wrote:
>
>
> Small progress, I am relying on levene test to che
The phenomenon is most likely caused by numerical errors. I do not know how
'integrate' works, but numerical integration over a very long interval does not
look like a good idea to me.
I would do the following:
f1<-function(x){
return(dchisq(x,9,77)*((13.5/x)^5)*exp(-13.5/x))
}
f2<-function(y){
Jim Lemon schrieb:
On Wed, 2008-08-20 at 13:36 +0200, Stefan Uhmann wrote:
Dear R-Helpers,
I need the centroid of circular data, and (because the function used does
not provide the centroid coordinates, or did I miss something?) tried it the
indirect way and just computed the Cartesian coordina
Hi,
I'm new to the whole R thing as a replacement for Matlab; not disappointed
so far ;)
I found out how to make nice-looking boxplots, but I would also like to make a
boxplot with 5% and 95% instead of the standard 25% and 75% quantiles.
My csv input looks something like:
LOCATIONFILTE
From ?integrate:
When integrating over infinite intervals do so explicitly, rather
than just using a large number as the endpoint. This increases
the chance of a correct answer - any function whose integral over
an infinite interval is finite must be near zero for most of th
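The advice from ?integrate, illustrated: pass Inf explicitly rather than a large finite endpoint, because the adaptive subdivision of a huge finite interval can miss a narrow peak entirely.

```r
# Infinite limits let integrate() transform the interval properly
good <- integrate(dnorm, -Inf, Inf)$value   # close to 1

# A huge finite interval can miss the peak and come out far too small
bad <- integrate(dnorm, -1e6, 1e6)$value
```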