I am calling fft and getting a "non-numeric" error:
+ fit <- lm(Quantity ~ DayOfYear, .sublist)
+ # Make the time series
+ x <- as.numeric(rep(0,512))
+ x <- merge(residuals(fit), x)
+ # Transform range to -pi - pi
+ x <- x - pi
+ x <- x * (2
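A minimal, self-contained sketch of the usual fix, on made-up data (the names Quantity and DayOfYear echo the snippet above): keep the series as a plain numeric vector rather than the data.frame that merge() returns, so that fft() sees numeric input.
set.seed(1)
sublist <- data.frame(DayOfYear = sort(sample(1:365, 200)),
                      Quantity  = rnorm(200))
fit <- lm(Quantity ~ DayOfYear, data = sublist)
x <- numeric(512)                        # zero-padded series of length 512
x[sublist$DayOfYear] <- residuals(fit)   # stays a numeric vector, not a data.frame
spec <- fft(x)                           # fft() now gets numeric input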
Hi
Dbfile contains
"bin" "TCC_TCA" "TCA_CR""TCC_CR""Time" "sn.rnc"
1 301 38 365 (08/28/08 00:00:02) "50.20"
2 302 39 358 (08/28/08 00:00:07) "50.20"
3 319 43 377 (08/28/08 00:00:12) "50.2
>>Tdf
> bin TCC_TCA TCA_CR TCC_CR Time sn.rnc
> 117 117 258 27314 (08/28/08 00:09:42) 50.21
> 118 118 251 30291 (08/28/08 00:09:47) 50.21
> 119 119 247 28289 (08/28/08 00:09:52) 50.21
> 120 120 251 29282 (08/28/08 00:09
Hi
>Tdf
bin TCC_TCA TCA_CR TCC_CR Time sn.rnc
117 117 258 27314 (08/28/08 00:09:42) 50.21
118 118 251 30291 (08/28/08 00:09:47) 50.21
119 119 247 28289 (08/28/08 00:09:52) 50.21
120 120 251 29
Hi, All. I'd like to calculate effect sizes for aov or lme and seem
to be having a bit of a problem. Partial eta-squared would be my first
choice, but I'm open to suggestions.
Here is the aov version:
> fit.aov <- (aov(correct ~ cond * palette + Error(subject),
data=data))
> summary(fit.aov)
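For the single-error-stratum case, partial eta-squared can be read straight off the aov summary table; a minimal sketch on made-up data (with Error(subject) you would pull the sums of squares from the relevant error stratum instead):
d <- data.frame(correct = rnorm(120),
                cond    = gl(2, 60),
                palette = gl(3, 20, 120))
fit <- aov(correct ~ cond * palette, data = d)
tab <- summary(fit)[[1]]                 # ANOVA table: Df, Sum Sq, Mean Sq, ...
ss  <- tab[, "Sum Sq"]
ss_err <- ss[length(ss)]                 # residual sum of squares
partial_eta2 <- ss[-length(ss)] / (ss[-length(ss)] + ss_err)
names(partial_eta2) <- rownames(tab)[-length(ss)]
partial_eta2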
Hi Yuan,
It is not elegant, but it may work for you:
f<-as.factor(c("a","b","a"))
f.freq<-data.frame(table(f))
f.freq
lower.freq<-2
f.freq.subset<-subset(f.freq,f.freq$Freq>=lower.freq)
f.freq.subset
f.selected<-f[f %in% f.freq.subset$f]
f.selected<-factor(f.selected)
f.selected
Best wishes,
Hi,
how do I remove levels whose frequency is less than a specified number, such as 2? i.e.:
> f<-as.factor(c("a","b","a"))
> f
[1] a b a
Levels: a b
I want to remove level b because it occurs fewer than 2 times.
> f
[1] a a
Levels: a
Evening all:
Stepping away from the stats methodology questions for a moment, I have a
housekeeping question for when it comes time to make the jump to v2.7.2.
I'm running v2.7.1 on an XP system. I have a suspicion that, by way of
experimentation with a couple of shell choices along the way,
I'm looking for something along the lines of
which ( table ( x ) == max ( table ( x ) ) )
to find the most common level of one factor
within each combination of several other factors. For instance, I've got
> X <- data.frame (
+ x = factor ( sample ( c ( "A" , "B" , "C" , "D" ) , 20 , r = T ) )
+ , z1 = fac
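A minimal sketch with made-up grouping factors z1 and z2 (the example above is truncated): apply the table()/which.max() idea within each cell via tapply().
set.seed(1)
X <- data.frame(x  = factor(sample(c("A", "B", "C", "D"), 20, replace = TRUE)),
                z1 = factor(sample(c("p", "q"), 20, replace = TRUE)),
                z2 = factor(sample(c("u", "v"), 20, replace = TRUE)))
# most common level of x within each z1/z2 cell (ties go to the first level)
tapply(X$x, list(X$z1, X$z2), function(v) names(which.max(table(v))))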
Thanks Peter!
I hadn't realized those packages were already installed. I used require("pkg")
and so far it seems OK. Anyway, I also installed R-devel as I think I may
need to install more packages in the near future.
Thanks all of you who answered.
Regards,
Omar
Peter Dalgaard wrote:
>
> Nose
Dear Goran,
I need an EM algorithm for finding the maximum likelihood estimates for a
negative binomial distribution.
I saw your web post on the R help network and wonder if you might be
willing to share your code or algorithm.
It would be greatly appreciated.
Thank you,
Dave LeBlond
Sr. Statistici
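Not the EM code requested above, but a minimal direct maximum-likelihood sketch on simulated data, using MASS::fitdistr, which may be enough for comparison:
library(MASS)
set.seed(42)
y <- rnbinom(500, size = 2, mu = 5)      # simulated negative binomial sample
fitdistr(y, "negative binomial")         # ML estimates of size and mu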
All -
My question is a bit involved, so bear with me.
I have some data that looks like:
Lake LL LW
81 2.176091259 1.342422681
81 2.176091259 1.414973348
81 2.176091259 1.447158031
81 2.181843588 1.414973348
81 2.181843588 1.447158031
81 2.
On Fri, Aug 29, 2008 at 2:01 PM, Michael Grant
<[EMAIL PROTECTED]> wrote:
> Why Model II Regression? My experience is that for purposes of
> prediction, the difference between Model I and Model II fits can be
> quite significant, mostly, of course, near the extremes of the predictor
> variable(s).
That slipped away from me before I could add this
link to a useful thread from Torsten Hothorn.
?RSiteSearch would probably have got you there.
http://tolstoy.newcastle.edu.au/R/help/05/06/5829.html
Mark Difford wrote:
>
> Hi Stephen,
>
> See packages:
>
> coin
> n
An anova with sites as the independent variable, you mean?
My suggestion: Forget formal inference and multiple testing (Tukey HSD),
order the sites (i.e. their levels with site as a factor) "appropriately,"
which means what you decide it should on the basis of the site
characteristics, locations,
Hi Stephen,
See packages:
coin
nparcomp
npmc
There is also kruskalmc() in package pgirmess
Regards, Mark.
stephen sefick wrote:
>
> I have insect data from twelve sites and like most environmental data
> it is mostly non-normal. I would like to perform an anova and a means
> separation lik
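A base-R sketch of the same idea on simulated data: a Kruskal-Wallis test followed by pairwise Wilcoxon comparisons with a multiplicity adjustment (the packages listed above provide more principled simultaneous procedures).
set.seed(1)
d <- data.frame(abund = rpois(60, 10),
                site  = gl(12, 5, labels = paste("S", 1:12, sep = "")))
kruskal.test(abund ~ site, data = d)
pairwise.wilcox.test(d$abund, d$site, p.adjust.method = "holm", exact = FALSE)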
I have insect data from twelve sites and like most environmental data
it is mostly non-normal. I would like to perform an anova and a means
separation like Tukey's HSD in a nonparametric sense (on some sort of
central tendency measure - median?). I am searching around at this
time on the internet
Why Model II Regression? My experience is that for purposes of
prediction, the difference between Model I and Model II fits can be
quite significant, mostly, of course, near the extremes of the predictor
variable(s). There are several approaches and most give pretty similar
model parameters but mo
Hi Martin,
Sorry for the late reply. I realize this might now be straying too
far from r-help, if there is a better forum for this topic (R use
with Hadoop) please let me know.
I agree it would indeed be great to leverage Hadoop via R syntax or R
itself. A first step is figuring out ho
Hi Dylan,
>> While this topic is fresh, are there any compelling reasons to use Model
>> II
>> regression?
The fact that it is the type of regression used in principal component
analysis makes it a compelling method. Compelling reason? It is used to take
account of measurement errors in both y
On Fri, 29 Aug 2008, Giovanni Petris wrote:
You can cut execution time by a factor 2 simply using the fact that the
double summation is symmetric in the indices j and k:
2 * sum(sapply(1:(m-1), function(k){sum(sapply((k+1):m,
function(j){x[k]*x[j]*dnorm((mu[j]+mu[k])/sqrt(sig[k]+sig[j]))/sqr
Hi Hadley,
There is also locfit, which is very highly regarded by some authorities
(e.g. Hastie, Tibs, and Friedman).
Cheers, Mark.
hadley wrote:
>
> Hi all,
>
> Do any packages implement density estimation in a modelling framework?
> I want to be able to do something like:
>
> dmodel <- d
On Friday 29 August 2008, Mark Difford wrote:
> Hi Danilo,
>
> >> I need to do a model II linear regression, but I could not find out
> >> how!!
>
> The smatr package does so-called model II (major axis) regression.
>
> Regards, Mark.
While this topic is fresh, are there any compelling reasons to
It's not clear what the setup is, but in the zoo package:
- zoo and zooreg can create time series
- merge.zoo and cbind.zoo can merge time series
- as.ts.zoo can convert a series to ts filling in missing times
- as.zoo.ts can convert a ts series back to zoo
- coredata and time can pick out the data a
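A minimal sketch of that conversion, with made-up values at days 100, 230 and 360 as in the question further down:
library(zoo)
obs  <- zoo(c(1.2, 3.4, 5.6), order.by = c(100, 230, 360))
full <- merge(obs, zoo(, 1:365))   # regular index 1:365, NA where data are missing
full.ts <- as.ts(full)             # plain ts object over days 1 to 365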
> "hw" == hadley wickham <[EMAIL PROTECTED]>
> on Fri, 29 Aug 2008 14:03:45 -0500 writes:
hw> Hi all, Do any packages implement density estimation in
hw> a modelling framework? I want to be able to do
hw> something like:
hw> dmodel <- density(~ a + b, data = mydata)
Greetings,
Is there a way to control the number of digits after the decimal in linear
regression output produced by the memisc package?
I have tried the following code, but it does not work:
fm <- lm(y ~ X)
mtable(fm, digits=9)
The default seems to be 3 digits after the decimal.
Also, is there a
I have a bunch of lists that are essentially time-series with the unit of time
being 'day'. So I naturally want to generate a time-series from 1:365. I was
wondering if there is a nifty 'R' trick to turn a list with missing data (the
list may contain values at 100, 230, and 360) into a time seri
Hi all,
Do any packages implement density estimation in a modelling framework?
I want to be able to do something like:
dmodel <- density(~ a + b, data = mydata)
predict(dmodel, newdata)
This isn't how sm or KernSmooth or base density estimation works. Are
there other packages that do density e
Hi Danilo,
>> I need to do a model II linear regression, but I could not find out how!!
The smatr package does so-called model II (major axis) regression.
Regards, Mark.
Danilo Muniz wrote:
>
> I need to do a model II linear regression, but I could not find out how!!
>
> I tryed to use the
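A minimal sketch on simulated data, assuming the sma() interface of current smatr versions (older versions exposed line.cis() and slope.test() instead):
library(smatr)
set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- 2 * d$x + rnorm(50)
ma.fit <- sma(y ~ x, data = d, method = "MA")   # major-axis ("model II") fit
ma.fit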
You can cut execution time by a factor 2 simply using the fact that the
double summation is symmetric in the indices j and k:
2 * sum(sapply(1:(m-1), function(k){sum(sapply((k+1):m,
function(j){x[k]*x[j]*dnorm((mu[j]+mu[k])/sqrt(sig[k]+sig[j]))/sqrt(sig[k]+sig[j])}))}))
+ sum(x^2*dnorm((2*mu
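A self-contained version of the symmetric computation, with made-up values for m, x, mu and sig and the diagonal terms added back separately:
set.seed(1)
m   <- 5
x   <- runif(m); mu <- rnorm(m); sig <- runif(m, 1, 2)
off_diag <- 2 * sum(sapply(1:(m - 1), function(k) {
  sum(sapply((k + 1):m, function(j) {
    x[k] * x[j] * dnorm((mu[j] + mu[k]) / sqrt(sig[k] + sig[j])) / sqrt(sig[k] + sig[j])
  }))
}))
diag_term <- sum(x^2 * dnorm(2 * mu / sqrt(2 * sig)) / sqrt(2 * sig))
off_diag + diag_term   # equals the naive double sum over all j and k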
Dear Danilo:
Here is one approach with the formal reference being:
Norman R. Draper and Yonghong (Fred) Yang, "Generalization of the geometric
mean functional relationship," Computational Statistics & Data Analysis 23
(1997), 355-372.
Department of Statistics, 12
On Fri, 2008-08-29 at 15:37 -0200, Danilo Muniz wrote:
> I need to do a model II linear regression, but I could not find out how!!
>
> I tried to use the lm function, but I did not discover how to specify the
> model (type I or type II) to the function... could you help me?
Jari Oksanen and Pie
Although having Doug comment directly would be better, I think it's fair to
say on the basis of his many previous posts on exactly this issue that it's
actually a bit more problematic than Greg may have indicated. It's not that
the code has not yet been written -- it's that Doug Bates, who knows as
I need to do a model II linear regression, but I could not find out how!!
I tried to use the lm function, but I did not discover how to specify the
model (type I or type II) to the function... could you help me?
--
Danilo Muniz
[Gruingas Abdiel]
I read this very brief chapter, and don't see how this would address the
issues I raise. Can you provide any further hints? Sorry, I may be missing
something obvious.
-- DC
On Fri, Aug 29, 2008 at 4:07 AM, Dieter Menne
<[EMAIL PROTECTED]>wrote:
> D Chaws gmail.com> writes:
>
> > Say, for inst
On Thu, 28 Aug 2008, Farley, Robert wrote:
I'm feeling like I just don't get it. My attempt at rake now fails
with:
Error in postStratify.survey.design(design, strata[[i]],
population.margins[[i]], :
Stratifying variables don't match
Ah. Now we have an easy one to fix. This means that the
On Friday 29 August 2008 16:54:26 Peter Dalgaard wrote:
> Nose Nada wrote:
> > Hello there!
> >
> > I'm running R 2.7.1 in a "x86_64-redhat-linux-gnu" platform. While some
> > packages like "distrib" installed smoothly, I'm having problems to
> > install "nlme" and "lattice". I have tried both "i
Hello,
I'm a graduate student in Genetics, who has just started working with R. I
have been trying to do a k-means clustering of an expression data
compilation, which has lots of NA values in it. As suggested in a couple of
earlier posts, I tried using na.omit() and the MICE imputation algor
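A minimal sketch on a simulated matrix: replace each NA with its column mean and then run kmeans(). This is a crude stand-in for na.omit() or MICE, shown only for the mechanics.
set.seed(1)
expr <- matrix(rnorm(200), nrow = 20)
expr[sample(length(expr), 15)] <- NA
col_means <- colMeans(expr, na.rm = TRUE)
for (j in seq_len(ncol(expr)))
  expr[is.na(expr[, j]), j] <- col_means[j]   # mean-impute column by column
km <- kmeans(expr, centers = 3)
table(km$cluster)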
Thanks very much Greg,
Actually, that was every bit as helpful as I had hoped for, possibly
even a little more so! The response was very quick, for which I am
grateful, and not only did I understand it, but it also succeeded in
reassuring me that I hadn't just grasped the wrong end of the stick. I
Nose Nada wrote:
> Hello there!
>
> I'm running R 2.7.1 in a "x86_64-redhat-linux-gnu" platform. While some
> packages like "distrib" installed smoothly, I'm having problems to install
> "nlme" and "lattice". I have tried both "install.packages()" and R CMD
> INSTALL. For example,
>
[etc]
Red
Hello there!
I'm running R 2.7.1 on an "x86_64-redhat-linux-gnu" platform. While some
packages like "distrib" installed smoothly, I'm having problems installing
"nlme" and "lattice". I have tried both "install.packages()" and R CMD
INSTALL. For example,
[EMAIL PROTECTED] Download]# R CMD INSTAL
The answer to your question gets a bit into the philosophy of programming and
data storage. Is the order of levels of a factor a property of the plot? Or a
property of the factor/data itself?
Some programs see this as a property of the plot, so you specify the order at
the time you create th
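A minimal sketch of the second view: make the reversed ordering a property of the factor itself, so plot() and most other functions pick it up automatically.
x <- factor(c("a", "c", "g", "z", "a", "g"))
x_rev <- factor(x, levels = rev(levels(x)))   # levels become z, g, c, a
plot(x_rev)                                   # bars drawn in reverse alphanumeric order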
Hi everyone. My problem is about multivariate analysis. Given multivariate
data X of dimension n by p, where n is the number of observations and p is the
number of variables, I want a sample of size p+1 whose covariance matrix has
the minimum product of eigenvalues (i.e. the minimum determinant)
The key line in the error message is: "Update not yet written".
The lme4 package and the functions in it are a work in progress. Dr. Bates is
doing a great job of getting parts done and is making the finished parts
available for people to use, test, and comment on, but he only has so much
time
I'm trying to create a graph using plot() with an axis on which I
essentially want to plot the categories in reverse alphanumeric
order - the opposite of the typical R fashion. Is there a function to do
this?
In other words, I have categories "a", "c", "g", "z" which is the
order they'll be plo
I'm interested in the R packages svLab and the recently announced
denstrip.
For svLab, I've seen a PDF from 2003 describing it, and at least one
paper or description having used it. I could not find it in the
download list of packages within R or on CRAN, however. Did it get
subsumed or renamed
> "PD" == Peter Dalgaard <[EMAIL PROTECTED]>
> on Fri, 29 Aug 2008 15:52:15 +0200 writes:
PD> Martin Maechler wrote:
>> I strongly agree with your last paragraph,
>> and I have always thought that we should recommend using
>> R-aware editors rather than dump() nowadays
Hello,
Maybe I missed something - most likely .:-(
I create a ggplot and then make some changes to the plot using grid graphics
functions. These changes show up on the display OK, but when I save using
ggsave() the grid changes do not show up. How do I save the plot with these
changes?
Thank
I do not know if this is related, but for some functions to work properly,
cookies, Java, and/or JavaScript need to be enabled. Some of the current
pages do not check for these requirements and warn you; they just don't work properly.
Just a thought of something else to try...
EBo --
Duncan Temple La
Martin Maechler wrote:
> I strongly agree with your last paragraph,
> and I have always thought that we should recommend using
> R-aware editors rather than dump() nowadays ...
> but then I thought that I've been biased at all times, being a
> co-developer of ESS, authoring its M-x ess-fix-miscell
huang min gmail.com> writes:
>
> HI,
>
> I would like to extract the variance components estimation in lme function
> like
>
> a.fit<-lme(distance~age, data=aaa, random=~day/subject)
>
> There should be three variances \sigma_day, \sigma_{day %in% subject } and
> \sigma_e.
>
> I can extract
> "DM" == Duncan Murdoch <[EMAIL PROTECTED]>
> on Fri, 29 Aug 2008 08:36:12 -0400 writes:
DM> On 28/08/2008 10:46 AM, Marie Pierre Sylvestre wrote:
>> Dear R users,
>>
>> I am currently writing a R package and to do so I am following the
>> guidelines in manual 'Wr
Hello,
Firstly let me explain that the nature of what I want to do is actually
beyond my statistical knowledge, having only taken a second year
university stats course last year. Therefore I may have missed the
statistical essence of what I want to do, quite apart from lacking the ability
to do it in R.
On 28/08/2008 10:46 AM, Marie Pierre Sylvestre wrote:
Dear R users,
I am currently writing a R package and to do so I am following the
guidelines in manual 'Writing R extensions'.
In Section 3.1, it is suggested to tidy up the code using a file
containing the following:
options(keep.source = FA
Hi,
I am having problems trying to assess the significance of random terms
in a generalized linear mixed model using lme4 package. The model
describes bird species richness R along roads (offset by log length of
road log_length) as a function of fixed effects Shrub (%shrub cover) and
Width (width
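One common way to gauge a random term is a likelihood-ratio comparison of fits with and without it, remembering that the null hypothesis sits on the boundary so the naive p-value is conservative. A minimal sketch on simulated data (names echo the description above; modern lme4 syntax):
library(lme4)
set.seed(1)
d <- data.frame(R = rpois(100, 5),
                Shrub = runif(100), Width = runif(100, 3, 10),
                road = gl(20, 5), log_length = log(runif(100, 1, 5)))
m1 <- glmer(R ~ Shrub + Width + offset(log_length) + (1 | road),
            family = poisson, data = d)
m0 <- glm(R ~ Shrub + Width + offset(log_length), family = poisson, data = d)
lr <- as.numeric(2 * (logLik(m1) - logLik(m0)))
pchisq(lr, df = 1, lower.tail = FALSE)   # halving this p-value is a common boundary correction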
Thanks Peter, it's a good solution.
Searching with RSiteSearch I found a similar solution and wrote a function to
obtain the mode. That function is as follows.
mode <- function(data) {
# Function for mode estimation of a continuous variable
# Kernel density estimation by Ted Harding & Dougla
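The function above is cut off in the archive; a minimal density-based sketch of the same idea (named dmode so it does not mask base R's mode()):
dmode <- function(x, ...) {
  d <- density(x, ...)   # kernel density estimate
  d$x[which.max(d$y)]    # x value at the highest point of the estimate
}
dmode(rnorm(1000))       # roughly 0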
Toby Marthews wrote:
> Dear R-help,
>
> Here's a simple example of nonlinear curve fitting where nls seems to get
> the answer wrong on a very simple exponential fit (my R version 2.7.2).
>
> Look at this code below for a very basic curve fit using nls to fit to (a)
> a logarithmic and (b) an expon
Oh you are right Peter, thanks
On Fri, Aug 29, 2008 at 8:37 AM, Peter Dalgaard <[EMAIL PROTECTED]>wrote:
> Henrique Dallazuanna wrote:
> > Try:
> >
> > as.numeric(names(which.max(table(x
> >
> > On Fri, Aug 29, 2008 at 3:13 AM, Manuel Ramon <[EMAIL PROTECTED]> wrote:
> >
>
> You missed the wo
Henrique Dallazuanna wrote:
> Try:
>
> as.numeric(names(which.max(table(x
>
> On Fri, Aug 29, 2008 at 3:13 AM, Manuel Ramon <[EMAIL PROTECTED]> wrote:
>
You missed the word "continuous" there...
> x <- rnorm(10)
> table(x)
x
-1.64244637710945 -0.836534097622312 -0.810292826933485 -0.721008
Try:
as.numeric(names(which.max(table(x))))
On Fri, Aug 29, 2008 at 3:13 AM, Manuel Ramon <[EMAIL PROTECTED]> wrote:
>
> Is there any R funtion that allow the estimation of mode in a continuous
> variable?
> Thank you
Van Patten, Isaac T wrote:
>
> Is there an R function to generate a radar or spider graph from a table
> - e.g.radar(table(x)) or some such?
>
And you may find this a useful site to bookmark...
http://addictedtor.free.fr/graphiques/
Neil
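A minimal sketch using base R's stars(), which can draw simple radar/star plots from a matrix (the row and column names here are made up); fmsb::radarchart() is a more polished alternative.
m <- rbind(A = c(3, 5, 2, 4), B = c(4, 2, 5, 1))
colnames(m) <- c("speed", "size", "cost", "range")
stars(m, scale = TRUE, main = "Quick radar-style plot")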
Dear R-help,
Chi Square Test for Goodness of Fit
I have discrete data
as given below (R script)
No_of_Frauds<-c(1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2,1,2,2,2,1,1,2,1,1,1,1,4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,5,1,2,1,1,1,1,1,1,1,3,2,1,1,1,2,1,1,2,1,1,1,1,1,
There was a very informative thread on this list only a week or so ago (I
started it!). If you're reading from a table or a csv file, see the colClasses
argument. Otherwise see ?as.numeric.
Robin Williams
Met Office summer intern - Health Forecasting
[EMAIL PROTECTED]
-Original Message
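A self-contained sketch of both fixes (toy data; the names ndata1 and nd1 echo the question below):
txt <- "a,b\n1,2.5\n3,4.5"
ndata1 <- read.csv(textConnection(txt), colClasses = c("numeric", "numeric"))
str(ndata1)                          # both columns read as numeric
f <- factor(c("1.5", "2.5"))         # a value that came in as a factor
nd1 <- as.numeric(as.character(f))   # 1.5 2.5, not the underlying level codes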
I am reading numeric data as below, but the problem is that the objects
ndata1 and nd1 contain characters instead of numeric values. I want to
keep them numeric. Why has the type changed from numeric to
character, and how can I avoid this problem?
help(as.numeric)
Your variable rdata1 has probably chara
On my Windows XP machine, if you uninstall R-2.7.1, the libraries that
don't come with the original installation remain in the folder C:\Program
Files\R\R-2.7.1\library.
So I first uninstall R-2.7.1, then install R-2.7.2, copy the remaining
packages from the R-2.7.1 folder to the R-2.7.2 librar
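After copying the old library folder across, a common follow-up (a sketch, not the only way) is to rebuild anything compiled under the old version:
update.packages(checkBuilt = TRUE, ask = FALSE)   # re-fetch packages built under the old R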
Chi Square Test for Goodness of Fit
I have discrete data
as given below (R script)
No_of_Frauds<-c(1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2,1,2,2,2,1,1,2,1,1,1,1,4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,5,1,2,1,1,1,1,1,1,1,3,2,1,1,1,2,1,1,2,1,1,1,1,1,2,1,3,1,2,1,2,14,2
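A minimal sketch of one way to run the test, with a shortened, made-up count vector (the full vector above is truncated): tabulate the counts and compare them to a fitted Poisson via chisq.test().
counts <- c(1, 1, 2, 1, 1, 3, 1, 2, 1, 1, 4, 1, 2, 1, 1)   # made-up, much shorter than the real data
obs    <- table(counts)
lambda <- mean(counts)
p      <- dpois(as.numeric(names(obs)), lambda)
chisq.test(obs, p = p / sum(p))   # note: df is not corrected for estimating lambda, and expected counts are small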
Dear R-help,
Here's a simple example of nonlinear curve fitting where nls seems to get
the answer wrong on a very simple exponential fit (my R version 2.7.2).
Look at this code below for a very basic curve fit using nls to fit to (a)
a logarithmic and (b) an exponential curve. I did the fits usin
Hi
Many thanks. I got another post to this list that gives an example of what
I need exactly, that is, require and ::
Cheers
Ed
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Ben Bolker
Sent: Thursday, August 28, 2008 11:44 PM
To: [EMAIL PROTECTED]
Su
Hi Steve
That is exactly what I want. Many thanks.
Ed
-Original Message-
From: Steven McKinney [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 28, 2008 11:48 PM
To: Eduardo M. A. M.Mendes; r-help@r-project.org
Subject: RE: [R] Newbie: Examples on functions callling a library etc.
Hi
D Chaws gmail.com> writes:
> Say, for instance you want to model growth in pituitary distance as a
> function of age in the Orthodont dataset.
>
> fm1 = lme(distance ~ I(age-8), random = ~ 1 + I(age-8) | Subject, data =
> Orthodont)
>
> You notice that there is substantial variability in the i
Hi,
as mentioned in my previous posting, I run R on a linux machine. So a
possible function for printing (in linux) could look like this:
copy2lpr<-function(..., PRINTER="lpr") {
LPR<-pipe(PRINTER,"w")
capture.output(..., file=LPR)
close(LPR)
}
This seems to work... and allows the user to c
Thanx guys,
practical and effective solutions.
Gianandrea
glaporta wrote:
>
> Hi,
> sqldf is a fantastic package, but when the SELECT procedure runs unused
> levels remain in the output. I tried with the drop function, but without
> success. Do you have any suggestions?
> Thanx, Gianandrea
>
>
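A minimal sketch of the usual fix: re-apply factor() (or droplevels() in recent R versions) to the columns coming back from sqldf to drop the unused levels.
library(sqldf)
DF  <- data.frame(g = factor(c("a", "b", "c")), v = 1:3)
out <- sqldf("select * from DF where g <> 'b'")
out$g <- factor(out$g)   # re-level; droplevels(out) does the same in recent R
levels(out$g)            # "a" "c"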
In my situation under Windows XP, after setting the environment
variable R_LIBS, neither Rgui.exe nor Rterm.exe started from cmd.exe
picks up R_LIBS.
After I enter the R interface, I find .libPaths() can add a new location
for installed packages:
> .libPaths("d:/progra~1/R/R
huang min gmail.com> writes:
> I would like to extract the variance components estimation in lme function
> like
>
> a.fit<-lme(distance~age, data=aaa, random=~day/subject)
>
Try
VarCorr(a.fit)
Using lme for the type of problems you have is just fine; in many respects, lme4
currently is a wel
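A minimal sketch with the Orthodont data shipped with nlme (the grouping here is Subject rather than the poster's day/subject nesting), pulling the numbers out of the VarCorr() table:
library(nlme)
fit <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)
vc  <- VarCorr(fit)
vc                            # character matrix with Variance and StdDev columns
as.numeric(vc[, "Variance"])  # the variance components as numbers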
Is there any R function that allows estimation of the mode of a continuous
variable?
Thank you
Dear R users...
I made the R-code for this double summation computation
http://www.nabble.com/file/p19213599/doublesum.jpg
-
Here is my code..
sum(sapply(1:m, function(k){sum(sapply(1:m,
function(j){x[k]*x[j]*dnorm((mu[j]+mu[k])/sqrt(sig[k]+si
Dear R users...
I made the R-code for this double summation computation
http://www.nabble.com/file/p19213463/doublesum.jpg
-
Here is my code..
sum(sapply(1:m, function(k){sum(sapply(1:m,
function(j){x[k]*x[j]*dnorm((mu[j]+mu[k])/sqrt(sig[k]+sig