P.S. pgfSweave does include "caching" of figures as a feature. See the
vignette for details.
cls59 wrote:
>
> Currently pgfSweave is only available from rforge.net and is very much in
> beta.
>
Well, that said, you can install the latest build using:
install.packages("pgfSweave", repos="http://R-Forge.R-project.org")
then for the documentation:
?pgfSweave
AND
vignette("pgf
Hi there,
I have a data frame with more than 200k columns. How can I get the median of each
column quickly? mapply is the fastest function I know of for this, but it is still
not satisfactory.
It seems the "median" function in R calculates the median via "sort" and "mean". I am
wondering if there is another funct
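A minimal sketch, not from the original thread: a data frame is a list of
columns, so vapply() visits each column once; the data frame here is only a
small stand-in for the 200k-column case, and matrixStats::colMedians() (an
add-on package) is mentioned only as a possible further speed-up.

df <- as.data.frame(matrix(rnorm(50 * 1000), nrow = 50))  # small stand-in

med1 <- vapply(df, median, numeric(1), na.rm = TRUE)      # one pass per column

m <- as.matrix(df)              # all-numeric data can be handled as a matrix
med2 <- apply(m, 2, median)     # same result; matrixStats::colMedians(m) is
                                # usually faster still if that package is available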
Hey Thomas, thanks mate.
I knew there had to be an easy answer. Thanks for coming to my rescue.
Regards
Andrew McFadden MVS BVSc | Incursion Investigator (animals),
Investigation and Diagnostic Centre | Biosecurity New Zealand
Ministry of Agriculture and Forestry | 66 Ward St, Wallaceville | P
On Tue, 14 Apr 2009, Andrew McFadden wrote:
Hi all
I know this must be an easy one so sorry for the trouble. I would like
to select a list of variables within a factor.
The following example is given in the help for subset:
subset(airquality, Temp > 80, select = c(Ozone, Temp))
So how do I select
Hi all
I know this must be an easy one so sorry for the trouble. I would like
to select a list of variables within a factor.
The following example is given in the help for subset:
subset(airquality, Temp > 80, select = c(Ozone, Temp))
So how do I select all temperatures of 90 and 80, i.e. Temp = c(80,9
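A hedged sketch of one way to do this (not quoted from the reply): %in%
tests membership in a set of values inside subset().

subset(airquality, Temp %in% c(80, 90), select = c(Ozone, Temp))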
Dear R People:
At one time, there were packages called fractal and Fractal,
respectively, which had different functions.
I can't seem to find Fractal any more. Does it still exist somewhere, please?
Thanks in advance for any help!
Sincerely,
Erin
--
Erin Hodgess
Associate Professor
Departme
Note that the output of by() is a matrix, but with some extra
attributes added to it.
Since you didn't supply any data I made up some that might
resemble yours.
> set.seed(1)
> re<-list(meta.sales.lkm=data.frame(pc=runif(40), sales=rpois(40,3),
size=sample(c("small","medium","large"),size=40
Hi R-users,
I would like to use the jacobian function from the numDeriv package. If I have more
than one parameter, how do I modify it?
This is the example given in the package:
func2 <- function(x) c(sin(x), cos(x))
x <- (0:1)*2*pi
jacobian(func2, x)
Can I do the following:
z <- c(x,y)
func
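A hedged sketch, assuming the numDeriv package: jacobian() takes a single
vector argument, so several parameters are passed as one vector and unpacked
inside the function; func3 below is made up for illustration, not the
poster's function.

library(numDeriv)

func3 <- function(p) c(sin(p[1]) * p[2],    # hypothetical function of p = c(p1, p2)
                       cos(p[1]) + p[2]^2)

jacobian(func3, c(pi / 4, 2))               # 2 x 2 Jacobian evaluated at (pi/4, 2)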
Also see the help file for the wtd.mean function in the Hmisc package,
which has an example using the summarize function to do this.
Frank
Mike Lawrence wrote:
Sounds like a job for plyr: http://had.co.nz/plyr
On Mon, Apr 13, 2009 at 7:56 PM, Dong H. Oh wrote:
Hi expeRts,
I would like to
Paul Johnson-11 wrote:
>
> Does anybody have a workable system to run an Rnw document through
> R-Sweave when necessary, but to just run it through LaTeX if no new R
> calculations are needed? I.e., the figures already exist, I do not
> need R to do more work for me, so I send the document stra
It would be nice if you provided some sample data. You can change the
date variable to a common year (subtract the difference) and then plot
on a common axis.
On Mon, Apr 13, 2009 at 5:34 PM, Max Rausch wrote:
> Hello,
>
> I have four different time series from the same time period within a year
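A hedged sketch of that suggestion, with made-up weekly data: map every date
onto one arbitrary reference year, then overlay the series with lines().

d1 <- seq(as.Date("2007-02-01"), as.Date("2007-08-01"), by = "week")
d2 <- seq(as.Date("2008-02-01"), as.Date("2008-08-01"), by = "week")
y1 <- cumsum(rnorm(length(d1)))
y2 <- cumsum(rnorm(length(d2)))

common <- function(d) as.Date(format(d, "2000-%m-%d"))  # force a common (leap) year

plot(common(d1), y1, type = "l", xlab = "Month", ylab = "Value")
lines(common(d2), y2, col = 2)
legend("topleft", legend = c("2007", "2008"), col = 1:2, lty = 1)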
Sounds like a job for plyr: http://had.co.nz/plyr
On Mon, Apr 13, 2009 at 7:56 PM, Dong H. Oh wrote:
> Hi expeRts,
>
> I would like to calculate weighted mean by two factors.
>
> My code is as follows:
>
> R> tmp <- by(re$meta.sales.lkm[, c("pc", "sales")],
> re$meta.sales.l
Hi expeRts,
I would like to calculate weighted mean by two factors.
My code is as follows:
R> tmp <- by(re$meta.sales.lkm[, c("pc", "sales")],
re$meta.sales.lkm[, c("size", "yr")], function(x)
weighted.mean(x[,1], x[,2]))
The result is as follows:
R>
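A hedged sketch with made-up data (the poster's re object is not shown in
full): a weighted mean per combination of two factors can also be computed by
tapply()-ing over row indices.

set.seed(1)
d <- data.frame(pc    = runif(12),
                sales = rpois(12, 3) + 1,
                size  = rep(c("small", "large"), 6),
                yr    = rep(2007:2008, each = 6))

## weighted mean of pc, weighted by sales, for every size x yr cell
with(d, tapply(seq_along(pc), list(size, yr),
               function(i) weighted.mean(pc[i], sales[i])))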
Apologies: that should have been sum(residual^2)!
> -Original Message-
> From: Dimitri Liakhovitski [mailto:ld7...@gmail.com]
> Sent: Monday, April 13, 2009 4:35 PM
> To: Liaw, Andy
> Cc: R-Help List
> Subject: Re: [R] Random Forests: Question about R^2
>
> Andy,
> thank you very much!
Hello,
I have four different time series from the same time period within a
year (February through August) but they are from different years. I can
not figure out how to plot them all (so they will be overlayed instead
of plotted on after another) on the same graph with the x-axis
correspond
Hello,
I cannot figure out how to run R using an XTerm external console on a
Mac running OS X. I have spent around 8 hours trying to find any
tutorial or step-by-step guide, with no success.
If anyone has been able to successfully use this functionality,
please help me out with some cl
Thanks Tirthankar, that did the trick.
Here's the solution to my problem using the "bivpois" package:
rm(list = ls())
library(bivpois)
y1 = c(1,2,3,4,4,3)
y2 = c(0,2,0,2,3,5)
x1 = c(2,3,4,8,1,3)
x2 = c(3,5,6,7,8,9)
d = data.frame(cbind(y1, y2, x1, x2))
eq1 = y1 ~ x1 + x2
eq2 = y2 ~ x1 + x2
out = lm.
Dear R community,
I have some questions regarding the analysis of a zero-inflated count dataset
and repeated measures design.
The dataset is arranged as follows :
Unit of analysis: point - these are points where birds were counted during a
certain amount of time. In total we have about 175 points
Andy,
thank you very much!
One clarification question:
If MSE = sum(residuals) / n, then
in the formula (1 - mse / Var(y)) - shouldn't one square mse before
dividing by variance?
Dimitri
On Mon, Apr 13, 2009 at 10:52 AM, Liaw, Andy wrote:
> MSE is the mean squared residuals. For the training
On 12 April 2009 at 21:00, Peter Kraglund Jacobsen wrote:
> One variable contains values (1.30 - one hour and thirty minutes, 1.2
> (which is supposed to be 1.20 - one hour and twenty minutes)). I would
> like to convert to a minute variable so 1.2 is converted to 80
> minutes. How?
You could make
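A hedged sketch, since the reply above is cut off: force two decimal places,
split on the decimal point, and recombine as minutes.

x <- c(1.30, 1.2)                        # 1h30m and (intended) 1h20m

txt   <- format(x, nsmall = 2)           # "1.30" "1.20" -- keeps the trailing zero
parts <- do.call(rbind, strsplit(txt, ".", fixed = TRUE))

as.numeric(parts[, 1]) * 60 + as.numeric(parts[, 2])   # 90 80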
>>> 04/13/09 4:01 AM >>>
>The UK uses metres for most lengths but miles for road distances -
>the worst of all worlds. They even measure fuel performance in
>litres per 100 *miles*, if you can believe it.
No, we don't. We use miles per (Imperial) gallon.
Only Euronuts would combine litres and
Another issue with units is 'recognising' things. For example, Hz are
also s^-1 ...
S
>>> Stavros Macrakis 04/13/09 9:38 AM >>>
On Sun, Apr 12, 2009 at 11:01 PM, wrote:
> It is, however, an interesting problem and there are the tools there
to handle it. Basically you need to create a class fo
try this:
> x
V1 V2 V3
1 500 320 0
2 510 310 0
3 520 310 0
4 520 320 0
> y
V1 V2 V3
1 500 320 1
2 500 320 1
3 520 310 1
4 520 300 1
> z <- merge(x, y, by=c("V1", "V2"), all.x=TRUE)
> t(sapply(split(z, z
Add this to your startup so that all errors are caught and the
statement and calling stack are printed out. This aids a lot in
debugging, especially when using scripts:
options(error=utils::recover)
On Mon, Apr 13, 2009 at 3:09 PM, SHANE MILLER, BLOOMBERG/ 731 LEXIN
wrote:
>> source("C:\\Docum
Try
source("...whatever...", echo = TRUE, max = )
On Mon, Apr 13, 2009 at 3:09 PM, SHANE MILLER, BLOOMBERG/ 731 LEXIN
wrote:
>> source("C:\\Documents and Settings\\smiller53\\Desktop\\perf.r")
> Error in eval.with.vis(expr, envir, enclos) : element 1 is empty;
> the part of the args list
> source("C:\\Documents and Settings\\smiller53\\Desktop\\perf.r")
Error in eval.with.vis(expr, envir, enclos) : element 1 is empty;
the part of the args list of 'c' being evaluated was:
perf.r is a long script. How can I determine which line the error occurred on?
TIA, Shane
This is what I needed! Thank you.
> -Original Message-
> From: Jorge Ivan Velez [mailto:jorgeivanve...@gmail.com]
> Sent: Monday, April 13, 2009 12:50 PM
> To: Dan Dube
> Cc: r-help@r-project.org
> Subject: Re: [R] tapply output as a dataframe
>
>
> Dear Dan,
>
> Try this:
>
> do.ca
On 4/13/2009 2:07 PM, Paul Johnson wrote:
Does anybody have a workable system to run an Rnw document through
R-Sweave when necessary, but to just run it through LaTeX if no new R
calculations are needed? I.e., the figures already exist, I do not
need R to do more work for me, so I send the docum
Does anybody have a workable system to run an Rnw document through
R-Sweave when necessary, but to just run it through LaTeX if no new R
calculations are needed? I.e., the figures already exist, I do not
need R to do more work for me, so I send the document straight to
LaTeX.
I want to leave open
Mitchell Maltenfort wrote:
Just got what seems to be the first spam for a stats book I ever saw.
New tricks from CRC marketing? They are known to be among the more
aggressive players in the market place. (Don't get me wrong, some rather
nice people have been publishing with them.)
At least
On Mon, Apr 13, 2009 at 5:15 AM, Peter Dalgaard
wrote:
> Stavros Macrakis wrote:
>> ...c of two time differences is currently a numeric vector,
>> losing its units (hours, days, etc.) completely.
>
> That's actually a generic feature/issue of c(). ...
> There is some potential for redesigning thi
> From: Tan, Richard [mailto:r...@panagora.com]
> Sent: Monday, April 13, 2009 10:23 AM
> To: William Dunlap
> Cc: r-help@r-project.org
> Subject: RE: [R] toupper does not work in sub + regex
>
> Thanks, Bill! One more question, how do I get SviRaw, i.e., just
> uppercase the 1st char and keep e
Dear Rajat,
Just change this
data82$contact <- factor(contact,labels=c("low","high"))
for this
data82$contact <- factor(data82$contact,labels=c("low","high"))
HTH,
Jorge
On Mon, Apr 13, 2009 at 12:24 PM, Rajat wrote:
> I am solving the following question. I want to label the data. I used
Thanks, Bill! One more question, how do I get SviRaw, i.e., just
uppercase the 1st char and keep everything else the same?
sub("q_([a-z])([a-zA-Z]*)", "\\U\\1 \\2", "q_sviRaw",perl=TRUE)
Did not work.
Thank you!
Richard
-Original Message-
From: William Dunlap [mailto:wdun...@tibco.c
Have a look at function rcor.test() from package ltm, e.g.,
library(ltm)
mat <- matrix(rnorm(1000), 100, 10, dimnames = list(NULL, LETTERS[1:10]))
rcor.test(mat)
rcor.test(mat, method = "kendall")
rcor.test(mat, method = "spearman")
I hope it helps.
Best,
Dimitris
>
> Hello,
> I have a data f
You could also use \\U and \\L in the replacement
with perl=TRUE. \\U "converts the rest of the replacement
to upper case" and \\L converts to lowercase. (By
"replacement" it means the parts of the replacement
that arise from parenthesized subpatterns in the pattern
argument, not the replacement a
Hi R-users,
I wrote some simple code to check how often the same numbers in y occur in
x. For example, 500 320 occurs two times.
But the code with the loop is extremely slow: x has 6100 lines and y
sometimes more than 5 lines.
Is there an alternative way to do this in R?
thanks.
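A hedged, loop-free sketch using small made-up data in the same shape: paste
the key columns together and count matches of each key of x in y.

x <- data.frame(V1 = c(500, 510, 520, 520), V2 = c(320, 310, 310, 320))
y <- data.frame(V1 = c(500, 500, 520, 520), V2 = c(320, 320, 310, 300))

key.x <- paste(x$V1, x$V2)
key.y <- paste(y$V1, y$V2)
counts <- vapply(key.x, function(k) sum(key.y == k), integer(1), USE.NAMES = FALSE)
cbind(x, count = counts)   # e.g. 500 320 occurs twice in y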
> do.call(rbind,a)
[,1] [,2] [,3] [,4]
1 -0.7871502 -0.4437714 0.4011135 -0.2626129
2 -0.9546515 0.2210001 0.816 0.1245766
3 -0.5389725 -0.2750984 0.6655951 -0.1873485
4 -0.8176898 -0.1844181 0.4737187 -0.2688996
On Mon, Apr 13, 2009 at 12:41 PM, Dan Dube wrote:
>
Dear Dan,
Try this:
do.call(rbind,a)
HTH,
Jorge
On Mon, Apr 13, 2009 at 12:41 PM, Dan Dube wrote:
> i use tapply and by often, but i always end up banging my head against
> the wall with the output.
>
> is there a simpler way to convert the output of the following tapply to
> a dataframe or
Hi, thanks a lot,
I think you have covered the things I want to do for now, so I will try to
implement them as soon as I can.
<< A finite Fourier series could be the best tool IF the multiple
periodicities are all integer fractions of a common scale.>>
This is certainly true for my repetitive
I use tapply and by often, but I always end up banging my head against
the wall with the output.
Is there a simpler way to convert the output of the following tapply to
a data frame or matrix than what I have here:
# setup data for tapply
dt = data.frame(bucket=rep(1:4,25),val=rnorm(100))
fn = f
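A hedged, self-contained version of that setup (the original fn is cut off
above, so the summary function here is hypothetical), showing the
do.call(rbind, ...) idiom suggested in the replies:

dt <- data.frame(bucket = rep(1:4, 25), val = rnorm(100))

fn <- function(v) c(mean = mean(v), sd = sd(v), n = length(v))  # hypothetical summary

a <- tapply(dt$val, dt$bucket, fn)   # a list-like result, one element per bucket
do.call(rbind, a)                    # collapsed into a 4 x 3 matrix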
Just got what seems to be the first spam for a stats book I ever saw.
It only has one review at amazon, and for all I know the author wrote
it himself.
This might be a decent text, but if so, I'd like to hear it from someone here.
-- Forwarded message --
From: Biostatistics Boo
Now that I have the markers at the weight I want using lex, I'm having
trouble making the key match the markers. Any suggestions? BTW, I'm using
R 2.8.1 on Windows Vista.
Naomi
--
Naomi B. Robbins
NBR
11 Christine Court
Wayne, NJ 07470
Phone: (973) 694-6009
na...@nbr-graphs.com
http:/
On 4/13/2009 10:56 AM, thoeb wrote:
> Hello,
> I have a data frame containing several parameters. I want to investigate
> pair wise correlations between all of the parameters. For doing so I used
> the command cor(data.frame, method="spearman"), the result is a matrix
> giving me the correlation c
sub only handles replacement strings, not replacement functions.
Your code is the same as:
sub("q_([a-z])[a-zA-Z]*", '\\1', "q_sviRaw")
since toupper('\\1') has no alphabetic characters, so it is just literally '\\1', and
the latter is what sub uses.
The gsubfn function in the gsubfn package can deal with rep
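A hedged sketch of that gsubfn approach (assumes the gsubfn package is
installed): with a function as the replacement, the captured group is passed
to the function and the whole match is replaced by its result.

library(gsubfn)
gsubfn("q_([a-z])", toupper, "q_sviRaw")   # "q_s" matched, toupper("s") substituted
## [1] "SviRaw"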
I am solving the following question. I want to label the data. I used the
following code.
> data82 <- read.table(file="/home/rajat/R/8_2_rtg.txt",header=T)
> data82 <- data.frame(data82)
> data82
low_sat med_sat high_sat contact housing
1 65 54 100 1 1
2 130
quantile( dsamp100, 0.05 )
On Mon, Apr 13, 2009 at 10:41 AM, Henry Cooper wrote:
> dsamp100<-coef(100,39.83,5739,2869.1,49.44)
Thanks, Martin. I did not realize that. I have never used Perl-compatible
regexes before, but it seems now I should!
Richard
-Original Message-
From: Martin Morgan [mailto:mtmor...@fhcrc.org]
Sent: Monday, April 13, 2009 12:08 PM
To: Tan, Richard
Subject: Re: [R] toupper does not work in sub + re
On Apr 13, 2009, at 10:54 AM, Brendan Morse wrote:
Hi everyone, I am having a bit of trouble correctly structuring an
equation in R.
Here is the equation
It's really a series of assignments.
Here is what I thought
for(i in 1:numItem)for(x in 1:numCat)
Ptheta[,i,x]<-(exp(-1.70
Hi,
In response to the thread of February, I recently uploaded the package
alphahull, which computes the alpha-shape of a given sample of points
in the plane.
Regards,
Bea
__
Beatriz Pateiro López
Departamento de Estatística e IO
Universidad de Santiago de
Hello all.
I am looking to perform the "Post Hoc Pair-Wise Comparisons for the
Chi-Square Test of Homogeneity of Proportions" (or an equivalent of it),
which is also described here:
http://epm.sagepub.com/cgi/content/abstract/53/4/951
My situation is just a chi-squared test on a 2 by X matrix.
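A hedged sketch, not a reply from this thread: base R's pairwise.prop.test()
runs all pairwise comparisons of proportions with a multiplicity adjustment;
the 2 x 3 table below is made up.

mat <- rbind(success = c(20, 35, 28),   # hypothetical counts, 3 groups
             failure = c(80, 65, 72))

pairwise.prop.test(t(mat), p.adjust.method = "holm")  # successes/failures as columns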
Hi, I don't know what I am doing wrong: toupper does not seem to
work in sub + regex. The following returns 's', not the upper-case
'S' that I expect:
sub("q_([a-z])[a-zA-Z]*",toupper('\\1'),"q_sviRaw")
Can someone tell me what I did wrong?
Thanks,
Richard
Assuming DF is your data frame try this: ftable(DF)
In SQL you can get close with:
sqldf("select X1, X2, X3, sum(X4 == 1) `X4=1`, sum(X4 == 2) `X4=2`
from DF group by X1, X2, X3 order by X1, X2, X3")
On Mon, Apr 13, 2009 at 9:56 AM, Nick Angelou wrote:
>
>
> Gabor Grothendieck wrote:
>>
>> SQL
Brendan
It looks like you're working with the 2PL; what are you trying to
estimate exactly? There are a lot of built-in psychometric functions
that you might consider using.
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of B
Hello,
I have a data frame containing several parameters. I want to investigate
pair wise correlations between all of the parameters. For doing so I used
the command cor(data.frame, method="spearman"), the result is a matrix
giving me the correlation coefficients of each pair, but not the p-value
Hi everyone, I am having a bit of trouble correctly structuring an
equation in R.
Here is the equation
Here is what I thought
for(i in 1:numItem)for(x in 1:numCat)
Ptheta[,i,x]<-(exp(-1.702*a[i]*(theta-b[i,x+1]))
My problem is that I am not sure how to get it to read the equati
Gabor Grothendieck wrote:
>
> SQL has the order by clause.
>
Gabor, thanks for the suggestion. I thought about this but ORDER BY cannot
create the tabular structure that I need. Here is more detail about my
setting:
f1, f2, f3 have unique triplets (each repeating a different number of
times).
I have run the function below:
coef<-function(N,theta,k,alpha,lamda){
omega<-rgamma(N,alpha,lamda)
theta<-rnorm(N,theta,1/(k*omega))
coeff<-1/(sqrt(omega)*theta)
return(coeff)
}
I have been told I have to calculate the fifth percentile for :
dsamp100<-coef(100,39.83,5739,28
MSE is the mean squared residuals. For the training data, the OOB
estimate is used (i.e., residual = data - OOB prediction, MSE =
sum(residuals) / n, OOB prediction is the mean of predictions from all
trees for which the case is OOB). It is _not_ the average OOB MSE of
trees in the forest.
I hop
I really don't understand what you don't understand. Do you know how a
tree forms a prediction? If not, it may be a good idea to learn about
that first. The code runs prediction of each case through all trees in
the forest and that's how the votes are formed.
[For OOB predictions, only predi
But how does it estimate that voting output? How does it get the 85.7% for
all the trees?
Regarding the prediction accuracy: if I have an OOB error of 23.4%, then the
prediction accuracy will be equal to 76.6%, right?
Many thanks,
Chrysanthi.
2009/4/13 Liaw, Andy
> RF forms prediction by voting.
FAQ 7.11:
http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-do-I-convert-factors-to-numeric_003f
On Apr 13, 2009, at 7:01 AM, joewest wrote:
Hi
I am really struggling with changing my factors into continuous
variables.
There is plenty of information on changing continuous to a factor
whic
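The idiom behind the FAQ entry linked above, as a minimal sketch:

f <- factor(c("3.1", "2.7", "3.1"))

as.numeric(levels(f))[f]   # 3.1 2.7 3.1 -- the values
as.numeric(f)              # 2 1 2      -- the internal codes, usually not wanted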
On Apr 13, 2009, at 7:26 AM, Nick Angelou wrote:
Thanks a lot, guys. Gabor's and Mike's suggestion worked. Duncan's
did not do
exactly what I expected (I guess it's the "paste" in Mike's that makes
"table" work as I needed it).
One more question - is there a convenient way to order the gro
RF forms prediction by voting. Note that each row in the output sums to
1. It says 85.7% of the trees classified the first case as "healthy"
and the other 14.3% of the trees "unhealthy". The majority (in
two-class cases like this one) wins, so the prediction is "healthy".
You can take 1 - OOB
do you mean something like this:
f1 <- gl(2, 2, length = 15)
f2 <- gl(3, 2, length = 15)
f3 <- gl(4, 2, length = 15)
f4 <- gl(5, 2, length = 15)
fcomb <- interaction(f1, f2, f3)
table(fcomb, f4)
I hope it helps.
Best,
Dimitris
Nigel Birney wrote:
Hi, I have a dynamic table structure consis
Try this:
> ftable(CO2[1:3])
                  Treatment nonchilled chilled
Plant Type
Qn1   Quebec                         7       0
      Mississippi                    0       0
Qn2   Quebec                         7       0
      Mississippi                    0       0
Qn3   Quebec
The R News article we put out after the first version of the package was
released has examples of doing CV. You can also use the facilities in the
caret package (on CRAN) or the MLInterfaces package (part of Bioconductor, not
on CRAN).
randomForest() itself does not do CV per se, but the OOB es
I'll take a shot.
Let me try to explain the 3rd measure first. A RF model tries to predict an
outcome variable (the classes) from a group of potential predictor variables
(the "x"). If a predictor variable is "important" in making the prediction
accurate, then by messing with it (e.g., giving
Hi, I have a dynamic table structure consisting of N factors (e.g. columns
Factor_1, Factor_2,..Factor_N).
I want to construct a table that has as rows the level combinations of
factors F_1..F_n-1, and as columns the levels of the factor Fn. The cells of
the table would contain the number of cas
SQL has the order by clause.
On Mon, Apr 13, 2009 at 7:26 AM, Nick Angelou wrote:
>
> Thanks a lot, guys. Gabor's and Mike's suggestion worked. Duncan's did not do
> exactly what I expected (I guess it's the "paste" in Mike's that makes
> "table" work as I needed it).
>
> One more question - is t
jjh21 wrote:
Hi,
I am trying to figure out exactly what the bootcov() function in the Design
package is doing within the context of clustered data. From reading the
documentation/source code it appears that using bootcov() with the cluster
argument constructs standard errors by resampling whole
check R FAQ 7.10
Best,
Dimitris
joewest wrote:
Hi
I am really struggling with changing my factors into continuous variables.
There is plenty of information on changing continuous to a factor, which is
the opposite of what I need to do. I would be so grateful for any help.
Thanks
Joe
Dear Experts---Sorry, I need some help again. I need a very fast
estimator for small sample time-series in which the autocoefficient
can be anything between 0 and 2 (i.e., even beyond the unit-root). I
think this means that I will need to run OLS. Of course, this means
that I will run into the H
On Mon, Apr 13, 2009 at 4:15 AM, Peter Dalgaard
wrote:
> Stavros Macrakis wrote:
>
>> It would of course be nice if the existing difftime class could be fit
>> into this, as it is currently pretty much a second-class citizen. For
>> example, c of two time differences is currently a numeric vector
Thanks a lot, guys. Gabor's and Mike's suggestion worked. Duncan's did not do
exactly what I expected (I guess it's the "paste" in Mike's that makes
"table" work as I needed it).
One more question - is there a convenient way to order the group by results
as follows:
As rows: the unique combinati
Hi
I am really struggling with changing my factors into continuous variables.
There is plenty of information on changing continuous to a factor, which is
the opposite of what I need to do. I would be so grateful for any help.
Thanks
Joe
Mike Lawrence wrote:
One way:
g= paste(f1,f2,f3,f4)
table(g)
I'd go for
g <- interaction(f1,f2,f3,f4, drop=TRUE)
table(g)
which is essentially the same thing.
On Mon, Apr 13, 2009 at 7:33 AM, Nick Angelou wrote:
Hi,
I have the following table data:
f1, f2, f3, f4.
I want to compute t
?try
On Mon, Apr 13, 2009 at 5:14 AM, Andreas Wittmann
wrote:
> Dear R useRs,
>
> after searching r-help and r-manuals for about one hour i have the
> following, probably easy question for you.
>
> i have the following R-code, in the file test01.R
>
> #
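A hedged sketch of the ?try suggestion (the question itself is cut off
above): wrap a step in try() so one failure does not abort the rest of the
script.

res <- try(log("not a number"), silent = TRUE)   # this call errors
if (inherits(res, "try-error")) res <- NA        # fall back instead of stopping
res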
One way:
g= paste(f1,f2,f3,f4)
table(g)
On Mon, Apr 13, 2009 at 7:33 AM, Nick Angelou wrote:
>
> Hi,
>
> I have the following table data:
>
> f1, f2, f3, f4.
>
> I want to compute the counts of unique combinations of f1-f4. In SQL I would
> just write:
>
> SELECT COUNT(*) FROM GROUP BY f1, f2,
You can use SQL commands directly on R data frames with the R sqldf package:
See home page:
http://sqldf.googlecode.com
On Mon, Apr 13, 2009 at 6:33 AM, Nick Angelou wrote:
>
> Hi,
>
> I have the following table data:
>
> f1, f2, f3, f4.
>
> I want to compute the counts of unique combinations of
Nick Angelou wrote:
Hi,
I have the following table data:
f1, f2, f3, f4.
I want to compute the counts of unique combinations of f1-f4. In SQL I would
just write:
SELECT COUNT(*) FROM GROUP BY f1, f2, ..,f4.
How to do this in R?
table(f1,f2,f3,f4) will give you the counts.
Other statistic
Thank you so much!
This solved my problem.
unbekannt wrote:
>
>
> Dear all,
>
> I am a newbie to R and practising at the moment.
>
> Here is my problem:
>
> I have a programme with 2 loops involved.
> The inner loop gets me matrices as output and saves all the values for me.
>
> Now once
Hi,
I have the following table data:
f1, f2, f3, f4.
I want to compute the counts of unique combinations of f1-f4. In SQL I would
just write:
SELECT COUNT(*) FROM GROUP BY f1, f2, ..,f4.
How to do this in R?
Thanks,
Nick
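A hedged sketch of an SQL-like layout with made-up factors:
as.data.frame(table(...)) gives one row per combination plus a Freq column,
much like GROUP BY with COUNT(*).

f1 <- gl(2, 8); f2 <- gl(4, 4, 16); f3 <- gl(2, 2, 16); f4 <- gl(2, 1, 16)

counts <- as.data.frame(table(f1, f2, f3, f4))
subset(counts, Freq > 0)   # drop the empty combinations, as GROUP BY would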
Hi,
Two thoughts I'd like to share on this subject:
1) Something really cool for conversions between units is the Google
search bar: type in " 3 inches in cm" and you get,
3 inches = 7.62 centimeters
or, " 3 £ in dollar",
3 UK£ = 4.4007 U.S. dollars
or "12 cubic meters to pi
On Mon, Apr 13, 2009 at 7:42 AM, Henry Cooper wrote:
> As part of an R code assignment I have been asked to find a quantitative
> procedure for assessing whether or not the data are normal.
>
> I have previously used the graphical procedure using the qqnorm command.
>
> Any help/tips would be gre
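A hedged sketch (the list's reply is cut off here): the Shapiro-Wilk test is
one common quantitative companion to the qqnorm() plot mentioned in the
question.

x <- rnorm(100)           # made-up sample

shapiro.test(x)           # a small p-value is evidence against normality
qqnorm(x); qqline(x)      # the graphical check from the question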
Dear R useRs,
after searching r-help and the R manuals for about one hour, I have the
following, probably easy, question for you.
I have the following R code, in the file test01.R
`fun1` <- function(x)
{
x <- x + 2
Stavros Macrakis wrote:
It would of course be nice if the existing difftime class could be fit
into this, as it is currently pretty much a second-class citizen. For
example, c of two time differences is currently a numeric vector,
losing its units (hours, days, etc.) completely.
That's actual
Hello,
In the past I have used the mtrace function from the debug package
intensively, but now with my current version of R (2.8.1) it is impossible
to use it any more.
I've updated all my packages, and I don't understand how to solve this
problem...
Here is an example code :
> foo<-function(){
I am trying to use the random forests package for classification in R.
The Variable Importance Measures listed are:
-mean raw importance score of variable x for class 0
-mean raw importance score of variable x for class 1
-MeanDecreaseAccuracy
-MeanDecreaseGini
Now I know what these "mean" as
You should probably try the -bivpois- package:
http://cran.r-project.org/web/packages/bivpois/index.html
A very good discussion of multivariate Poissons, negative binomials
etc. can be found in Chapter 7 of Rainer Winkelmann's book
"Econometric Analysis of Count Data" (Springer 2008). Most of the
On Sun, Apr 12, 2009 at 11:01 PM, wrote:
> It is, however, an interesting problem and there are the tools there to
> handle it. Basically you need to create a class for each kind of measure you
> want to handle ("length", "area", "volume", "weight", and so on) and then
> overload the arithmet
Dear list members,
Is there a package somewhere for jointly estimating two poisson processes?
I think the closest I've come is using the "SUR" option in the Zelig
package (see below), but when I try the "poisson" option instead of
the "SUR" optioin I get an error (error given below, and indeed,
r