Use & instead of &&
--Ista
On Sep 9, 2016 8:12 AM, "Matti Viljamaa" wrote:
> I’m getting strange behaviour when trying to extract rows from a
> two-column data.frame with double values.
>
> My data looks like:
>
> mom_iq kid_score
> 1 121.1175065
> 2 89.3618898
> 3
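(To illustrate the distinction: & is vectorized, && is not. The kid_score values below are invented.)
kidiq <- data.frame(mom_iq    = c(121.12, 89.36, 115.44, 99.45),
                    kid_score = c(65, 98, 85, 83))   # kid_score values invented
# & returns one TRUE/FALSE per row, so it works for subsetting:
kidiq[kidiq$mom_iq > 90 & kidiq$mom_iq < 120, ]
# && looks only at the first element (and is an error for length > 1
# conditions in recent R versions), so it cannot select rows:
# kidiq[kidiq$mom_iq > 90 && kidiq$mom_iq < 120, ]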
I’m getting strange behaviour when trying to extract rows from a two-column
data.frame with double values.
My data looks like:
mom_iq kid_score
1 121.1175065
2 89.3618898
3 115.4432085
4 99.4496483
…
and I’m testing extracting rows that have mom_
Thank you so much for your explanation.
I might be in trouble again when processing the log-likelihood analysis.
If that happens, may I ask for your instructions next time?
kmmoon100 <student.unimelb.edu.au> writes:
>
> Hello,
>
> How can I do a test of two Weibull distributions?
> I have two Weibull distribution sets from two wind datasets in order to
> check whether they are the same.
> I thought a 2-sample t-test would be applicable but I couldn't find any ways
> to d
Hello,
How can I do a test of two Weibull distributions?
I have two Weibull distribution sets from two wind datasets in order to
check whether they are the same.
I thought a 2-sample t-test would be applicable but I couldn't find any way
to do that on the Internet.
Does anyone know what type of test is
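(One possible approach, sketched here with simulated wind speeds standing in for the real data: fit separate and pooled Weibull distributions with MASS::fitdistr and compare them with a likelihood-ratio test.)
library(MASS)   # fitdistr()
set.seed(1)
wind1 <- rweibull(200, shape = 2.0, scale = 6.0)  # simulated stand-ins
wind2 <- rweibull(200, shape = 2.3, scale = 6.5)
ll <- function(x) fitdistr(x, "weibull")$loglik
# H0: one common Weibull; H1: two separate Weibulls (2 extra parameters)
lr <- 2 * ((ll(wind1) + ll(wind2)) - ll(c(wind1, wind2)))
pchisq(lr, df = 2, lower.tail = FALSE)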
Thank you so much.
You are the BEST:-)
Now I'm going to learn all this code and understand it.
Cc: R help
Sent: Saturday, April 13, 2013 5:18 PM
Subject: Re: Comparison of Date format
Hi,
DataA<- structure(list(ID = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
Hi,
DataA<- structure(list(ID = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 2L, 2L, 3L, 3L, 3L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L,
8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L,
8L, 8L, 8L, 8L), Status = c("A", "B", "A", "B", "A", "B", "A",
"A", "B", "B", "A", "A", "A",
Hi,
In cases like below:
DataA<- read.table(text="
ID,Status,Date1,Date2
1,A,3-Feb-01,15-May-01
1,B,15-May-01,16-May-01
1,A,16-May-01,3-Sep-01
1,B,3-Sep-01,13-Sep-01
1,C,13-Sep-01,26-Feb-04
2,A,9-Feb-01
Hi,
In the example you provided, it looks like the dates in Date2 happen first.
So I changed it a bit.
DataA<- read.table(text="
ID,Status,Date1,Date2
1,A,3-Feb-01,15-May-01
1,B,15-May-01,16-May-01
1,A,16-May-01,3-Sep-01
1,B,3-Sep-01,
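(For reference, a minimal sketch of the parsing step both messages rely on; note that %b is locale-dependent, so this assumes an English locale:)
DataA <- read.table(text = "
ID,Status,Date1,Date2
1,A,3-Feb-01,15-May-01
1,B,15-May-01,16-May-01
", header = TRUE, sep = ",", stringsAsFactors = FALSE)
DataA$Date1 <- as.Date(DataA$Date1, format = "%d-%b-%y")
DataA$Date2 <- as.Date(DataA$Date2, format = "%d-%b-%y")
DataA$Date1 < DataA$Date2   # element-wise comparison once both are Date objects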
t.org
>Subject: Re: [R] comparison of large data set
>
>Irucka,
>
>
>You could assign names to the compare.all list for example ...
>
> names(compare.all) <- paste0("Obs", 1:54)
>
>Then, when you create the subset list, justbig, it will have the
>appropriate names.
>
>
> -----Original Message-----
> >From: Adams, Jean [jvad...@usgs.gov]
> >Sent: 12/21/2012 1:25:24 PM
> >To: iruc...@mail2world.com
> >Cc: r-help@r-project.org
> >Subject: Re: [R] comparison of large data set
> >
> >Irucka,
> >
which site locations are
successful (> 0.7) and which ones are not (< 0.7).
Thank-you Jean.
Irucka
-----Original Message-----
>From: Adams, Jean [jvad...@usgs.gov]
>Sent: 12/21/2012 1:25:24 PM
>To: iruc...@mail2world.com
>Cc: r-help@r-project.org
>Subject: Re: [R] c
> justbig <- modeldepths[[compare.all > 0.7]]
> Error in modeldepths[[compare.all > 0.7]] :
> recursive indexing failed at level 2
>
> Once again, thank-you for your assistance.
>
> Irucka Embry
>
>
> -----Original Message-----
> >From: Adams, Jean [
Irucka,
I did not test this code out on any data, but I think it will work.
Jean
# a function to read in the data as a matrix of logicals
myreadfun <- function(file) {
  as.matrix(read.ascii.grid(file)$data) != 0
}
# names of the 54 modeled depth files
modfiles <- paste0("MaxFloodDepth_", 1:54, ".txt")
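(A hedged sketch of how the truncated reply might continue, reusing the names from this thread; the agreement statistic is an assumption:)
modeldepths <- lapply(modfiles, myreadfun)
observed <- myreadfun("MaxFloodDepth_Observed.txt")
# fraction of cells agreeing with the observed flood extent (assumed metric)
compare.all <- sapply(modeldepths, function(m) mean(m == observed))
names(compare.all) <- paste0("Obs", 1:54)
# single brackets here: [[ ]] is what triggered the "recursive indexing"
# error quoted above
justbig <- modeldepths[compare.all > 0.7]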
Hi, how are you?
I have the following truncated R code:
fileobs <- "MaxFloodDepth_Observed.txt"
file1 <- "MaxFloodDepth_1.txt"
file2 <- "MaxFloodDepth_2.txt"
...
file54 <- "MaxFloodDepth_54.txt"
observeddepth = as.matrix(read.ascii.grid(fileobs)$data)
observeddepth[observeddepth!=0]<-1
model
Hello Petr and thanks for your help! Thanks also for the correction on the
code; of course it is better to use the real mean and covariance than those
estimated by mean() and cov(). What I am after is that if I have the two
two-dimensional probability density functions of the distribution of my
On Wed, Apr 25, 2012 at 08:43:34PM +0000, Fabian Roger wrote:
> sorry for cross-posting
>
> Dear all,
>
> I have two (several) bivariate distributions with a known mean and
> variance-covariance structure (hence a known density function) that I would
> like to compare in order to get an interse
If you read the Posting Guide, you will see that cross-posting is
deprecated on r-help... (although not explicitly so on StackOverflow).
On Apr 25, 2012, at 4:43 PM, Fabian Roger wrote:
sorry for cross-posting
Dear all,
I have two (several) bivariate distributions with a known mean and
v
sorry for cross-posting
Dear all,
I have two (several) bivariate distributions with a known mean and
variance-covariance structure (hence a known density function) that I would
like to compare in order to get an intersection that tells me something about "how
different" these distributions are (a
Dear List Members,
I am looking for a statistical method or test which helps me to verify the
equality of two stochastic matrices (each row sums to 1). Could you
help me?
Thanks!
regards,
galla
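(If the underlying transition counts are available, one hedged approach is a row-by-row chi-squared test; the counts below are invented:)
counts1 <- matrix(c(30, 10, 5, 55), 2, byrow = TRUE)   # invented transition counts
counts2 <- matrix(c(25, 15, 8, 52), 2, byrow = TRUE)
# test row i of matrix 1 against row i of matrix 2
sapply(1:2, function(i) chisq.test(rbind(counts1[i, ], counts2[i, ]))$p.value)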
Ah, yes I see. Thanks John and Michael.
On Tue, Jan 3, 2012 at 6:06 PM, R. Michael Weylandt
wrote:
> There's a coercion to character implicit (i.e., the number 999 gets
> converted to the string "999") and then comparison is done in lexical
> (dictionary) order in which digits are lower than char
There's a coercion to character implicit (i.e., the number 999 gets
converted to the string "999") and then comparison is done in lexical
(dictionary) order in which digits are lower than characters.
You'll also note you get apparently strange behavior like "34" < "9"
if you don't think about thin
Dear Ista,
This is a consequence of coercion of the numbers to character:
> c("Z", "a", 999, Inf)
[1] "Z" "a" "999" "Inf"
> sort(c("Z", "a", 999, Inf))
[1] "999" "a" "Inf" "Z"
I hope this helps,
John
On Tue, 3 Jan 2012 17:56:29 -0500
Ista Zahn wrote:
> Hi all,
>
> I just discovered
Hi all,
I just discovered that R considers characters to be really big:
> "a" > 999
[1] TRUE
> "a" > 9e307
[1] TRUE
> "a" > 9e308
[1] FALSE
and that some characters are literally infinitely big:
> "Z" >= Inf
[1] TRUE
although not all:
> "a" > Inf
[1] FALSE
This came as a surprise to me (alt
On Fri, Aug 19, 2011 at 7:25 AM, Simon Kiss wrote:
> Dear list colleagues,
> I'm trying to come up with a test question for undergraduates to illustrate
> comparison of means from a complex survey design. The data for the example
> looks roughly like this:
>
> mytest<-data.frame(harper=rnorm(500
Dear list colleagues,
I'm trying to come up with a test question for undergraduates to illustrate
comparison of means from a complex survey design. The data for the example
looks roughly like this:
mytest<-data.frame(harper=rnorm(500, mean=60, sd=1), party=sample(c("BQ",
"NDP", "Conservative",
Hello!
I have faced a problem in the nlme environment. My intention is to fit a
penalized spline model in a mixed model framework. I want to make a
comparison of smooth curves between two groups, but for some reason I
get NaN in the output.
Here is the R code I have used.
#Z.overall is for truncated
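(Since the code is cut off, here is the standard penalized-spline-as-mixed-model construction the post appears to describe, on simulated data; the truncated-line basis Z matches the "Z.overall" naming only by assumption:)
library(nlme)
set.seed(1)
x <- seq(0, 1, length = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
knots <- seq(0.05, 0.95, length = 10)
Z <- outer(x, knots, function(x, k) pmax(x - k, 0))   # truncated-line basis
dat <- data.frame(y = y, x = x, all = 1)              # single dummy group
fit <- lme(y ~ x, data = dat, random = list(all = pdIdent(~ Z - 1)))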
Dear Petr,
Many thanks for your reply. You are fully right!
Best wishes,
Helin.
On Thu, Apr 14, 2011 at 12:40:53AM -0700, helin_susam wrote:
> Hi Petr,
>
> Your idea looks logical. So, following your idea, can we say that the
> expected amount of computation in unique(sample(...)) is less than in
> sample(...)? Because the expected length is 63.39677 in the unique case, while
>
Hi Petr,
Your idea looks logical. So, following your idea, can we say that the
expected amount of computation in unique(sample(...)) is less than in
sample(...)? Because the expected length is 63.39677 in the unique case, while
the expected length is 100 in the non-unique case?
Thanks for the reply,
Helin
On Wed, Apr 13, 2011 at 04:12:39PM -0700, helin_susam wrote:
> Hi dear list,
>
> I want to compare the amount of computation of two functions, for example
> by using this algorithm:
>
> data <- rnorm(n=100, mean=10, sd=3)
>
> output1 <- list ()
> for(i in 1:100) {
> data1 <- sample(100, 100, re
Hi dear list,
I want to compare the amount of computation of two functions, for example
by using this algorithm:
data <- rnorm(n=100, mean=10, sd=3)
output1 <- list()
for (i in 1:100) {
  data1 <- sample(100, 100, replace = TRUE)
  statistic1 <- mean(data1)
  output1 <- c(output1, list(statistic1))
}
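(Two hedged ways to attack this: time the alternatives directly, and check the expected-length figure analytically:)
set.seed(1)
system.time(for (i in 1:10000) sample(100, 100, replace = TRUE))
system.time(for (i in 1:10000) unique(sample(100, 100, replace = TRUE)))
# expected number of distinct values in sample(100, 100, replace = TRUE):
100 * (1 - (99/100)^100)   # 63.39677, the figure quoted above in the thread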
Thank you very much for the answer, it helped me a lot!
Regards,
Sabine Woschitz
Original Message
> Date: Sat, 12 Feb 2011 01:04:46 -0800 (PST)
> From: "Matthieu Lesnoff [via R]"
>
> To: sabwo
> Subject: Re: Comparison of glm.nb and negbin from the package aod
>
>
> D
Dear Sabine
In negbin(aod), the deviance is calculated by:
# full model
logL.max <- sum(dpois(x = y, lambda = y, log = TRUE))
# fitted model
logL <- -res$value
dev <- -2 * (logL - logL.max)
(the log-likelihoods contain all the constants)
As Ben Bolker said, whatever the formula used for deviance, diff
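(A hedged sketch of the two conventions, assuming the faults data from this thread is available as a data frame with columns n and ll:)
library(MASS)
fit <- glm.nb(n ~ ll, data = faults)
y <- faults$n; mu <- fitted(fit); th <- fit$theta
# NB-saturated deviance (glm.nb's convention):
2 * sum(dnbinom(y, mu = y, size = th, log = TRUE) -
        dnbinom(y, mu = mu, size = th, log = TRUE))
# Poisson-saturated version (negbin's convention, per the reply above):
2 * (sum(dpois(y, lambda = y, log = TRUE)) - as.numeric(logLik(fit)))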
sabwo <gmx.at> writes:
[big snip; comparing aod::negbin and MASS::glm.nb fits]
> The thing I really don't understand is why there is such a big difference
> between the deviances (glm.nb = 30.67 and negbin = 52.09). Shouldn't they be
> nearly the same?
>
I don't have time to dig into this right
I have fitted the faults data with glm.nb and with the function negbin from the
package aod. The output of both is the following:
summary(glm.nb(n~ll, data=faults))
Call:
glm.nb(formula = n ~ ll, data = faults, init.theta = 8.667407437,
link = log)
Deviance Residuals:
Min 1Q Media
Hi all,
I am struggling to understand kernel-based methods. I am trying to understand
two SVM methods (caret::train and e1071::svm()). I will try to put my
questions in the following points:
1. About e1071::svm(): on what basis is the final model selected when we
use the cross = 10 parameter in svm()
Hello,
Here's one way to do it. It assumes dat has character values, not factors.
dat2 <- matrix(0, nrow(dat), ncol(dat))
dat2[ is.na(dat) ] <- NA
dat2[ apply(dat, 2, function(x) x != ref) ] <- 1      # any mismatch with the reference
dat2[ apply(dat, 2, function(x) grepl(",", x)) ] <- 2 # heterozygous last, so 1 doesn't overwrite 2
Michael
On 12 October 2010 13:24, burgundy
Hello,
I have an example file which can be generated using:
dat <- read.table(tc <- textConnection(
'T T,G G T
C NA G G
A,T A A NA'), sep="")
I also have a reference file with the same number of rows, for example:
G
C
A
I would like to transform the file to numerical values using the followin
On Jul 12, 2010, at 6:46 PM, David Winsemius wrote:
On Jul 12, 2010, at 6:03 PM, harsh yadav wrote:
Hi,
I have a function in R that compares two very large strings for
about 1
million records.
The strings are very large URLs like:-
http://query.nytimes.com/gst/sitesearch_selector.html
On Jul 12, 2010, at 6:03 PM, harsh yadav wrote:
Hi,
I have a function in R that compares two very large strings for
about 1
million records.
The strings are very large URLs like:-
http://query.nytimes.com/gst/sitesearch_selector.html?query=US+Visa+Laws&type=nyt&x=25&y=8
.
..
or of lar
Hi,
I have a function in R that compares two very large strings for about 1
million records.
The strings are very large URLs like:-
http://query.nytimes.com/gst/sitesearch_selector.html?query=US+Visa+Laws&type=nyt&x=25&y=8.
..
or of larger lengths.
The data-frame looks like:-
id url
1
http:/
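(Whatever the row-wise function does, the first thing to try at this scale is a vectorized comparison; df, url1 and url2 are hypothetical names:)
# one vectorized pass instead of a per-record function call
same <- df$url1 == df$url2
# if only a prefix matters, compare substrings instead
same_prefix <- substr(df$url1, 1, 100) == substr(df$url2, 1, 100)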
> Nathalie Yauschew-Raguenes wrote:
>
>> Hi,
>>
>> I have two series of data (measurements of growth under
>> two different conditions).
>> To model these data I use the same function, which is:
>>
>> formula <- y ~ Asym_inf + Asym_sup * ( (1 / (1 + (n1 * (exp(
>> (tmid1-x) / sca
Nathalie Yauschew-Raguenes wrote:
Hi,
I have two series of data (measurements of growth under two
different conditions).
To model these data I use the same function, which is:
formula <- y ~ Asym_inf + Asym_sup * ( (1 / (1 + (n1 * (exp( (tmid1-x)
/ scal1) )^(1/n1) ) ) ) - (1 / (1
Hi,
I have two series of data (measurements of growth under two
different conditions).
To model these data I use the same function, which is:
formula <- y ~ Asym_inf + Asym_sup * ( (1 / (1 + (n1 * (exp( (tmid1-x)
/ scal1) )^(1/n1) ) ) ) - (1 / (1 + (n2 * (exp( (tmid2-x) / scal2)
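(A common way to compare the two conditions, sketched on simulated data with a simpler logistic curve than the formula above: fit one pooled curve and one curve per condition, then compare with anova().)
set.seed(1)
d <- data.frame(x = rep(1:20, 2), cond = factor(rep(c("A", "B"), each = 20)))
d$y <- with(d, 10 / (1 + exp((10 - x) / ifelse(cond == "A", 2, 3))) +
            rnorm(40, sd = 0.3))
# pooled: one set of parameters for both conditions
fp <- nls(y ~ Asym / (1 + exp((tmid - x) / scal)), data = d,
          start = list(Asym = 10, tmid = 10, scal = 2))
# separate: parameters indexed by condition
fs <- nls(y ~ Asym[cond] / (1 + exp((tmid[cond] - x) / scal[cond])), data = d,
          start = list(Asym = c(10, 10), tmid = c(10, 10), scal = c(2, 2)))
anova(fp, fs)   # does allowing condition-specific curves improve the fit?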
you need:
r_squared[[i]]
What is the problem you are trying to solve?
Sent from my iPhone.
On Dec 15, 2009, at 2:29, Tom Pitt wrote:
Hi All,
Can you tell me why I get the error message below? It's driving me
nuts.
Thanks,
Tom
r_squared
[[1]]
[1] 0.9083936
[[2]]
[1] 0.8871647
[[3
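(The distinction in one line each; r_squared here is a stand-in list:)
r_squared <- list(0.9083936, 0.8871647)
r_squared[1]     # a one-element list
r_squared[[1]]   # the numeric value itself, which is what most arithmetic needs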
The combination of some data and an aching desire for an answer does not
ensure that a reasonable answer can be extracted from a given body of
data.
~ John Tukey
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On behalf of Tom Pitt
Sent: Tuesday, 15 December 2009 11:30
To: r-help@r-project.org
Subject
Hi All,
Can you tell me why I get the error message below? It's driving me nuts.
Thanks,
Tom
> r_squared
[[1]]
[1] 0.9083936
[[2]]
[1] 0.8871647
[[3]]
[1] 0.8193883
[[4]]
[1] 0.728157
[[5]]
[1] 0.8849525
[[6]]
[1] 0.8459416
[[7]]
[1] 0.6702318
[[8]]
[1] 0.02997816
[[9]]
[1] 0.8974268
That is wonderful, now I think I am all set! Thanks again!
Tony Plate wrote:
>
> This is a tricky data entry problem. The right technique will depend on
> the fine details of the data, and it's not clear what those are. E.g.,
> when you say "In my first column, for example, I have "henry" ",
This is a tricky data entry problem. The right technique will depend on the fine details of the
data, and it's not clear what those are. E.g., when you say "In my first column, for example,
I have "henry" ", it's unclear to me whether the double quotes are part of
the data or not - whi
On Nov 11, 2009, at 1:02 PM, esterhazy wrote:
Yes, thanks for this, this is exactly what I want to do.
However, I have a remaining problem which is how to get R to
understand that
each entry in my matrix is a vector of names.
I have been trying to import my text file with the names in eac
Yes, thanks for this, this is exactly what I want to do.
However, I have a remaining problem which is how to get R to understand that
each entry in my matrix is a vector of names.
I have been trying to import my text file with the names in each vector of
names enclosed in quotes and separated by
Nice problem!
If I understand you correctly, here's how to do it (with list-based matrices):
set.seed(1)
(x <- matrix(lapply(rpois(10,2)+1, function(k) sample(letters[1:10], size=k)),
             ncol=2, dimnames=list(1:5, c("A","B"))))
  A           B
1 Character,2 Character,5
2 Character,2 Char
Hi,
I have a matrix with two columns, and the elements of the matrix are
vectors.
So for example, in line 3 of column 1 I have a vector v31 = c("marc", "robert",
"marie").
What I need to do is to compare all vectors in column 1 and 2, so as to get,
for example setdiff(v31,v32) into a new column.
I
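(Following the list-matrix representation above, the column-wise setdiff could look like this; the names are invented:)
x <- matrix(list(c("marc", "robert", "marie"), c("anne"),
                 c("robert", "zoe"), c("marie", "anne")),
            ncol = 2)
# element-wise setdiff of column 1 against column 2
diffs <- mapply(setdiff, x[, 1], x[, 2], SIMPLIFY = FALSE)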
On Tue, 4 Aug 2009, Tom La Bone wrote:
My concern is that the two tests give different DW statistics for the
weighted fit and very different p-values for the same DW statistic for the
unweighted fit. Is there a "right" answer here?
dwtest() is not handling WLS at the moment. I'll have a look w
My concern is that the two tests give different DW statistics for the
weighted fit and very different p-values for the same DW statistic for the
unweighted fit. Is there a "right" answer here?
I think the statistics are the same, but the p-values are not exactly
the same because they use different methods for the p-value: car uses
bootstrapping and lmtest uses the "pan" algorithm, according to the help
pages.
2009/8/4 Tom La Bone :
>
> Allow me to reword this question. I have performed two fi
Allow me to reword this question. I have performed two fits to the same set
of data below: a weighted fit and an unweighted fit. I performed the
Durbin-Watson tests on each fit using "dwtest" and "durbin.watson". For a
given fit (weighted or unweighted), should both dwtest and durbin.watson be
giv
Should "dwtest" and "durbin.watson" be giving me the same DW statistic and
p-value for these two fits?
library(lmtest)
library(car)
X <- c(4.8509E-1,8.2667E-2,6.4010E-2,5.1188E-2,3.4492E-2,2.1660E-2,
3.2242E-3,1.8285E-3)
Y <- c(2720,1150,1010,790,482,358,78,35)
W <- 1/Y^2
fit <- lm(Y ~
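(The snippet is cut off above; a hedged completion, assuming the model was Y ~ X with weights W as the variables suggest. In current car the function is durbinWatsonTest(), with durbin.watson() as the older name used in this thread:)
fit_u <- lm(Y ~ X)               # unweighted fit (assumed model)
fit_w <- lm(Y ~ X, weights = W)  # weighted fit
dwtest(fit_u)                    # lmtest: "pan" algorithm p-value
durbinWatsonTest(fit_u)          # car: bootstrap p-value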
Hi All,
I found an interesting thread discussing R and other packages here:
http://anyall.org/blog/2009/02/comparison-of-data-analysis-packages-r-matlab-scipy-excel-sas-spss-stata/
Plenty of well-reasoned comments.
I thought it may be informative
Hi Dylan, Chuck,
Mark Difford wrote:
>> Coming to your question [?] about how to generate the kind of contrasts
>> that Patrick wanted
>> using contrast.Design. Well, it is not that straightforward, though I may
>> have missed
>> something in the documentation to the function. In the past I hav
Hi Dylan, Chuck,
>> contrast(l, a=list(f=levels(d$f)[1:3], x=0), b=list(f=levels(d$f)[4],
>> x=0))
There is a subtlety here that needs to be emphasized. Setting the
interacting variable (x) to zero is reasonable in this case, because the
mean value of rnorm(n) is zero. However, in the real wor
On 2/16/2009 10:18 PM, Dylan Beaudette wrote:
> On Mon, Feb 16, 2009 at 5:28 PM, Patrick Giraudoux
> wrote:
>> Greg Snow a écrit :
>>> One approach is to create your own contrasts matrix:
>>>
>>>
mycmat <- diag(8)
mycmat[ row(mycmat) == col(mycmat) + 1 ] <- -1
mycmati <- solve(mycmat)
Hi Dylan,
>> Am I trying to use contrast.Design() for something that it was not
>> intended for? ...
I think Prof. Harrell's main point had to do with how interactions are
handled. You can also get the kind of contrasts that Patrick was interested
in via multcomp. If we do this using your artifi
On Mon, Feb 16, 2009 at 5:28 PM, Patrick Giraudoux
wrote:
> Greg Snow a écrit :
>> One approach is to create your own contrasts matrix:
>>
>>
>>> mycmat <- diag(8)
>>> mycmat[ row(mycmat) == col(mycmat) + 1 ] <- -1
>>> mycmati <- solve(mycmat)
>>> contrasts(agefactor) <- mycmati[,-1]
>>>
>>
>> Now
Greg Snow a écrit :
> One approach is to create your own contrasts matrix:
>
>
>> mycmat <- diag(8)
>> mycmat[ row(mycmat) == col(mycmat) + 1 ] <- -1
>> mycmati <- solve(mycmat)
>> contrasts(agefactor) <- mycmati[,-1]
>>
>
> Now when you use agefactor, the intercept will be the first age gr
To: r-h...@stat.math.ethz.ch
> Subject: [R] Comparison of age categories using contrasts
>
> Dear listers,
>
> I would like to compare the levels of a factor with 8 age categories
> (0,10] (10,20] (20,30] (30,40] (40,50] (50,60] (60,70] (70,90]
> (however,
> the factor ha
Mark Difford wrote:
Hi Patrick,
The default in glm is contr.treatment (for unordered factors), which
leads to comparing each level to the first one. I would prefer to
compare the 2nd to the 1st, the 3rd to the 2nd, the 4th to the 3rd,
etc...
The functions ?C and ?contrasts allow you t
Dear listers,
I would like to compare the levels of a factor with 8 age categories
(0,10] (10,20] (20,30] (30,40] (40,50] (50,60] (60,70] (70,90] (however,
the factor has not been ordered yet). The default in glm is
contr.treatment (for unordered factors), which leads to comparing each
level to th
Hi Patrick,
>> The default in glm is contr.treatment (for unordered factors), which
>> leads to comparing each level to the first one. I would prefer to
>> compare the 2nd to the 1st, the 3rd to the 2nd, the 4th to the 3rd,
>> etc...
The functions ?C and ?contrasts allow you to set up yo
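(MASS ships exactly these successive-difference contrasts as contr.sdif(), which avoids building the matrix by hand as in Greg Snow's reply; agefactor below is a stand-in:)
library(MASS)   # contr.sdif()
agefactor <- cut(c(5, 15, 25, 35, 45, 55, 65, 80),
                 breaks = c(0, 10, 20, 30, 40, 50, 60, 70, 90))
contrasts(agefactor) <- contr.sdif(8)
# each model coefficient now estimates level k+1 minus level k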
On Thu, Aug 14, 2008 at 07:46:41PM +1000, Jim Lemon wrote:
> On Wed, 2008-08-13 at 19:14 -0700, Mark Home wrote:
> > Dear All:
> >
> > I have a clinical study where I would like to compare the demographic
> > information for 2 samples in a study. The demographics include both
> > categorical an
On Wed, 2008-08-13 at 19:14 -0700, Mark Home wrote:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and continuous variables. I would like to be able to say whether
> the
On Wed, 2008-08-13 at 19:14 -0700, Mark Home wrote:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and continuous variables. I would like to be able to say whether
> t
Use a smaller alpha value rather than 0.05.
C
On Thu, Aug 14, 2008 at 10:14 AM, Mark Home <[EMAIL PROTECTED]> wrote:
> Dear All:
>
> I have a clinical study where I would like to compare the demographic
> information for 2 samples in a study. The demographics include both
> categorical and con
Dear All:
I have a clinical study where I would like to compare the demographic
information for 2 samples in a study. The demographics include both
categorical and continuous variables. I would like to be able to say whether
the demographics are significantly different or not.
The majority o
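(A hedged sketch of the usual recipe, on hypothetical columns: a t-test per continuous variable, a chi-squared test per categorical one, and a multiplicity adjustment in the spirit of the smaller-alpha advice above:)
p_cont <- t.test(age ~ sample_id, data = demo)$p.value          # continuous
p_cat  <- chisq.test(table(demo$sex, demo$sample_id))$p.value   # categorical
p.adjust(c(p_cont, p_cat), method = "bonferroni")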
Kevin J. Thompson wrote:
> hi,
>
> my 0.2$ the rpy python module is excellent, in addition to those Wacek
> mentioned.
>
> another free alternative, particularly for graphics is scilab:
> http://www.scilab.org/
>
>
for scilab there is the rscilab module, so that we are with r agai
June 20, 2008 2:47 pm
Subject: Re: [R] Comparison between R and MATLAB
> for many tasks, gnu octave (a 'matlab clone', no offense to gnu folks
> intended) is quite sufficient, and *free* (+ open source). you
> can run
> some of matlab code in octave.
>
> y
for many tasks, gnu octave (a 'matlab clone', no offense to gnu folks
intended) is quite sufficient, and *free* (+ open source). you can run
some matlab code in octave.
you should also check if sage (http://www.sagemath.org/) can do the job
for you, it's stuffed with all sorts of maths utiliti
The easy way around that is to create an account, "Mathworks", with a
common group to which all who will use MatLab belong; then su - Mathworks
should satisfy the license manager.
Clint Bowman INTERNET: [EMAIL PROTECTED]
Air Dispersion Modeler INTERNET: [EMAIL P
"Shubha Vishwanath Karanth" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Can I get a comparison between R and MATLAB? How is R more efficient than MATLAB?
Or what are the weaknesses of R compared to MATLAB?
Don't forget to compare licenses and cost. Matlab's rigid and unreasonable
l
The short answer on comparing R and Matlab is that it depends on
your benchmark and which church you happen to frequent. Some people
swear that Matlab is superior, but I haven't seen the evidence for
that. The benchmarks that come closer to being transparent are more
equivocal, as far as
This discussion has already occurred, to my knowledge at least once. I
would suggest searching the archived list and seeing what you get. If you
still have questions after you have a look, then fire off a couple of specifics,
but speaking for myself I don't know anything about matlab, only S (and even
t
Hi R,
Can I get a comparison between R and MATLAB? How is R more efficient than MATLAB? Or
what are the weaknesses of R compared to MATLAB?
Thank you very much for your help,
Shubha
Shubha Karanth | Amba Research
Ph +91 80 3980 8031 | Mob +91 94 4886 4510
Bangalore * Colombo * London *
Sorry for cross-posting
Hi all,
I would like to make a 2-by-2 comparison of intercepts and slopes from
linear regression models.
Can you advise me on that?
All the best,
Diogo André Alagador
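(One standard route, sketched with hypothetical names: fit both regressions in a single model with an interaction; the group main effect tests the intercept difference and the interaction tests the slope difference.)
fit <- lm(y ~ x * group, data = d)   # d, x, y, group are hypothetical
summary(fit)   # group terms: intercept differences; x:group terms: slope differences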
ui's suggestion and use a relational database system to handle the huge
> data.
>
>
>
> > Date: Sat, 26 Jan 2008 20:40:51 -0500
>
> > From: [EMAIL PROTECTED]
> > To: [EMAIL PROTECTED]
> > Subject: Re: [R] Comparison of aggregate in R and group by in mys
40:51 -0500
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [R] Comparison of aggregate in R and group by in mysql
> CC: [EMAIL PROTECTED]
>
> I think with your data you will be computing a matrix that is 7049 x
> 11704. This will require about 700MB of memor
an))
> (I killed it after 30 minutes)
>
>
>
> > Date: Sat, 26 Jan 2008 19:55:51 -0500
> > From: [EMAIL PROTECTED]
> > To: [EMAIL PROTECTED]
> > Subject: Re: [R] Comparison of aggregate in R and group by in mysql
> > CC: [EMAIL PROTECTED]
>
> >
> >
55:51 -0500
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [R] Comparison of aggregate in R and group by in mysql
> CC: [EMAIL PROTECTED]
>
> How large is your dataframe? How much memory do you have on your
> system? Are you paging? Here is a test I ran wit
How large is your dataframe? How much memory do you have on your
system? Are you paging? Here is a test I ran with a data frame with
1,000,000 entries and it seems to be fast:
> n <- 1000000
> x <- data.frame(A=sample(LETTERS,n,TRUE), B=sample(letters[1:4],n,TRUE),
+ C=sample(LETTERS[1:4],
How does it compare if you read it into R and then do your
aggregate with sqldf:
library(sqldf)
# example using builtin data set CO2
CO2agg <- sqldf("select Plant, Type, Treatment, avg(conc) from CO2
group by Plant, Type, Treatment")
# or using your data:
Xagg <- sqldf("select Group, Age, T
huali,
if I were you, I would create a view on the MySQL server to aggregate
the data first and then use R to pull the data through this created
view. This is not only applicable to R but also a general guideline in
similar situations.
Per my understanding and experience, R is able to do data manipul
Hi, netters,
First of all, thanks a lot for all the prompt replies to my earlier question
about "merging" data frames in R.
Actually that's the equivalent of the "join" clause in MySQL.
Now I have another question. Suppose I have a data frame X with lots of
columns/variables:
Name, Age,Group,
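(For comparison, the base-R counterpart of the GROUP BY, using aggregate(); Measure stands in for whatever column of X is being averaged:)
Xagg <- aggregate(Measure ~ Name + Age + Group, data = X, FUN = mean)
# one row per Name/Age/Group combination, as in the MySQL query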
Daniel Stepputtis wrote:
>
> Dear Ben,
> I was searching for a solution to the same problem. Thank you very much, it helped me a
> lot and I will use it quite often!
>
> In addition to the problem given by tintin_et_milou, I have to compare
> two pairs of vectors.
>
> I.e. I have two datasets each with l
Dear Ben,
I was searching for a solution to the same problem. Thank you very much, it helped me a lot
and I will use it quite often!
In addition to the problem given by tintin_et_milou, I have to compare two
pairs of vectors.
I.e. I have two datasets each with latitude and longitude (which defines the
geo
tintin_et_milou wrote:
>
> Thanks for your help, but there is a further problem. The two vectors do
> not have the same length, so there is a problem with cbind. I give you an
> example. My first vector is
>
> >g[g[,1]>2035 & g[,1]<2050,]
>
> M.Z Intensity
> 2035.836 652.9494
> 20
Thanks for your help, but there is a further problem. The two vectors do
not have the same length, so there is a problem with cbind. I give you an example.
My first vector is
>g[g[,1]>2035 & g[,1]<2050,]
M.Z Intensity
2035.836 652.9494
2035.939 664.5841
2036.043 696.0554
2036.14
tintin_et_milou wrote:
>
> Hello,
>
> I have a vector of two columns like this:
> m/Z I
> 1000.235 125
> 1000.356 126.5
>
> ...
>
> and a second vector with only one column:
> m/Z
> 995.547
> 1000.320
> ...
>
> For each value of the second vector I want to as
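(A hedged sketch of the matching step, reusing g from earlier in this thread; the nearest-peak criterion is an assumption:)
query <- c(995.547, 1000.320)
idx <- sapply(query, function(q) which.min(abs(g[, "M.Z"] - q)))
cbind(query, nearest = g[idx, "M.Z"], intensity = g[idx, "Intensity"])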
Dear r-help mailing list,
thanks for the advice on the last question; with it I've solved the problem.
But now I would like to know how it is possible to compare two (or more) cluster
dendrograms, because for each cluster dendrogram coming from a different data
frame the height is measured in different uni
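(One scale-free option, sketched on a built-in dataset: correlate the cophenetic distances of the two trees, which sidesteps the different height units.)
hc1 <- hclust(dist(scale(mtcars[, 1:4])))
hc2 <- hclust(dist(scale(mtcars[, 5:8])))
cor(cophenetic(hc1), cophenetic(hc2))   # near 1 = very similar trees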