Hi Steve,
Could you please tell me what I should change in this equation: accuracy <-
sum(mytest == mytestdata[,1]) / length(mytest) in order to compute the accuracy
of SVM when using cross-validation?
Cheers,
Amy
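(A hedged side note, not from the original thread: if the model is fit with e1071's
svm() using the cross argument, the per-fold accuracies are stored on the fitted
object, so no manual formula is needed. The data names below are illustrative.)
library(e1071)
fit <- svm(as.factor(y) ~ ., data = mytraindata, cross = 10)  # 10-fold cross-validation
fit$accuracies    # accuracy in each fold
fit$tot.accuracy  # overall cross-validated accuracy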
hussain abu-saaq wrote:
>
> ...
> y = fminsearch('pbond',.15,options,p,c,nofcup,delta/epsilon);
>
Check the documentation on "optim". There are several methods available; SANN
is probably the most robust if you really need a gradient-free method such as
fminsearch.
Dieter
I'm just beginning R, with the book Using R for Introductory Statistics, and one
of the early questions has me baffled. The question is: create the
sequence 1,2,3,4,5,4,3,2,1 using seq() and rep().
Now, as a programmer, I am kicking myself for not being able to figure it out.
I mean, as simple as a f
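(One possible spelling, offered as a sketch; the book asks for seq() and rep(), but
a straightforward version uses seq() alone, and the second line builds the same
sequence from a single ascending run:)
c(seq(1, 5), seq(4, 1))              # 1 2 3 4 5 4 3 2 1
pmin(seq_len(9), rev(seq_len(9)))    # same sequence, another way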
Thanks!! I was stuck in Python brain.
Simon Blomberg wrote:
substr(x,1,3)
On Thu, 2010-03-04 at 21:32 -0800, Nick Matzke wrote:
Hi,
This has got to be easy, but for some reason the simplest things can be
the hardest to find help on in R.
How the heck do I subset a string?
e.g.,
x = "abcdef
substr(x,1,3)
On Thu, 2010-03-04 at 21:32 -0800, Nick Matzke wrote:
> Hi,
>
> This has got to be easy, but for some reason the simplest things can be
> the hardest to find help on in R.
>
> How the heck do I subset a string?
>
> e.g.,
>
> x = "abcdef"
>
> I just want the first 3 characters.
Hi,
This has got to be easy, but for some reason the simplest things can be
the hardest to find help on in R.
How the heck do I subset a string?
e.g.,
x = "abcdef"
I just want the first 3 characters.
Cheers!
Nick
--
Nicholas J. Matzke
On 04/03/2010 11:40 PM, David Winsemius wrote:
On Mar 4, 2010, at 10:58 PM, Duncan Murdoch wrote:
On 04/03/2010 10:32 PM, David Winsemius wrote:
On Mar 4, 2010, at 9:47 PM, jonas garcia wrote:
When I opened the file with a hex-editor, the problematic
character turned out to be "1a"
I am att
On Mar 4, 2010, at 10:59 PM, Juliet Ndukum wrote:
The data set consists of two sets of matrices, as labelled by the
columns, T's and C's.
xy
      x    T1    T2    T3    T4    T5    C1    C2    C3    C4    C5
[1,] 50 0.00 0.00 33.75 0.00 0.00 0.00 36.76 0.00 35.26 0.00
[2,] 13 34.41
Hi:
The function below counts the number of positive-valued T's and C's and
tests whether each count is at least as large as the test value n:
f <- function(x, n) (sum(x[grepl("^T", names(x))] > 0) >= n) &
(sum(x[grepl("^C", names(x))] > 0) >= n)
Apply it to xy, adding the v
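(The message is truncated here; a minimal sketch of applying f row-wise, assuming
xy is the matrix shown above and n = 3 is the required count:)
ok <- apply(xy, 1, f, n = 3)   # test each row of xy
cbind(xy, ok)                  # append the logical result as an extra column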
On Thu, Mar 4, 2010 at 4:42 PM, Seth W Bigelow wrote:
> I wish to create a multipanel plot (map) from several datasets ("d" and
> "q" in the example below). I can condition the main xyplot statement on
> the "site" variable, but I don't know how to pass a conditioning variable
> to panel.xyplot pl
On Mar 4, 2010, at 10:58 PM, Duncan Murdoch wrote:
On 04/03/2010 10:32 PM, David Winsemius wrote:
On Mar 4, 2010, at 9:47 PM, jonas garcia wrote:
When I opened the file with a hex-editor, the problematic
character turned out to be "1a"
I am attaching a sample DAT file with 3 lines (the seco
Dear users,
I am trying to show the equation (including coefficients from the model
estimates) for a gam model but do not understand how to.
Slide 7 from one of the authors presentations (gam-theory.pdf URL:
http://people.bath.ac.uk/sw283/mgcv/) shows a general equation
log{E(yi)} = α + β xi + f
Hi,
I have many subgraphs, I want to calculate the degree of vertices in each
subgraph and hold them in a list to be able to compare with the values in
another list.
I tried this code:
y <- lapply(0:4, function(i) paste(sgr, i, sep="") )
here sgr1 for example is a graph object, I w
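(The message is truncated; a hedged sketch using the igraph package, assuming the
subgraphs sgr0 ... sgr4 already exist as graph objects in the workspace:)
library(igraph)
nm <- paste("sgr", 0:4, sep = "")                    # "sgr0" ... "sgr4"
deg_list <- lapply(nm, function(g) degree(get(g)))   # one degree vector per subgraph
names(deg_list) <- nm
deg_list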
When I opened the file with a hex-editor, the problematic character turned
out to be "1a"
I am attaching a sample DAT file with 3 lines (the second line is the one
with the undesirable character).
The furthest I could get was through readBin:
> tmp<- readBin("new.dat", what = "raw", n=1
Thanks for the reply, Stephan. I don't want to use R to predict the future
value. I am looking to write the logic in a programming language like Java
to predict future values using the model coefficients generated by R. For
this, I would like to know what formula to use to estimate the value at an
The data set consists of two sets of matrices, as labelled by the columns, T's
and C's.
> xy
      x    T1    T2    T3    T4    T5    C1    C2    C3    C4    C5
[1,] 50 0.00 0.00 33.75 0.00 0.00 0.00 36.76 0.00 35.26 0.00
[2,] 13 34.41 0.00 0.00 36.64 32.86 34.11 35.80 37.74 0.00 0
On 04/03/2010 10:32 PM, David Winsemius wrote:
On Mar 4, 2010, at 9:47 PM, jonas garcia wrote:
When I opened the file with a hex-editor, the problematic character
turned out to be "1a"
I am attaching a sample DAT file with 3 lines (the second line is
the one with the undesirable character).
On Mar 4, 2010, at 9:47 PM, jonas garcia wrote:
When I opened the file with a hex-editor, the problematic character
turned out to be "1a"
I am attaching a sample DAT file with 3 lines (the second line is
the one with the undesirable character).
The furthest I could get was through readBin
This should work for you:
input <- file('/recv/new.dat', 'rb')              # original file, opened in binary mode
output <- file('/recv/newV2.dat', 'wb')           # cleaned copy, written in binary mode
repeat {
    x <- readBin(input, what='raw', n=1)          # read one byte at a time
    if (length(x) == 0) break                     # end of file
    x[which(x == as.raw(0x1a))] <- charToRaw(' ') # replace the 0x1a byte with a space
    writeBin(x, output)
}
close(input)
close(output)
On Th
Is it possible to run a R script from Java (via JRI (part of rJava):
http://www.rforge.net/rJava/) without adding it line by line into a
JRI java application?
Ralf
You can create a right-mouse menu command to run an R program as follows
(although the details may be different for different versions of Windows).
In Windows Explorer:
1. Tools / Folder Options / File Types
2. find the extension for R files and push Advanced
3. add a new action called "Run" w
If it does finish, it will take some time. And what for?
If all you want is a plot to look at, why are you using all 33 million
observations? Chances are that a sample of, say, 1 will give you about as
good a plot as the full ecdf would. Have you tried
plot.ecdf(c(range(myDataVector), sam
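(The suggestion is cut off; a hedged sketch of the sampling idea, with a sample
size of 10,000 chosen purely for illustration:)
idx <- sample(length(myDataVector), 1e4)
plot.ecdf(myDataVector[idx])   # usually indistinguishable from the ecdf of the full vector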
as.Date('17/02/2005','%d/%m/%Y')
[1] "2005-02-17"
(Read the documentation more carefully to distinguish between %y and
%Y; I guess you tried lots of combinations but never tried the
correct one, so just be more careful about matching what your data
looks like with the format string you create.)
-D
Try this:
Lines <- "Date,Time,Camera,Volume
57,2009-10-09,5:00:00 PM,MANBRIN_RIVER_NB,210
58,2009-10-09,5:10:00 PM,MANBRIN_RIVER_NB,207
59,2009-10-09,5:20:00 PM,MANBRIN_RIVER_NB,250
60,2009-10-09,5:30:00 PM,MANBRIN_RIVER_NB,193
61,2009-10-09,5:40:00 PM,MANBRIN_RIVER_NB,205
62,2009-10-09,6:00:00 P
I wish to create a multipanel plot (map) from several datasets ("d" and
"q" in the example below). I can condition the main xyplot statement on
the "site" variable, but I don't know how to pass a conditioning variable
to panel.xyplot plot so that the x-y coordinates from dataset q are only
plot
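(A hedged sketch of one way to do this, assuming d and q are data frames with
columns x, y and site, that q$site uses the same levels as d$site, and that
packet.number() from lattice identifies the panel being drawn:)
library(lattice)
xyplot(y ~ x | site, data = d,
       panel = function(x, y, ...) {
           panel.xyplot(x, y, ...)
           lev <- levels(factor(d$site))[packet.number()]   # site shown in this panel
           with(subset(q, site == lev),
                panel.points(x, y, pch = 16, col = "red"))
       })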
My foolish move for this week: I'm going to go way out on a limb and
guess what the OP wanted was something like this.
i=1, foo = x*exp(-x)
i=2, foo= x^2*exp(-x)
i=3, foo = x^3*exp(-x)
.
.
.
In which case he really should create a vector bar <- rep(NA, 5),
and then inside the loop,
bar[i]<-x^i
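(Spelled out, a minimal sketch of that suggestion; x = 2 is just an illustrative value:)
x <- 2
bar <- rep(NA, 5)
for (i in 1:5) {
    bar[i] <- x^i * exp(-x)   # the i-th foo, evaluated at x
}
bar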
Dear R-help:
I am trying to plot the cumulative distribution function of a
vector of around 33 million numeric observations.
> plot.ecdf(myDataVector)
R has been non-responsive for about an hour, and my guess is that it's
probably not going to finish.
Does anybody have a sense whether this
Tena koe Matt
I tend to create a .bat file with one line:
R\R-Current\bin\R CMD BATCH yourScript.R
where you replace R\R-Current\bin\R with the path to your R, and the
.bat file is in the same folder as yourScript.R.
There may be better ways, and doubtless someone will enlighten us both
if ther
On 03/05/2010 04:11 AM, Bert Gunter wrote:
Folks:
Rolf's (appropriate, in my view) response below seems symptomatic of an
increasing tendency of posters to hide their identities with pseudonyms and
fake headers. While some of this may be due to identity paranoia (which I
think is overblown for t
a quick google of "fminsearch in R"
resulted in this
http://www.google.com/search?q=fminsearch+in+R&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
Take a look. There appears to be a function called optim that you can look
at:
http://sekhon.berkeley.edu/stats/html/optim.htm
Hi,
I need to be able to run an R script by double-clicking the file name in
Windows. I've tried associating the .r extension with the different R
.exe's in /bin but none seems to work. Some open R then close right
away, and Rgui.exe gives the message ARGUMENT "/my/file.r" __ignored__
before
On 03/04/2010 10:30 PM, mahalakshmi sivamani wrote:
Hi all,
I have one query.
I have a list of .cel files. In my program I have to mention the path of
these .cel files.
part of my program is,
rna.data<-exprs(justRMA(filenames=file.names, celfile.path=*datadir*,
sampleNames=sample.names, p
On Mar 4, 2010, at 4:45 PM, Kindra Martinenko wrote:
I posted a similar question, but feel it needs a bit more elaboration.
I have a data frame (read from a csv file) that may have missing
rows. Each
day has 7 time intervals associated with it, with a range from 17:00
hrs to
18:00 hrs in
I posted a similar question, but feel it needs a bit more elaboration.
I have a data frame (read from a csv file) that may have missing rows. Each
day has 7 time intervals associated with it, with a range from 17:00 hrs to
18:00 hrs in 10 minute bins.
What I am looking for is a script that will
Hi,
the help page for arima() suggests looking at predict.Arima(), so take a
look at ?predict.Arima(). You will probably not use the coefficients,
but just feed it the output from arima(). And take a look at
auto.arima() in the forecast package.
HTH
Stephan
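(A hedged sketch of that advice; 'x' stands in for the original poster's series
and the order below is arbitrary:)
fit <- arima(x, order = c(1, 0, 1))   # or whatever auto.arima() selected
predict(fit, n.ahead = 12)            # point forecasts plus standard errors
library(forecast)                     # alternatively
fit2 <- auto.arima(x)
forecast(fit2, h = 12)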
testuser schrieb:
I would like
Hello,
Is it possible to use 'deriv' when the expression itself is dynamic? I have
data where the conditional mean is time-varying (like a GARCH(1,1) model) as
mu_{t} = omega1 + alpha1*N_{t-1} + beta1*mu_{t-1}.
The parameter vector is c("omega1", "alpha1", "beta1") and N_t is the
observatio
I need to update posterior dist function upon the coming results and
find the posterior mean each time.
On Mar 4, 1:31 pm, jim holtman wrote:
> What exactly are you trying to do? 'foo' calls 'foo' calls 'foo'
> How did you expect it to stop the recursive calls?
>
> On Thu, Mar 4, 2
I would like to know how to use the coefficients generated by the ARIMA model
to predict future values. What formula should be used with the coefficients
to determine the future values?
Thanks
Have you considered reading the file in as binary/raw, finding the
offending character, replacing it with a blank (or whatever), and then
writing the file back out? You can then probably process it
using read.table.
On Thu, Mar 4, 2010 at 12:50 PM, jonas garcia
wrote:
> Thank you so much for y
I would help, but I don't know matlab.
Stephen
On Thu, Mar 4, 2010 at 2:50 PM, hussain abu-saaq wrote:
>
> How can I write this MATLAB code in R:
>
>
> options=optimset('TolFun',1e-9,'TolX',1e-9,'MaxIter',1e8,'MaxFunEvals',1e8);
> c=c/2;
> [alpha, delta, epsilon, nofcup] = ustrs(set_date,ma
How can I write this MATLAB code in R:
options=optimset('TolFun',1e-9,'TolX',1e-9,'MaxIter',1e8,'MaxFunEvals',1e8);
c=c/2;
[alpha, delta, epsilon, nofcup] = ustrs(set_date,mat_date);
y = fminsearch('pbond',.15,options,p,c,nofcup,delta/epsilon);
y = 200*y;
Note
pbond is a function in Mat
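(The message is cut off; a rough, untested sketch of an R translation using optim(),
assuming pbond() has been rewritten as an R function of the rate alone with the other
quantities fixed beforehand. fminsearch performs a Nelder-Mead search, which is
optim()'s default method:)
obj <- function(r) pbond(r, p, c, nofcup, delta / epsilon)
fit <- optim(par = 0.15, fn = obj, method = "Nelder-Mead",
             control = list(reltol = 1e-9, maxit = 1e8))
y <- 200 * fit$par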
On 04-Mar-10 19:27:16, Bernardo Rangel Tura wrote:
> On Thu, 2010-03-04 at 11:15 -0500, Jacob Wegelin wrote:
>> The purpose of this email is to
>>
>> (1) report an example where fisher.test returns p > 1
>> (2) ask if there is a reliable way to avoid p>1 with fisher.test.
>>
>> If one has designe
Well, the HeadSlap package would of course require the esp package so that it
could tell the difference between someone doing something clever and someone
doing something because "everyone else does".
For example, user 1 calls the pie function, HeadSlap using esp finds out that
user 1 will also
Here is another approach:
x <- rnorm(100)
tmp <- hist(x, plot=FALSE)                    # compute the histogram without drawing it
plot(tmp, col='blue')                         # draw the full histogram in blue
tu <- par('usr')                              # current plot-region limits
par(xpd=FALSE)                                # clip all drawing to the plot region
clip( tu[1], mean(x) - sd(x), tu[3], tu[4] )  # restrict drawing to left of mean - sd
plot(tmp, col='red', add=TRUE)                # overpaint that part in red
clip( mean(x) + sd(x), tu[2], tu[3], tu[4] )  # restrict drawing to right of mean + sd
plot(tmp, col='red', add=TRUE)
--
Gregory (Greg)
On 04-Mar-10 13:35:42, Duncan Murdoch wrote:
> On 04/03/2010 7:35 AM, (Ted Harding) wrote:
>> On 04-Mar-10 10:50:56, Petr PIKAL wrote:
>> > Hi
>> >
>> > r-help-boun...@r-project.org napsal dne 04.03.2010 10:36:43:
>> >> Hi R Gurus,
>> >>
>> >> I am trying to figure out what is going on here.
>> >
On 04.03.2010 20:08, Seeker wrote:
Here is the test code.
foo<-function(x) exp(-x)
for (i in 1:5)
{
foo<-function(x) foo(x)*x
foo(2)
Hmmm, when do you think the evaluation stops? Your recursion has
infinite depth.
If you cannot get the recursion right (and even if you can): Try to g
What exactly are you trying to do? 'foo' calls 'foo' calls 'foo'
How did you expect it to stop the recursive calls?
On Thu, Mar 4, 2010 at 2:08 PM, Seeker wrote:
> Here is the test code.
>
> foo<-function(x) exp(-x)
> for (i in 1:5)
> {
> foo<-function(x) foo(x)*x
> foo(2)
> }
>
> The erro
On Thu, 2010-03-04 at 11:15 -0500, Jacob Wegelin wrote:
> The purpose of this email is to
>
> (1) report an example where fisher.test returns p > 1
>
> (2) ask if there is a reliable way to avoid p>1 with fisher.test.
>
> If one has designed one's code to return an error when it finds a
> "nons
Corey,
Thanks for the quick reply.
I can't give any sample code as I don't know how to code this in R.
That's why I tried to pass along some pseudo code.
I'm looking for the best "beta" that maximize likelihood over all the
groups. So, while your suggestion is close, it isn't quite what I need.
Here is the test code.
foo<-function(x) exp(-x)
for (i in 1:5)
{
foo<-function(x) foo(x)*x
foo(2)
}
The error is "evalution nested too deeply". I tried Recall() but it
didn't work either. Thanks a lot for your input.
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf Of sjaffe
> Sent: Thursday, March 04, 2010 10:59 AM
> To: r-help@r-project.org
> Subject: Re: [R] counting the number of ones in a vector
>
>
> I got tired of writing length(which()
Try this:
D <- as.numeric(gsub("[[:punct:]]", "", D))
On Thu, Mar 4, 2010 at 2:12 PM, LCOG1 wrote:
>
> Basic question, looked through the forum and documentation but didnt see a
> solution.
>
> So consider
>
> O<-c(1:20)
> D<-c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:"
On Mar 4, 2010, at 12:12 PM, LCOG1 wrote:
Basic question, looked through the forum and documentation but didn't see a
solution.
So consider
O<-c(1:20)
D<-c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:","14:","15:","16:",
"17:","18:","19:","20:")
Time<-c(
Is there an easy way to do two things:
I have a dataframe with headers and 18 columns
I want to plot all the columns on same y axis plot(df) does this.
BUT
1. There is no legend, the legend function seems pedantic - surely there
must be an easy way to just pick up my headers?
2. How do I vary
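(The question is cut off; a hedged sketch using matplot(), which draws every column
against the row index and builds the legend from the column names. 'df' is the
poster's 18-column data frame:)
matplot(df, type = "l", lty = 1, col = seq_len(ncol(df)), ylab = "value")
legend("topright", legend = names(df), col = seq_len(ncol(df)), lty = 1, cex = 0.7)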
Basic question, looked through the forum and documentation but didn't see a
solution.
So consider
O<-c(1:20)
D<-c("1:","2:","3:","4:","5:","6:","7:","8:","9:","10:","11:","12:","13:","14:","15:","16:",
"17:","18:","19:","20:")
Time<-c(51:70)
AveTT<-data.frame(O,D,Time)
I would like to remove
Thank you so much for your reply.
I can identify the characters very easily in a couple of files. The reason I
am worried is that I have thousands of files to read in. The files were
produced in a very old MS-DOS software that records information on
oceanographic data and geographic position dur
Hi there,
I need a MLE of a covariance matrix under the constraint that particular
elements of the inverse covariance matrix are zero. I can't find any
function/package that'd do that. Any suggestions?
Jason
I got tired of writing length(which()) so I define a useful function which I
source in my .Rprofile:
count <- function( x ) length(which(x))
Then:
count( x == 1 )
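(A side note, not from the original poster: since logical values sum as 0/1, the
same count can be written without which():)
sum(x == 1)                 # same result as count(x == 1)
sum(x == 1, na.rm = TRUE)   # if x may contain NAs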
OK, I got it figured out. I was not keying into a length greater than 1, so:
# I added this object and placed it into the ifelse statement:
lid <- sum(match(id, st[i], nomatch = 0))
out$var.g[i]<-ifelse(lid ==1, meta$var.g[id==st[i]],
aggs(g=g[id==st[i]],
If I remember correctly, the order in which you specify x, y, and z matters for
wireframe, so you may want to try rotating by x first, then z.
You may also find the rotate.wireframe function in the TeachingDemos package
(make sure that you have also loaded the tcltk package) useful in finding th
Patrick,
1. Implicit intercepts. Implicit intercepts are not too bad for the main
model, but they creep in occasionally in strange places where they might not
be expected. For example, in some of the variance structures specified in
lme, (~x) automatically expands to (~1+x). Venables said in th
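(A small illustration of the implicit intercept in ordinary model formulas; the
data are made up:)
y <- rnorm(10); x <- rnorm(10)
coef(lm(y ~ x))       # an intercept is added implicitly: same as y ~ 1 + x
coef(lm(y ~ 0 + x))   # intercept suppressed (equivalently, y ~ x - 1)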
Hi R users,
Does anyone know if Lo's modified R/S statistic is implemented in R?
Thank you very much,
Alexandra Almeida
--
Alexandra R M de Almeida
On Mar 4, 2010, at 12:50 PM, jonas garcia wrote:
> Thank you so much for your reply.
>
> I can identify the characters very easily in a couple of files. The
> reason I am worried is that I have thousands of files to read in.
> The files were produced in a very old MS-DOS software that records
Ben Bolker wrote :
The dispersion parameter depends on the Pearson residuals,
not the deviance residuals (i.e., scaled by expected variance).
I haven't checked into this in great detail, but the Pearson
residual of your first data set is huge, probably because
the fitted value is tiny (and hence
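(For reference, a sketch of the Pearson-based dispersion estimate used for quasi
families; 'fit' stands for the fitted glm object:)
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)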
> Jacob Wegelin
> on Thu, 4 Mar 2010 11:15:51 -0500 (EST) writes:
> The purpose of this email is to
> (1) report an example where fisher.test returns p > 1
> (2) ask if there is a reliable way to avoid p>1 with
> fisher.test.
> If one has designed one's code to
Hi R-users,
I have a question related to permutations in R.
I started learning something about permutation/randomization tests using
Edgington and Onghena (2007) and I tried replicating some of the examples in
the book (I have also seen the R packages that concern permutation), but
at
Yes, David is absolutely right - that's it!
On Thu, Mar 4, 2010 at 12:21 PM, Dimitri Liakhovitski wrote:
> Yes, I was indeed asking about stepwise procedures!
>
> On Thu, Mar 4, 2010 at 11:29 AM, David Winsemius
> wrote:
>>
>> On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:
>>
>>> I am
Yes, I was indeed asking about stepwise procedures!
On Thu, Mar 4, 2010 at 11:29 AM, David Winsemius wrote:
>
> On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:
>
>> I am not sure if this question has been asked before - but is there a
>> procedure in R (in lm or glm?) that is equivalent
Folks:
Rolf's (appropriate, in my view) response below seems symptomatic of an
increasing tendency of posters to hide their identities with pseudonyms and
fake headers. While some of this may be due to identity paranoia (which I
think is overblown for this list), I suspect that a good chunk of it
On Mar 4, 2010, at 5:47 AM, Huyen Quan wrote:
Dear sir/madam
My name is Quan; I am a PhD student in Korea. My major is Hydrology in
Water Resources Engineering. I am interested in the Extremes Toolkit
model, and I learned of you from information on the internet.
I installed this model successfully bu
On Mar 4, 2010, at 10:47 AM, Dimitri Liakhovitski wrote:
I am not sure if this question has been asked before - but is there a
procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
regression commands in SPSS?
Thanks a lot!
I haven't used SPSS for 25 years (excluding a brief s
Oops, forgot the example data used in previous post:
set.seed(133)
Dat <- data.frame(y=rnorm(10),
x1=rnorm(10),
x2=rnorm(10),
z1=rnorm(10),
z2=rnorm(10),
                  grp = factor(c(rep("a", 3), rep("b", 4), rep("c", 3))))
Hi Michael,
I don't think Dimitry was asking for stepwise procedures, but rather
how to add sets of variables to a model, for example to see if set B
predicts over and above some standard set A.
-Ista
On Thu, Mar 4, 2010 at 11:14 AM, Michael Conklin
wrote:
> I bet you stirred the pot here becaus
Hi Dimitri,
It works a bit differently:
## The SPSS way:
compute dum1 = 0.
compute dum2 = 0.
if(grp = "b") dum1 = 1.
if(grp = "c") dum2 = 1.
exe.
regression
/var = y x1 x2 z1 z2 grp
/des = def
/sta = def zpp cha tol f
/dep = y
/met = enter x1 x2
/met = enter z1 z2
/met = enter dum1
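(A hedged R analogue of those SPSS 'enter' blocks, using the Dat example data posted
elsewhere in this thread (y, x1, x2, z1, z2, grp); R handles the grp factor itself,
so no manual dummies are needed:)
fit1 <- lm(y ~ x1 + x2, data = Dat)
fit2 <- update(fit1, . ~ . + z1 + z2)
fit3 <- update(fit2, . ~ . + grp)
anova(fit1, fit2, fit3)   # block-by-block comparison of the nested models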
The purpose of this email is to
(1) report an example where fisher.test returns p > 1
(2) ask if there is a reliable way to avoid p>1 with fisher.test.
If one has designed one's code to return an error when it finds a "nonsensical"
probability, of course a value of p>1 can cause havoc.
Examp
I bet you stirred the pot here because you are asking about stepwise
procedures. Look at step, or stepAIC in the MASS library.
\Mike
On Thu, 4 Mar 2010 07:47:34 -0800
Dimitri Liakhovitski wrote:
> I am not sure if this question has been asked before - but is there a
> procedure in R (in lm
Hi All,
I have a character data.frame that contains character columns and date
columns. I've managed to convert some of my character columns to a date
format using as.Date(x, format="%m/%d/%y").
An example of one of my dates is
PROCHIDtDeath icdcucd date_admission1 date_admission_2
CAO00
Hi Matthew,
Sorry, I read your email a bit too late and had already answered the
thread.
You're right about Crantastic, it's a really good thing, and I actually
voted for the packages I really use (even though a little). It was
suggested to me by a r-helper some time ago.
I was also sur
I have partially figured out the problem: "condition" might be some internal
MySQL function/variable and can't be used as a column name directly.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Vladimir Morozov
Sent: Thursday, March
Thank you Dimitris!
I have 3D arrays of the same dimensions, so Reduce worked...
Best,
Eleni
On Thu, Mar 4, 2010 at 5:13 PM, Dimitris Rizopoulos <
d.rizopou...@erasmusmc.nl> wrote:
> do these lists contain 3D arrays of the same dimensions? If yes, then you
> could use
>
> Reduce("+", pred.svm[
Hello,
Since I initiated this discussion some days ago, I discovered a paper that
may be of interest:
ANOVA for unbalanced data: Use Type II instead of Type III sums
of squares
by ØYVIND LANGSRUD
Statistics and Computing 13: 163–167, 2003
Ravi
I am not sure if this question has been asked before - but is there a
procedure in R (in lm or glm?) that is equivalent to ENTER and REMOVE
regression commands in SPSS?
Thanks a lot!
--
Dimitri Liakhovitski
Ninah.com
dimitri.liakhovit...@ninah.com
Hi all,
1) I mostly use the base packages. But if I had to pick three
others, they would be
- plyr: I've just started to use it in some specific cases, but it seems
really powerful and practical
- doBy is also quite good but I use only one function from it
(summaryBy). For now I have what
Hi All,
I am using a specialized aggregation function to reduce a dataset with
multiple rows per id down to 1 row per id. My function works perfectly when
there is more than one id, but alters 'var.g' in undesirable ways when this
condition is not met. Therefore, I have been trying ifelse() statements to
ke
Hi,
Can somebody advise on a weird mysqlWriteTable bug?
> mysqlWriteTable(conn, 'comparison',design2, row.names = F, overwrite=T)
Error in mysqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not run statement: You have an error in your SQL syntax;
check the manual that corresponds t
do these lists contain 3D arrays of the same dimensions? If yes, then
you could use
Reduce("+", pred.svm[[i]])[1,2,5]
otherwise a for-loop will also be clear and efficient, e.g.,
W <- pred.svm[[i]][[1]][1,2,5]
for (j in 2:20) {
W <- W + pred.svm[[i]][[j]][1,2,5]
}
I hope it helps.
Best
On Thu, Mar 4, 2010 at 9:03 AM, S Ellison wrote:
>
>
"John Fox" 02/03/2010 02:19 >>>
>>There's also a serious question about whether one would
>>be interested in main effects defined as averages over the level of
> the
>>other factor when interactions are present.
>
> My personal take on thi
I have seen literature on using a combination of the 'strucchange' and
'segmented' packages to find and fit piecewise linear models. I have been
trying to apply the same methods to quantile regression from the quantreg
package, but am having issues using a "rq" object where the function
assumes
Dear list,
I have some difficulty in manipulating list elements. More specifically, I
am performing svm regression and have a list of lists, called pred.svm. The
elements of the second list are 3D arrays. Thus I have pred.svm[[i]][[j]],
with 1<=i<=5 and 1<=j<=20.
I want to take the sum of the elem
OK, this is simpler. Thanks for helping, guys :)
-
Anna Lippel
Dear Mahalakshmi,
the simplest way to do this, and to avoid your error, is to open R
directly in the folder that contains your .CEL files and then:
data<-ReadAffy() #read all the .CEL files in the working directory
data.rma<-rma(data) #in order to use rma on your .CEL files
hope this helps
2010/
I am sorry, I still don't understand.
In the example you give, the 'assign' vector is a vector of length 6 and there
are indeed 6 columns in the data frame. But the formula only has two variables,
namely 'Month' and 'Wind'. Where do the values of the 'assign' vector come
from? I see '0 1 1 1 1 2'
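(A hedged illustration, assuming the example resembles the airquality data: Month
has five levels, so treatment coding produces four dummy columns, and the 'assign'
attribute maps each column of the model matrix back to the term that generated it:)
m <- model.matrix(~ factor(Month) + Wind, data = airquality)
attr(m, "assign")
# 0 1 1 1 1 2 : 0 = intercept, 1 = the four Month dummy columns, 2 = Wind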
Model-based clustering, e.g. using package mclust will do what you
want: it uses normal densities to calculate similarities of objects to
clusters, which is a monotone transformation of Mahalanobis distance
(basically what's inside the exp() of the multivariate Gaussian
density).
If you believe
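(A minimal sketch of that suggestion; 'mydata' is a placeholder for the numeric
data matrix:)
library(mclust)
fit <- Mclust(mydata)    # fits Gaussian mixture models and selects one by BIC
fit$classification       # cluster assignments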
Dear sir/madam
My name is Quan, and I am a PhD student in Korea. My major is Hydrology in Water
Resources Engineering. I am interested in the Extremes Toolkit model, and I
learned of you from information on the internet.
I installed this model successfully but I didn't know how to make the type of
files in
Hi all,
I have one query.
I have a list of .cel files. In my program I have to mention the path of
these .cel files
part of my program is,
rna.data<-exprs(justRMA(filenames=file.names, celfile.path=*datadir*,
sampleNames=sample.names, phenoData=pheno.data,
cdfname=cleancdfname(hg18_Affymet
For the record: this was a bug in e1071 (which is already fixed; a new
version is on CRAN).
Best
David
Steve Lianoglou wrote:
Hi,
On Wed, Mar 3, 2010 at 4:08 AM, Häring, Tim (LWF)
wrote:
(...)
While you're sending your bug report to David, perhaps you can try the
SVM from kernlab.
It relies
Hi Carlos,
Take a look at ?cut, ?ifelse and ?transform for some ideas. Also, the
function recode in car might help.
HTH,
Jorge
On Thu, Mar 4, 2010 at 7:35 AM, Carlos Guerra <> wrote:
> Dear all,
>
> I have a table like this:
>
> > a <- read.csv("test.csv", header = TRUE, sep = ";")
> > a
>
>
Try this:
a$pUrb_class <- cut(a$pUrb, c(-Inf, 20, 40, 60, Inf), labels = 1:4)
On Thu, Mar 4, 2010 at 11:11 AM, Carlos Guerra
wrote:
> Dear all,
>
> I have a table like this:
>
> a <- read.csv("test.csv", header = TRUE, sep = ";")
> a
>
> UTM pUrb pUrb_class
> 1
On Mar 4, 2010, at 7:41 AM, Ashta wrote:
In a histogram, is it possible to have different colors?
Example. I generated
x <- rnorm(100)
hist(x)
I want the histogram to have different colors based on the following
condition
mean(x)+sd(x) with red color and mean(x) - sd(x) with red c
Dear all,
I have a table like this:
a <- read.csv("test.csv", header = TRUE, sep = ";")
a
  UTM     pUrb       pUrb_class
1 NF1885  20,160307  NA
2 NF1886  51,965649  NA
3 NF1893  26,009581  NA
4 NF1894   3,1414