Thanks for your help Duncan and Peter! I ended up using a combination of
your suggestions. I used Duncan's y.limits ratio of two plots with
differing types (percent and density) to provide me with a scale variable.
I then used this in Peter's dnorm_scaled function and called it using
panel.mat
The following report by the authors of the randomForest package describes two
different algorithm modifications for using random forests to learn classifiers
for "unbalanced" learning problems in which one class is much less frequent
than the other (in 2-class problems). These two variations a
On Wed, May 7, 2014 at 2:21 AM, David R Forrest wrote:
> It sounds as if your underlying MySQL database is too slow for your purposes.
> Whatever you layer on top of it will be constrained by the underlying
> database. To speed up the process significantly, you may need to do work on
> the database backend part of the process.
On May 6, 2014, at 2:51 PM, Tom Walker wrote:
> Hi,
>
> I need to generate bar charts where the x-axis is a factor that
> includes a mixture of species names (in italic) and control treatments
> (in plain text).
>
> I would like this to be represented in the contents of the axis
> labels, meaning that I need the x-axis to include both italic and
> plain text.
Hi,
I need to generate bar charts where the x-axis is a factor that
includes a mixture of species names (in italic) and control treatments
(in plain text).
I would like this to be represented in the contents of the axis
labels, meaning that I need the x-axis to include both italic and
plain text.
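A minimal sketch with base graphics (the species names are placeholders, not
from the post): plotmath expressions passed as axis labels render the species
in italic while the control treatment stays plain:

library(graphics)
heights <- c(4, 7, 5)
labs <- expression(italic("Species A"), italic("Species B"), "Control")
mids <- barplot(heights)                      # returns the bar midpoints
axis(1, at = mids, labels = labs, tick = FALSE)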
Ashis Deb gmail.com> writes:
>
> Hi all, I have made a package in R-3.0.3, and it's running well;
> my issue is that it is not running in other versions of R, like
> R-3.0.2/3.0.1; it shows an error like ---
>
> Error: This is R 3.0.2, package ‘xxx’ needs >= 3.0.3
>
> Does
Niloofar.Javanrouh yahoo.com> writes:
>
>
> Hello,
> I want to differentiate L with respect to b,
> when:
>
> L = k*ln(k/(k+mu)) + sum(y) * ln(1 - k/(mu+k))
> # (negative binomial log-likelihood)
> and
> ln(mu/(mu+k)) = a + b*x  # link function
>
> how can i do it in R?
> thank you.
>
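A minimal sketch of one way to do this in R with D(): solving the link for mu
gives mu = k*exp(a+b*x)/(1 - exp(a+b*x)); substituting that into the
log-likelihood for a single observation y (the substitution is my assumption,
not from the post) lets D() differentiate symbolically with respect to b:

L <- expression(k * log(k / (k + k * exp(a + b * x) / (1 - exp(a + b * x)))) +
                y * log(1 - k / (k * exp(a + b * x) / (1 - exp(a + b * x)) + k)))
dLdb <- D(L, "b")                                 # symbolic derivative in b
eval(dLdb, list(k = 2, a = 0.1, b = 0.5, x = 1, y = 3))  # numeric value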
On 06/05/2014 2:09 PM, Göran Broström wrote:
A thread on r-devel ("Historical NA question") went (finally) off-topic,
heading towards "Precedence". This triggered a question that I think is
better put on this list:
I have more or less regularly been writing programs since the
seventies (Fortran, later C) and I early got the habit of
On 06-May-2014 18:09:12 Göran Broström wrote:
> A thread on r-devel ("Historical NA question") went (finally) off-topic,
> heading towards "Precedence". This triggered a question that I think is
> better put on this list:
>
> I have more or less regularly been writing programs since the
> seventies (Fortran, later C) and I early got the habit of
Does
values(r) <- as.factor(1:ncell(r))
do what you want?
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
B
A thread on r-devel ("Historical NA question") went (finally) off-topic,
heading towards "Precedence". This triggered a question that I think is
better put on this list:
I have more or less regularly been writing programs since the
seventies (Fortran, later C) and I early got the habit of
Hello everyone,
I was wondering how I can solve the following conversion problem with a
raster file: when I try to convert the values of the raster (r) from
numeric into a factor via as.factor(r), the error always appears: "Error
in 1:ncol(r) : argument of length 0".
r <- raster(ncol=5, nrow=
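A minimal sketch, assuming the raster package: either assign the values
directly (as David L Carlson suggests above) or use ratify() to declare the
layer categorical; calling as.factor() on the RasterLayer itself is what
triggers the error:

library(raster)
r <- raster(ncol = 5, nrow = 5)
values(r) <- 1:ncell(r)   # fill the cells with numeric values
r <- ratify(r)            # declare the layer categorical (builds a RAT)
levels(r)[[1]]            # inspect the raster attribute table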
At 14:23 06/05/2014, Viechtbauer Wolfgang (STAT) wrote:
Without the sample size of a study (i.e., either
the group sizes or the total sample size), you
cannot convert the p-value to a t-value or a
t-value to a d-value. And for studies where you
have the d-value but no sample size, you cannot
On May 5, 2014, at 11:42 AM, Timothy W. Cook wrote:
> I didn't find an attached XML file. Maybe the list removes attachments?
The list does not remove all attachments. It removes ones that are not among
the listed acceptable formats. XML is not among the list of acceptable formats.
If it had b
I believe this discussion should be taken offlist as it no longer
seems to be concerned with R.
-- Bert Gunter
Bert Gunter
Genentech Nonclinical Biostatistics
(650) 467-7374
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
H. Gilbert Welch
On Tu
Thank you very much for your illustration, Wolfgang! It helped me a lot.
And also thank you for the package-hint, Michael!
Now, I have re-checked the respective studies, and there still are a couple
of studies left, only stating cohens d, and the respective t-value and
p-value - sample and group s
In R, all numbers (integer, float, double precision) are treated as double
precision. Therefore, they carry only about 16 significant figures. To achieve
higher precision, you have to use a package that supports long integers.
On Sunday, May 4, 2014 at 8:44:04 PM UTC+8, ARTENTOR Diego Tentor wrote:
>
> Trying algorithm for products with
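A minimal sketch of the precision limit described above, plus one workaround
(assuming the gmp package is what's wanted for exact long-integer products):

print(2^53, digits = 17)   # 9007199254740992: doubles are exact only up to here
(2^53 + 1) == 2^53         # TRUE: one more unit is lost in double precision
library(gmp)
as.bigz(2)^200             # exact arbitrary-precision integer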
Thanks for the reply Don and Frede,
Your suggestions works perfectly!
Best
Adel
Thanks Jim. Roopa
---
Roopa Shree Subbaiaih
Post Doctoral Fellow
Department of Dermatology
School of Medicine
Case Western Reserve University
Cleveland, OH-44106
Tel:+1 216 368 0211
On Mon, May 5, 2014 at 6:02 PM, Jim Lemon wrote:
> On 05/06/2014 05:02 AM, R
The dataset is not large by database standards. Even in mySQL - not known
for its speed at multi-row querying - the queries you describe should
complete within a few seconds on even moderately recent hardware if your
indexes are reasonable.
What are your performance criteria for processing these
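A minimal sketch of checking that advice from R, assuming the DBI and RMySQL
packages and hypothetical database, table, and column names:

library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "mydb")
dbExecute(con, "CREATE INDEX idx_field ON mytable (field)")  # index the queried column
system.time(
  res <- dbGetQuery(con, "SELECT field, COUNT(*) FROM mytable GROUP BY field")
)
dbDisconnect(con)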
Exactly,
which is why I am looking for something faster :-)-O
el
on 2014-05-06, 15:21 David R Forrest said the following:
> It sounds as if your underlying MySQL database is too slow for your
> purposes. Whatever you layer on top of it will be cons
It sounds as if your underlying MySQL database is too slow for your purposes.
Whatever you layer on top of it will be constrained by the underlying database.
To speed up the process significantly, you may need to do work on the database
backend part of the process.
Dave
On May 6, 2014, at 7
Without the sample size of a study (i.e., either the group sizes or the total
sample size), you cannot convert the p-value to a t-value or a t-value to a
d-value. And for studies where you have the d-value but no sample size, you
cannot compute the corresponding sampling variance. So, without ad
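A minimal sketch of the conversions this refers to, for a two-group design
where the group sizes n1 and n2 are known (the formulas are the standard
ones; the numbers are made up):

n1 <- 20; n2 <- 25
p  <- 0.03
t  <- qt(1 - p/2, df = n1 + n2 - 2)              # two-sided p-value -> |t|
d  <- t * sqrt(1/n1 + 1/n2)                      # t -> Cohen's d
vd <- (n1 + n2)/(n1*n2) + d^2/(2*(n1 + n2))      # sampling variance of d
c(t = t, d = d, var_d = vd)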
Thanks,
tried all of that, too slow.
el
on 2014-05-06, 12:00 Gabor Grothendieck said the following:
> On Tue, May 6, 2014 at 5:12 AM, Dr Eberhard Lisse wrote:
>> Jeff
>>
>> It's in MySQL, at the moment roughly 1.8 GB, if I pull it into a
>> dataframe it saves to 180MB. I work from the dataframe
On Tue, May 6, 2014 at 5:12 AM, Dr Eberhard Lisse wrote:
> Jeff
>
> It's in MySQL, at the moment roughly 1.8 GB, if I pull it into a
> dataframe it saves to 180MB. I work from the dataframe.
>
> But, it's not only a size issue it's also a speed issue and hence I
> don't care what I am going to use
On 06/05/2014, 1:19 AM, Ashis Deb wrote:
Hi all, I have made a package in R-3.0.3, and it's running well;
my issue is that it is not running in other versions of R, like
R-3.0.2/3.0.1; it shows an error like ---
Error: This is R 3.0.2, package ‘xxx’ needs >= 3.0.3
The onl
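The usual cause (an assumption here, since the reply is cut off) is the
Depends field in the package's DESCRIPTION file; lowering the declared
minimum and rebuilding lets the package install on older versions of R:

Depends: R (>= 3.0.1)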
David,
this is quite slow :-)-O
el
on 2014-05-06, 10:55 David McPearson said the following:
[...]
> It seems like you are trying to extract a (relatively) small data set from a
> much larger SQL database. Why not do the SQL stuff in the database and the
> analysis (stats, graphics...) in R? Maybe
On Tue, 6 May 2014 10:12:50 +0100 Dr Eberhard Lisse wrote
> Jeff
>
> It's in MySQL, at the moment roughly 1.8 GB, if I pull it into a
> dataframe it saves to 180MB. I work from the dataframe.
>
> But, it's not only a size issue it's also a speed issue and hence I
> don't care what I am going to
On 05/06/2014 07:07 PM, Babak Bastan wrote:
Hi experts
I would like to change my x-axis. Like this: 10,...,2,...,1
I am using this code:
r<-c(1:10)
plot(r, axes=FALSE, frame.plot=TRUE,xlim=c(10,1))
axis(1,at=10/seq(1:10))
axis(2, at=axTicks(2), axTicks(2))
but my x-axis is still: 1,..., 2,...,10
On Tue, 6 May 2014 09:07:55 + Babak Bastan wrote
> Hi experts
>
> I would like to change my x-axis. Like this: 10,...,2,...,1
>
> I am using this code:
>
> r<-c(1:10)
> plot(r, axes=FALSE, frame.plot=TRUE,xlim=c(10,1))
> axis(1,at=10/seq(1:10))
> axis(2, at=axTicks(2), axTicks(2))
>
> but
Hi,
Yes, dplyr syntax is quite equivalent to SQL, although it is faster.
Another alternative you could consider is *data.table*, which has a
syntax very similar to the way you select a subset within a data.frame and in
terms of performance is (a bit) faster than sqldf.
You can get some idea of
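A minimal sketch of the same aggregation in both syntaxes, on a made-up table:

library(data.table)
library(dplyr)
dt <- data.table(grp = sample(letters[1:3], 1e6, TRUE), x = rnorm(1e6))
dt[, .(mean_x = mean(x)), by = grp]                       # data.table
as.data.frame(dt) %>% group_by(grp) %>% summarise(mean_x = mean(x))  # dplyr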
Jeff
It's in MySQL, at the moment roughly 1.8 GB, if I pull it into a
dataframe it saves to 180MB. I work from the dataframe.
But, it's not only a size issue it's also a speed issue and hence I
don't care what I am going to use, as long as it is fast.
sqldf is easy to understand for me but it ta
Hi experts
I would like to change my x-axis. Like this: 10,...,2,...,1
I am using this code:
r<-c(1:10)
plot(r, axes=FALSE, frame.plot=TRUE,xlim=c(10,1))
axis(1,at=10/seq(1:10))
axis(2, at=axTicks(2), axTicks(2))
but my x-axis is still: 1,..., 2,...,10
What should I do?
[[alternative HTML version deleted]]
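A minimal sketch of one fix: with xlim = c(10, 1) the axis is already
reversed, so plain tick positions suffice (the division by seq(1:10) in the
original code is what produced the unevenly spaced ticks):

r <- 1:10
plot(r, axes = FALSE, frame.plot = TRUE, xlim = c(10, 1))
axis(1, at = 1:10)                   # reads 10, 9, ..., 1 left to right
axis(2, at = axTicks(2), labels = axTicks(2))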
In what format is this "growing" data stored? CSV? SQL? Log textfile? You say
you don't want to use sqldf, but you haven't said what you do want to use.
---
Jeff Newmiller
Thank you.
My requirements are that simple. One table, 11 fields, of which 3 are
interesting, 30 Million records, growing daily by between 30.
And, yes I have spent an enormous amount of time reading these things,
but for someone not dealing with this professionally and/or on a daily
basis, t
This illustrates why you really do not want percents... (I never quite
understand why people do want them - I can understand raw counts when used in
teaching as a precursor to the concept of a density, but percentages are an odd
in-between sort of thing.)
Anyways, the scaling factor is the bin
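A minimal sketch of that scaling in lattice, assuming a fixed bin width bw:
with type = "percent" the bar heights equal 100 * bw * density, so the
overlaid normal curve needs the same factor:

library(lattice)
x  <- rnorm(500)
bw <- 0.5
histogram(~ x, type = "percent", breaks = seq(-4, 4, by = bw),
          panel = function(x, ...) {
            panel.histogram(x, ...)
            xs <- seq(-4, 4, length.out = 200)
            panel.lines(xs, 100 * bw * dnorm(xs, mean(x), sd(x)))
          })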
Tim - the file is a hyperlink at the beginning of the message called
'sample.xml' or here's the hyperlink
http://r.789695.n4.nabble.com/file/n4689883/sample.xml
[.]
Sorry to prolong this thread, but I'm a bit astonished.
'bc' was a really great tool when it was created (1975, at
Bell Labs, according to Wikipedia) and made available, open
source, eventually, and I was fond of it at the time.
On the other hand, we have had