On Apr 26, 2009, at 12:28 AM, Esmail wrote:
Hello all,
I have the following function call to create a matrix of POP_SIZE rows
and fill it with bit strings of size LEN:
pop=create_pop_2(POP_SIZE, LEN)
Are you constructing a vector or a matrix? What are the dimensions of
your matrix?
http://search.r-project.org/cgi-bin/namazu.cgi?query=%22constrained+optimization%22&max=100&result=normal&sort=score&idxname=functions&idxname=Rhelp08
And that is only the help messages from the last two years.
On Apr 26, 2009, at 12:00 AM, wrote:
Is there any R package addressing problems
On Apr 25, 2009, at 6:57 PM, reneepark wrote:
Hello,
I am using Dr. Harrell's Design package to make a nomogram. I was able to
make a beautiful one without stratifying; however, I will need to stratify
to meet PH assumptions. This is where I go wrong, but I'm not sure where.
Non-Stra
Hello all,
I have the following function call to create a matrix of POP_SIZE rows
and fill it with bit strings of size LEN:
pop=create_pop_2(POP_SIZE, LEN)
I have 3 questions:
(1) If I did
keep_pop[1:POP_SIZE] <- pop[1:POP_SIZE]
to keep a copy of the original data structure before
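create_pop_2 itself isn't shown in the thread; purely as a hedged sketch (example sizes, one plausible shape for such a function), with <- used for the copy:
POP_SIZE <- 10; LEN <- 8                       # example sizes
create_pop_2 <- function(pop_size, len) {
  # one random bit string of length len per row
  matrix(sample(0:1, pop_size * len, replace = TRUE), nrow = pop_size)
}
pop <- create_pop_2(POP_SIZE, LEN)
keep_pop <- pop        # <- (or =) copies; == would only compare element-wise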
Is there any R package addressing problems of constrained optimization?
I have the following "apparently" simple problem:
Given a set V with fixed cardinality: nv
Given a set S whose cardinality is a parameter: nHat
Let the cardinality of the intersection S.and.V be:
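The question above is cut off, so this is not specific to it; as a starting point, a minimal constrOptim() sketch (stats package) for a linearly constrained minimization:
# minimize a quadratic subject to x1 >= 0, x2 >= 0, x1 + x2 <= 4
f  <- function(x) sum((x - c(2, 3))^2)
ui <- rbind(c(1, 0), c(0, 1), c(-1, -1))
ci <- c(0, 0, -4)
constrOptim(theta = c(1, 1), f = f, grad = NULL, ui = ui, ci = ci)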
Hello,
I am using Dr. Harrell's Design package to make a nomogram. I was able to
make a beautiful one without stratifying; however, I will need to stratify
to meet PH assumptions. This is where I go wrong, but I'm not sure where.
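Not the poster's code; a hedged sketch of the general pattern for a stratified cph() fit and its nomogram with the Design package (the data set and variable names are made up, and the fun/stratum details depend on the actual model):
library(Design)                        # Design is the predecessor of rms
dd <- datadist(mydata); options(datadist = "dd")
fit <- cph(Surv(futime, death) ~ age + chol + strat(sex),
           data = mydata, surv = TRUE)
survfun <- Survival(fit)               # survival-probability function of the fit
nom <- nomogram(fit,
                fun = function(lp) survfun(365, lp, stratum = 1),
                funlabel = "1-year survival (stratum 1)")
plot(nom)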
David Winsemius wrote:
On Apr 25, 2009, at 9:25 AM, Frank E Harrell Jr wrote:
Emmanuel Charpentier wrote:
On Friday, April 24, 2009 at 14:11 -0700, ToddPW wrote:
I'm trying to use either mice or norm to perform multiple imputation to fill
in some missing values in my data. The data has so
Hi,
I'm using the e1071 package to do fuzzy cluster analysis. My dataset (ra) has
5237 observations and 2 variables - depth and velocity. I used fuzzy cmeans
to create 6 fuzzy classes.
>ra.flcust6 <- cmeans(ra, 6, iter.max=100, verbose=F, dist="euclidean",
+   method="cmeans", m=1.7, rate.par=NULL, weights=1)
The problem arises because your data.frame contains
no factors and plm:::lev2var implicitly assumes that its
input data.frame, x, contains both factor and non-factor
columns. That is because, e.g., the output of
sapply(zeroLongInput, func)
has class NULL, not the class of the output of func()
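A small illustration of that edge case (not the plm code itself, just a made-up data.frame with no factor columns):
df  <- data.frame(x = 1:3, y = rnorm(3))             # no factor columns
fac <- df[, sapply(df, is.factor), drop = FALSE]     # zero-column data.frame
sapply(fac, levels)   # an empty list, not the character vector levels() would give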
On Sat, 25 Apr 2009, Ron Burns wrote:
Dear all-
I am having trouble using the model="ht" option in function plm from the plm
library. I am using
Package: plm Version: 1.1-1; R version 2.8.1 (2008-12-22) running on a FC-8
linux machine.
Here is what I am trying to do:
##--
There are functions for that, see ?unlink and ?file.rename (and several others
on that same page). They can be used to remove or rename the .RData file, but
there are several other ways to remove/rename the file as well. Under Windows
you can click on the File menu, choose "Load Workspace", th
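For the record, the two functions mentioned above are used like this (the file name is the standard .RData in the working directory; adjust the path as needed):
file.rename(".RData", "workspace-backup.RData")   # keep a renamed copy
unlink(".RData")                                  # or delete it outright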
Hi Douglas.
So, do you want to check for correlation or regression?
How many levels does "pre" have?
You could subset the variables you want to check correlation on by the pre
levels.
For example:
Let's say pre has two levels, 1 and 2. Then you can do:
cor(y[pre == 1], x[pre == 1])
cor(y[pre == 2], x[pre == 2])
OK, I found the file and moved and renamed it, and now it doesn't load
anymore.
It's a weird method, though; there should be a command for that purpose.
Hi bogdanno,
Right-click on the R icon (on your desktop) and then click on Properties.
In the window that pops up, go to Target and modify what it shows. On my
computer I have something like
"C:\Program Files\R\R-2.9.0pat\bin\Rgui.exe" --no-save --max-mem-size=2047M
which means that I do not w
Dear all-
I am having trouble using the model="ht" option in function plm from
the plm library. I am using
Package: plm Version: 1.1-1; R version 2.8.1 (2008-12-22) running on a
FC-8 linux machine.
Here is what I am trying to do:
##--
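The poster's code is cut off at "##--"; for comparison, a sketch along the lines of the Hausman-Taylor example in the plm documentation of that era (whether model = "ht" is accepted this way depends on the plm version):
library(plm)
data("Wages", package = "plm")
ht <- plm(lwage ~ wks + south + smsa + married + exp + I(exp^2) +
            bluecol + ind + union + sex + black + ed |
            sex + black + bluecol + south + smsa + ind,
          data = Wages, model = "ht", index = 595)
summary(ht)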
I am using it from Windows XP. I have never used Linux or Unix.
I do not have a terminal window, so --no-restore doesn't work (I
think).
I open R just by clicking on the R icon and a console opens up.
I have tried to move and rename the file, but R still loads it, which
makes me think it was copied so
Hi, thanks for your prompt reply.
In my situation, the dependent variable is "post-test" and the independent
variables are "pre" and "coh".
How would I find the correlation between coh and post with the effect of
"pre" regressed out, using your commands?
Tal Galili wrote:
>
> Hi Douglas
> I would
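Not part of the original exchange; one common way to answer the question above (correlation of coh and post-test with the linear effect of pre removed) is via residuals, sketched here with invented column names and data frame mydata:
r.post <- resid(lm(post ~ pre, data = mydata))
r.coh  <- resid(lm(coh  ~ pre, data = mydata))
cor.test(r.post, r.coh)   # partial correlation of coh and post, given pre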
On Apr 25, 2009, at 9:25 AM, Frank E Harrell Jr wrote:
Emmanuel Charpentier wrote:
On Friday, April 24, 2009 at 14:11 -0700, ToddPW wrote:
I'm trying to use either mice or norm to perform multiple imputation to fill
in some missing values in my data. The data has some missing
values bec
Abdul:
First, read the posting guide. You will find a link to that one at the
bottom of all messages.
Also, read some of the documentation for R. Sorry, there is no way to
avoid that.
A very good source is the R web page (Google for the letter R; the link
will be at the top of the list).
Hi Douglas
I would go for a different command than aov,
something like:
?cor
or
?cor.test
to also get the p-value of the correlation.
Cheers,
Tal
On Sat, Apr 25, 2009 at 8:27 AM, drmh wrote:
>
> (Have searched for this already)
>
> Hi,
>
> How do you find the strength of correlation between tw
heatmap.2() from gplots does not seem to accept a dendrogram produced by
the stats package function heatmap():
>testHeatmap=heatmap(test[,1:40], Colv=NA, col=bluewhitered(256),
+   labRow=testL, keep.dendro=TRUE)
# draws expected image
# to prove this dendro is OK, redraw with same function:
>heat
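The code is cut off above; assuming the goal is to reuse the kept dendrogram in heatmap.2(), one hedged sketch (bluewhitered(), test and testL are the poster's own objects):
library(gplots)
rowDend <- testHeatmap$Rowv      # "dendrogram" object kept by keep.dendro=TRUE
heatmap.2(test[, 1:40], Rowv = rowDend, Colv = NA,
          col = bluewhitered(256), labRow = testL, trace = "none")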
Thank you very much, Professor! This one escaped me (notably because it's a
trifle far from my current interests...).
Emmanuel Charpentier
On Saturday, April 25, 2009 at 08:25 -0500, Frank E Harrell Jr wrote:
> Emmanuel Charpentier wrote:
> > On Friday, April 24
(Have searched for this already)
Hi,
How do you find the strength of correlation between two variables using an
ANOVA table? "Pr(>F)" gives the statistical significance of the
association, but not the strength of the correlation.
See data (from R) below
Readable:
"Df"
Emmanuel Charpentier wrote:
On Friday, April 24, 2009 at 14:11 -0700, ToddPW wrote:
I'm trying to use either mice or norm to perform multiple imputation to fill
in some missing values in my data. The data has some missing values because
of a chemical detection limit (so they are left censored
On Friday, April 24, 2009 at 14:11 -0700, ToddPW wrote:
> I'm trying to use either mice or norm to perform multiple imputation to fill
> in some missing values in my data. The data has some missing values because
> of a chemical detection limit (so they are left censored). I'd like to use
> MI
Stefan's earlier answer is correct!
You now have two options:
1. Uninstall Tinn-R and reinstall it, but do not check the extension
related to LaTeX.
2. Change the file association inside the Windows operating system.
HTH,
JCFaria
On Apr 24, 2009, at 5:22 PM, joy_stat wrote:
Hi,
Can someone tell me which procedure will fit an ordinal logistic regression
model for a longitudinal data set?
To be precise, I have both dichotomous and polytomous items. Also, I
would like to specify different covariance structures (unst
On Apr 25, 2009, at 2:46 AM, Santosh wrote:
Dear R-sians
Quick question...
1) From a flat (data) file with 100+ columns, how do I read specific columns
instead of reading the entire dataset? I am trying to avoid reading the
entire file followed by "subsetting".
If you are asking about h
Dear Adam,
ML is indeed hard-coded into the sem() function. Depending upon its
complexity, modifying the code to use a different "fitting function"
shouldn't be difficult, particularly if you are content not to supply
derivatives for the optimization. Providing for different "fitting
functions" is
On Sat, Apr 25, 2009 at 2:46 AM, Santosh wrote:
> Dear R-sians
> Quick question...
>
> 1) From a flat (data) file with 100+ columns, how do I read specific columns
> instead of reading the entire dataset? I am trying to avoid reading the
> entire file followed by "subsetting".
In read.table you c
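The reply is cut off above; one read.table feature that does this is colClasses, where "NULL" drops a column and NA lets the class be guessed (column positions and file name below are invented):
cls <- rep("NULL", 120)                 # skip everything by default
cls[c(3, 17, 42)] <- NA                 # keep and auto-type only these columns
dat <- read.table("bigfile.txt", header = TRUE, colClasses = cls)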
Rodrigo Aluizio wrote:
Hi List,
I would appreciate any suggestion on how I can make text I've inserted in
a plot show some contrast. By this I mean that I have white text on a
plot and I would like to add a tiny black border around it, so that even
when it is small-sized and the entire graphic b
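The message is cut off; one hedged, package-free way to fake an outline (not from the thread) is to draw the text a few times in black with tiny offsets before drawing it once in white:
outlined.text <- function(x, y, labels, ...) {
  dx <- strwidth("M") / 10; dy <- strheight("M") / 10
  for (i in -1:1) for (j in -1:1)
    if (i != 0 || j != 0) text(x + i * dx, y + j * dy, labels, col = "black", ...)
  text(x, y, labels, col = "white", ...)
}
plot(1:10, type = "n"); rect(1, 1, 10, 10, col = "grey40")
outlined.text(5, 5, "small white label", cex = 0.8)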
Knut Krueger wrote:
Hi to all,
is it possible to show in any way that the point (1,1) exists more than
one time?
e.g.:
f<- data.frame("x"=c(1,3,5,6,1),"y"=c(1,2,3,4,1))
plot(f)
Hi Knut,
Take a look at cluster.overplot and count.overplot in the plotrix package.
Jim
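A hedged sketch of Jim's suggestion (argument details per my reading of the plotrix documentation; f is Knut's data frame from above):
library(plotrix)
f <- data.frame(x = c(1, 3, 5, 6, 1), y = c(1, 2, 3, 4, 1))
count.overplot(f$x, f$y)             # labels the duplicated point with its count
plot(cluster.overplot(f$x, f$y))     # or spread the coincident points slightly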
Greg Snow:
> Look at the pwr package, it has functions for 2 samples of different
> sizes.
>
> Hope this helps,
Great! Thanks.
--
Karl Ove Hufthammer
On Sat, Apr 25, 2009 at 6:56 AM, Duncan Murdoch wrote:
> On 24/04/2009 9:30 PM, greggal...@gmail.com wrote:
>>
>> Dear Sir or Madam:
>>
>> This is an extension to an earlier post, about looping through several
>> thousand files, and testing students' models against a different
>> data-set, called r
On 24/04/2009 9:30 PM, greggal...@gmail.com wrote:
Dear Sir or Madam:
This is an extension to an earlier post, about looping through several
thousand files, and testing students' models against a different
data-set, called rclnow, for "recall now".
The problem is that the instructor never speci
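The original post isn't shown in full; a generic sketch of the looping pattern under discussion (the file pattern, the object name "fit" inside each student file, and the use of rclnow are assumptions):
files <- list.files("models", pattern = "\\.R$", full.names = TRUE)
results <- lapply(files, function(f) {
  e <- new.env()
  sys.source(f, envir = e)                      # run one student's script in isolation
  predict(get("fit", envir = e), newdata = rclnow)
})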
I'm trying to use either mice or norm to perform multiple imputation to fill
in some missing values in my data. The data has some missing values because
of a chemical detection limit (so they are left censored). I'd like to use
MI because I have several variables that are highly correlated. In
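For reference only, a bare-bones mice run using the interface documented in recent versions of the package (it does not address the left-censoring issue the thread is actually about, and variable names are invented):
library(mice)
imp  <- mice(mydata, m = 5)            # 5 imputed data sets
fits <- with(imp, lm(y ~ x1 + x2))     # fit the model in each
pool(fits)                             # combine with Rubin's rules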
Hi,
Can someone tell me which procedure will fit an ordinal logistic regression
model for a longitudinal data set?
To be precise, I have both dichotomous and polytomous items. Also, I
would like to specify different covariance structures (unstructured, ar1
etc) for trial runs.
Thanks
--
V
Dear Graham,
I would be very thankful if you could help me with how to do 1. Wilks' Lambda
tests, 2. box plots, and 3. pooled within-group standardization in discriminant
analysis with 6 variables in R. Actually, I am new to R, but I have an
assignment to analyse some questions only in R and there is drawback for
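Not from the thread; a minimal sketch for points 1 and 2, using a built-in data set in place of the 6-variable data:
fit <- manova(cbind(Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) ~
                Species, data = iris)
summary(fit, test = "Wilks")                  # Wilks' Lambda test
boxplot(Sepal.Length ~ Species, data = iris)  # one box plot per group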
Dear Luc,
I stumbled on your unanswered question only this morning. Sorry for this
lateness...
On Thursday, April 23, 2009 at 15:56 -0400, Luc Villandre wrote:
> Dear R users,
>
> I am having trouble devising an efficient way to run a loess() function
> on all columns of a data.frame (with the x f
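The question is cut off above; a sketch of one straightforward approach (xvec and ydata are made-up names: a common x vector and a data.frame of y columns):
fits     <- lapply(ydata, function(ycol) loess(ycol ~ xvec, span = 0.5))
smoothed <- sapply(fits, predict)     # fitted values at the original x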
Uwe Ligges wrote:
Liang Zhang wrote:
I am just wondering how to solve this installation problem.
As I said, ask your admin to install suitable compilers.
And 'suitable compilers' in this case means installing the
GNU compiler collection gcc 4.x.y.
In contrast to gcc 3.x.y (which only has a F
On Fri, 24 Apr 2009 22:35:05 -0700 (PDT) Roslina Zakaria
wrote:
RZ> I installed the new version of R and Tinn-R and I just wonder why all
RZ> my LaTeX documents changed to the Tinn-R symbol?
Because Tinn-R is also able to edit TeX documents and you probably
missed the default document settings during the
Dear R-sians
Quick question...
1) From a flat (data) file with 100+ columns, how do I read specific columns
instead of reading the entire dataset? I am trying to avoid reading the
entire file followed by "subsetting".
2) Is there a way to call a column of a data frame through a variable? e.g.
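For question 2), a data.frame column can be picked by a character variable (df and myvar are invented names):
myvar <- "depth"
df[[myvar]]     # same column as df$depth
df[, myvar]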