[R] empty cell when running GEE for binary data

2012-07-05 Thread Yue
Hi all,

My data are binary and within-subject correlated; there are three factors of
interest and two of them are within-subject. So, I have considered modelling
the data using a binomial regression with a GEE approach (which can be
achieved using the function geeglm). One problem is that I got some empty
cells, so I can't simply use logit as a link function. I was wondering if
you know any existing R solution for this? 

Also, I tried to use "cauchit" as the link function, but it turns out that
"cauchit" is not available within geeglm. Any ideas on this?
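
For what it's worth, a minimal sketch of the kind of geeglm() call described
above, using simulated data and made-up variable names (geepack assumed
installed):

library(geepack)
## toy data: 20 subjects, two within-subject factors (f1, f2), one between-subject factor (f3)
set.seed(1)
dat <- expand.grid(subject = factor(1:20), f1 = c("a", "b"), f2 = c("c", "d"))
dat$f3   <- ifelse(as.integer(dat$subject) <= 10, "x", "y")
dat$resp <- rbinom(nrow(dat), 1, 0.5)
dat <- dat[order(dat$subject), ]          # geeglm expects clusters to be contiguous
fit <- geeglm(resp ~ f1 * f2 * f3, family = binomial(link = "logit"),
              data = dat, id = subject, corstr = "exchangeable")
summary(fit)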

Many thanks!
Yue



Re: [R] empty cell when running GEE for binary data

2012-07-05 Thread Yue
Hi Ozgur,

Thanks for your reply! By "empty cell" I mean there was a condition in which
all participants showed the same response (say, response A). In other words,
the probability of A, according to the present data, was 100%, which leaves
the non-A cell empty. The odds of A, p(A)/(1 - p(A)), would then be 1/0, so
it is not appropriate to use logit as the link function.

I don't think gee would work either, but I'm not sure; I will try the other
package you suggested!
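
The problem in miniature (a two-line illustration, not from the original post):

p <- 1            # observed proportion in the all-A condition
p / (1 - p)       # Inf -- the odds are 1/0
qlogis(p)         # Inf -- so the log-odds (logit) cannot be estimated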

Thanks!
Yue



[R] separation occurring in clustered data

2012-07-08 Thread Yue
Hi all,

Prior to this post I asked how to deal with binary clustered data when the
crosstab of the variables shows an empty cell. I later figured out that such
an empty cell corresponds to what is called "separation". Fortunately, the R
package "logistf" was developed to handle this problem using a
penalised-likelihood method. However, it seems that logistf is only suitable
when observations are completely independent, so it may be problematic when
used on my clustered data. I was wondering if anyone has had a similar
experience and has any suggestions? Thanks very much!
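
A minimal sketch of the penalised-likelihood fit mentioned above, on toy data
with complete separation (clustering is ignored here, which is exactly the
difficulty raised in this post):

library(logistf)
toy <- data.frame(y = rep(0:1, each = 10), x = c(1:10, 11:20))  # x separates y perfectly
coef(glm(y ~ x, family = binomial, data = toy))  # ML estimate diverges (with warnings)
coef(logistf(y ~ x, data = toy))                 # Firth-penalised estimate stays finite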

Yue



[R] exact logistic regression with correlated data?

2012-07-09 Thread Yue
Hi all,

Does anyone know whether R can do exact logistic regression with correlated
binary data?

Thanks!!

Yue



[R] Firth's penalized likelihood bias reduction approach

2012-07-09 Thread Yue
Hi all,

I have a binary data set and am now confronted with a "separation" issue. I
have two predictors, mood (neutral and sad) and game type (fair and
non-fair). By "separation" I mean that in the non-fair game, 20% (4/20) of
the sad-mood participants gave a positive response (coded as 1), whereas none
of the neutral-mood participants did so (0/20). Thus, a 2x2 table of mood by
response for the non-fair game has an empty cell. I've learned that I can use
Firth's penalized likelihood method for bias reduction, which can be done
with the R packages "brglm" or "logistf". However, I found that these
packages only deal with non-clustered data, which is not the case for my
data: game type is a within-subject variable, mood is a between-subject
variable, and I am interested in their interaction. So, when including the
interaction term as a predictor, I also need to account for within-subject
correlation. Has anyone experienced a similar problem, and how did you solve
it? Any suggestion would be very much appreciated. Thanks very much!
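
The separation described above, reconstructed from the reported counts for the
non-fair game only, with an ordinary logit fit next to a bias-reduced fit from
the "brglm" package mentioned in the post (a sketch, not the poster's actual
data):

library(brglm)
nonfair <- data.frame(
  positive = c(rep(1, 4), rep(0, 16),   # sad mood: 4/20 positive responses
               rep(0, 20)),             # neutral mood: 0/20 -> empty cell
  mood = factor(rep(c("sad", "neutral"), each = 20))
)
summary(glm(positive ~ mood, family = binomial, data = nonfair))    # estimate diverges
summary(brglm(positive ~ mood, family = binomial, data = nonfair))  # finite, bias-reduced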

Best,
Yue



[R] [R-pkgs] interplot: a new package for visualizing interactive effects

2015-07-10 Thread Hu, Yue

Dear colleagues,

We just published a package, "interplot", on CRAN. It can be used to plot the 
conditional coefficients ("marginal effects") of variables included in 
multiplicative interaction terms for various models. Installation 
instructions and more details about the package are available at 
http://cran.r-project.org/web/packages/interplot/vignettes/interplot-vignette.html.
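
A quick illustration of the kind of plot described above, assuming the
interface shown in the package vignette (interplot(m, var1, var2)):

# install.packages("interplot")
library(interplot)
m <- lm(mpg ~ wt * cyl, data = mtcars)       # model with a multiplicative interaction
interplot(m = m, var1 = "cyl", var2 = "wt")  # coefficient of cyl conditional on wt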

Please contact me with any questions, bug reports, and comments.

Best,

Hu, Yue
Ph.D. Student, Political Science,
University of Iowa,
313 Schaeffer Hall,
20E Washington St.,
Iowa City, IA, 52242.




[R] ffsave.image() error with large objects

2010-02-11 Thread Yue Sheng
Hi, I have been using ffsave.image() to save a mixture of ff and normal
objects in my workspace, e.g.

ffsave.image(file = "C:\output\saveobjects", rootpath =
"D:\fftempdir", safe = TRUE)

It works fine but once my workspace has large (~4GB) objects, I get the error:

Error in ffsave.image(file = "C:\output\savedobjects", rootpath =
"D:\fftempdir", safe = TRUE) : ff image could not be renamed and is
left in C:\output\savedobjects

Could this be due to zip.exe having problems zipping a large ff object?

OS: Windows XP
DiskFormat: NTFS
zip.exe from Rtools

Thanks for your help.

YS



Re: [R] ffsave.image() error with large objects

2010-02-11 Thread Yue Sheng
So it seems that I've hit the 4GB zipfile limit. However, even if I
set compress = FALSE, it still tries to compress, and hence fails.

YS

On 2/11/10, Yue Sheng  wrote:
> Hi, I have been using ffsave.image() to save mixture of ff and normal
> objects in my workspace. e.g.
>
> ffsave.image(file = "C:\output\saveobjects", rootpath =
> "D:\fftempdir", safe = TRUE)
>
> It works fine but once my workspace has large (~4GB) objects, I get the
> error:
>
> Error in ffsave.image(file = "C:\output\savedobjects", rootpath =
> "D:\fftempdir", safe = TRUE) : ff image could not be renamed and is
> left in C:\output\savedobjects
>
> Could this due to the zip.exe having problem zipping a large ff object?
>
> OS: Windows XP
> DiskFormat: NTFS
> zip.exe from RTool.exe
>
> Thanks for help.
>
> YS
>



[R] Save workspace with ff objects

2009-11-26 Thread Yue Sheng
Hi All,

My script generates a mixture of normal and ff objects. I need to run the
script for different parameter settings, and I would often like to save the
workspace for each setting so that I can get back to it later. Is there an
easy way to do this, instead of having to save the individual ff objects
separately? I've tried the naive approach of just saving the workspace, only
to find that the ff objects are empty.
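
A minimal sketch based on the ffsave.image() calls shown in the other ff posts
in this archive (paths are placeholders; ffload() is the corresponding restore
function in the ff package):

library(ff)
## save the current workspace: .RData plus the files backing the ff objects
ffsave.image(file = "D:/runs/setting01", rootpath = "D:/fftempdir")
## later, in a fresh session:
ffload(file = "D:/runs/setting01")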

Thanks for your help.

Yue Sheng



[R] bigglm "update" with ff

2009-04-02 Thread Yue Sheng
Hi, since bigglm doesn't have an update method, I was wondering how to achieve
something like the following (similar to the example in the ff package manual
that uses biglm):

first <- TRUE
ffrowapply({
  if (first) {
    first <- FALSE
    fit <- bigglm(eqn, as.data.frame(bigdata[i1:i2, , drop = FALSE]),
                  chunksize = 1, family = binomial())
  } else {
    fit <- update(fit, as.data.frame(bigdata[i1:i2, , drop = FALSE]),
                  chunksize = 1, family = binomial())
  }
}, X = bigdata, VERBOSE = TRUE, BATCHSIZE = nmax)

Many thanks.
Yuesheng



[R] Failed path analysis using sem package

2013-09-15 Thread Yumeng Yue
Dear R users:

I am trying to run a path analysis using the sem package in R, but I have
encountered a problem. Below is my code:

library(sem)
## keep the data, the model, and the fitted object in separate objects so the
## data frame is not overwritten by the model specification
semex.data <- na.omit(read.csv("D:/Documents and Settings/z3409964/Desktop/Hospital 1.csv"))
cov.matrixSEMEX <- cov(semex.data)

semex.model <- specifyModel()
CWB->CWB13,NA,1
CWB->CWB23,deviance1
CWB->CWB33,deviance2
CWB->CWB43,deviance3
STA->STA13,NA,1
STA->STA23,citizenship1
STA->STA33,citizenship2
STA->STA43,citizenship3
STA->STA53,citizenship4
SA->SA13,NA,1
SA->SA23,surface1
SA->SA33,surface2
DA->DA13,NA,1
DA->DA23,deep1
DA->DA33,deep2
NA->NA13,NA,1
NA->NA23,negative1
NA->NA33,negative2
PA->PA13,NA,1
PA->PA23,positive1
PA->PA33,positive2
CWB13<->CWB13,error1
CWB23<->CWB23,error2
CWB33<->CWB33,error3
CWB43<->CWB43,error4
STA13<->STA13,error5 
STA23<->STA23,error6
STA33<->STA33,error7
STA43<->STA43,error8
STA53<->STA53,error9
SA13<->SA13,error10
SA23<->SA23,error11
SA33<->SA33,error12
DA13<->DA13,error13
DA23<->DA23,error14
DA33<->DA33,error15
NA13<->NA13,error16  
NA23<->NA23,error17  
NA33<->NA33,error18
PA13<->PA13,error19
PA23<->PA23,error20   
PA33<->PA33,error21 
CWB<->CWB,var1
STA<->STA,var2
SA<->SA,var3
DA<->DA,var4
PA<->PA,var5
NA<->NA,var6
SA->NA,beta1
NA->CWB,beta2
DA->PA,beta3
PA->OCB,beta4
SA->PA,beta5
DA->NA,beta6
OCB<->CWB,cov2

SEMEX <- sem(semex.model, cov.matrixSEMEX, nrow(semex.data))
summary(SEMEX, fit.indices = c("CFI", "GFI", "AGFI", "RMSEA"))

Yet the output shows the following:

Error in summary.objectiveML(SEMEX, fit.indices = c("CFI", "GFI", "AGFI",  : 
  coefficient covariances cannot be computed
In addition: Warning message:
In vcov.sem(object, robust = robust, analytic = analytic.se) :
   singular Hessian: model is probably underidentified.

Can you help me figure out what is going on? Thank you very much.

Brad
  


Re: [R] using metafor for meta-analysis of before-after studies

2013-04-26 Thread Qiang Yue
Hello, Dr. Viechtbauer.
 
I am trying to perform a meta-analysis on a group of before-after studies using 
metafor. I read your webpage, including your correspondence with Dr. Dewey 
(https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html), who also 
conducted a similar study. This information is very helpful, but I have one 
additional question on which I wonder if you can give me some guidance. The 
question is as follows:
 
The studies we are trying to analyze are performed on the same subjects before 
and after the administration of an intervention. Most studies reported the 
mean ± SD of the percentage change, i.e., the mean ± SD of (value of 'after' - 
value of 'before') / value of 'before' × 100%, without reporting the mean ± SD 
of the 'after' or 'before' values. So I want to know whether it is possible to 
perform a meta-analysis using the percentage change, and whether it is possible 
to calculate 'sdi' (the standard deviation of the change scores) from the SD of 
the percentage change.
 
Thank you very much, I am looking forward to your reply.
 
With best wishes.
 



Qiang Yue M.D.
Visiting Scholar of IMHR, University of Ottawa
1145 Carling Avenue, K1Z 7K4, Ottawa, ON, Canada
Tel: 613-722-6521 ext. 6554
Associate Professor of Radiology
Department of Radiology, West China Hospital, Sichuan University
Chengdu, 610041, China


Re: [R] using metafor for meta-analysis of before-after studies

2013-04-27 Thread Qiang Yue
Hello, Michael.

Thanks for your kind and prompt reply, and sorry for the garbled characters.

Yes, the primary studies reported the n, the mean percentage change and its 
standard deviation, but some did not report the original before- or 
after-intervention values, and there is only one group because these are 
uncontrolled before-and-after studies. So do you mean we cannot perform a 
meta-analysis based on the mean percentage change and its standard deviation? 
For studies which did report the original before- and after-intervention 
values, can I simply treat the before-intervention values as the values of a 
control group and the after-intervention values as the values of an 
intervention group? I would be grateful if you could send me some of your 
publications of meta-analyses of before-after studies, so that I can learn how 
to perform the analysis.

With best regards.



Qiang Yue

From: Michael Dewey
Date: 2013-04-27 07:28
To: qiangmoon; wvb
CC: r-help
Subject: Re: [R] using metafor for meta-analysis of before-after studies
At 03:27 27/04/2013, Qiang Yue wrote:
>Hello, Dr. Viechtbauer.
>
>I am trying to perform a meta-analyis on a group 
>of before-after studies using Metafor. I read 
>your webpage including your correspondence with 
>Dr. Dewey 
>(https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html), 
>who also conducted a similar study. These 
>information is very hepful, but I have one 
>additonal question which I wonder if you can 
>give me some instruction. The question is as follow:
>
>These studies which we are trying to analyze are performed on the same 
>subject before and after the adminstration of intervention. Most studies 
>reported the the Mean±SD of percentage change, i.e., the Mean±SD of (value of 
>'after'-value of 'before')/value of 'before'×100%, without reporting the 
>Mean±SD of value of 'after' or value of 'before'. So I want to know if it is 
>possible to perform meta-analyis using the value of percentage change, and if 
>it is possible to calculate the 'sdi' (the standard deviation of the change 
>scores) using the SD of percentage change.

Unfortunately not all the characters in your 
email appeared correctly here but if I understand 
you correctly the primary studies have reported 
(for each group?) mean percentage change and its 
standard deviation (and presumably the n). So you 
just treat them like any other mean and standard deviation.

If I understand the very last part correctly you 
would need more information than we have to back 
calculate change on the original scale from change on the percentage scale.

>
>Thank you very much, I am looking forward to your reply.
>
>With best wishes.
>
>
>
>
>Qiang Yue M.D.
>Visiting Scholar of IMHR, University of Ottawa
>1145 Carling Avenue, K1Z 7K4, Ottawa, ON, Canada
>Tel: 613-722-6521 ext. 6554
>Associate Professor of Radiology
>Department of Radiology, West China Hospital, Sichuan University
>Chengdu, 610041, China

Michael Dewey
i...@aghmed.fsnet.co.uk
http://www.aghmed.fsnet.co.uk/home.html


Re: [R] using metafor for meta-analysis of before-after studies

2013-04-28 Thread Qiang Yue
Hello, Michael.

Thanks very much, your suggestion is really helpful, and hopefully will address 
our issue. I will also read the suggested references to learn more about 
meta-analysis. 

With best wishes.




Qiang Yue

From: Michael Dewey
Date: 2013-04-28 06:30
To: qiangmoon; Michael Dewey; wvb
CC: r-help
Subject: Re: Re: [R] using metafor for meta-analysis of before-after studies
At 16:50 27/04/2013, Qiang Yue wrote:
>
>Hello, Michael.
>
>Thanks for your kind and rapid reply, and sorry 
>for the inconvenience of characters.
>
>Yes,  the primary studies reported the n, the 
>mean percentage change and its standard 
>deviation, but some did not report the original 
>value of before- or after-intervention, and 
>there is only one group because these are 
>uncontrolled  before-and-after studies.

If you have a series of non-comparative studies 
then you can proceed. Of course all you are 
estimating is subject to the usual caveats about 
uncontrolled before and after studies. Look in 
the escalc documentation under the heading 
Outcome measures for individual groups.

>  So you mean we can not perform meta-analysis 
> based on the mean percentage change and its standard deviation?

That is not what I said, or what I intended to 
say although my message may not have been 
directly relevant as I did not grasp the sort of study you were dealing with.

>For studies which have reported the original 
>value  of before- and after-intervention, can I 
>just simply treat the before-intervention values 
>as the values of control group and treat the 
>after-intervention values as the values of intervention group?

That would seem a strange thing to want to do.

>  I will be grateful if you can send me some of 
> your publications of meta-analysis on 
> before-after study, thus I can learn how to perform the analysis.

I think you need to get some local help on the 
ideas behind meta-analysis or read a good book

@BOOK{sutton00,
   author = {Sutton, A J and Abrams, K R and Jones, D R and Sheldon, T A and
Song, F},
   year = 2000,
   title = {Methods for meta-analysis in medical research},
   publisher = {Wiley},
   address = {Chichester},
   keywords = {meta-analysis, general}
}

might be a good choice given your affiliation.

>
>With best regards.
>
>Qiang Yue
>
>From: Michael Dewey
>Date: 2013-04-27 07:28
>To: qiangmoon; wvb
>CC: r-help
>Subject: Re: [R] using metafor for meta-analysis of before-after studies
>At 03:27 27/04/2013, Qiang Yue wrote:
> >Hello, Dr. Viechtbauer.
> >
> >I am trying to perform a meta-analyis on a group
> >of before-after studies using Metafor. I read
> >your webpage including your correspondence with
> >Dr. Dewey
> >(https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html),
> >who also conducted a similar study. These
> >information is very hepful, but I have one
> >additonal question which I wonder if you can
> >give me some instruction. The question is as follow:
> >
> >These studies which we are trying to analyze are performed on the same
> >subject before and after the adminstration of intervention. Most studies
> >reported the the Mean±SD of percentage change, i.e., the Mean±SD of (value
> >of 'after'-value of 'before')/value of 'before'×100%, without reporting the
> >Mean±SD of value of 'after' or value of 'before'. So I want to know if it is
> >possible to perform meta-analyis using the value of percentage change, and
> >if it is possible to calculate the 'sdi' (the standard deviation of the
> >change scores) using the SD of percentage change.
>
>Unfortunately not all the characters in your
>email appeared correctly here but if I understand
>you correctly the primary studies have reported
>(for each group?) mean percentage change and its
>standard deviation (and presumably the n). So you
>just treat them like any other mean and standard deviation.
>
>If I understand the very last part correctly you
>would need more information than we have to back
>calculate change on the original scale from change on the percentage scale.
>
> >
> >Thank you very much, I am looking forward to your reply.
> >
> >With best wishes.
> >
> >
> >
> >
> >Qiang Yue M.D.
> >Visiting Scholar of IMHR, University of Ottawa
> >1145 Carling Avenue, K1Z 7K4, Ottawa, ON, Canada
> >Tel: 613-722-6521 ext. 6554
> >Associate Professor of Radiology
>

Re: [R] using metafor for meta-analysis of before-after studies (escalc, SMCC)

2013-05-22 Thread Qiang Yue
Dear Dr. Viechtbauer:

Thank you very much for sparing your precious time to answer my question. I 
still want to make sure about the third question below: for studies which only 
reported percentage changes (something like: the metabolite concentration 
increased by 20% +/- 5% after the intervention), is it correct that we cannot 
use the percentage change to calculate the SMCC, but have to obtain the raw 
change first?

With best wishes.



Qiang Yue

From: Viechtbauer Wolfgang (STAT)
Date: 2013-05-21 10:09
To: Moon Qiang; r-help
Subject: RE: [R] using metafor for meta-analysis of before-after studies 
(escalc, SMCC)
Please see my answers below.

Best,
Wolfgang

> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Moon Qiang
> Sent: Thursday, May 16, 2013 19:12
> To: r-help
> Subject: [R] using metafor for meta-analysis of before-after studies
> (escalc, SMCC)
> 
> Hello.
> 
> I am trying to perform meta-analysis on some before-after studies. These
> studies are designed to clarify if there is any significant metabolic
> change before and after an intervention. There is only one group in these
> studies, i.e., no control group. I followed the e-mail communication of
> R-help (https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html ) and
> the Metafor Manual (version 1.8-0, released 2013-04-11, relevant contents
> can be found on pages of 59-61 under 'Outcome Measures for Individual
> Groups '). I made a trial analysis and attached the output here, I wonder
> if anyone can look through it and give me some comments.
>  I have three questions about the analysis:
> 
> 1) Most studies reported the before-and-after raw change as Mean+/-SD, but
> few of them have reported the values of before-intervention (mean_r and
> sd_r) and the values of after-intervention (mean_s and sd_s), and none of
> them reported the r value (correlation for the before- and after-
> intervention measurements). Based on the guideline of the Metafor manual, I
> set the raw mean change as m1i (i.e., raw mean change=mean_s=m1i), and set
> the standard deviation of raw change as sd1i (i.e., the standard deviation
> of raw change=sd_s=sd1i), and set all other arguments including m2i, sd2i,
> ri as 0, and then calculated the standardized mean change using change
> score (SMCC). I am not sure if all these settings are correct.

This is correct. The escalc() function still will compute (m1i-m2i)/sqrt(sd1i^2 
+ sd2i^2 - 2*ri*sd1i*sd2i), but since m2i=sd2i=ri=0, this is equivalent to 
mean_change / SD_change, which is what you want.

Make sure that mean_s is NOT the standard error (SE) of the change scores, but 
really the SD.
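
A small illustration of the trick described above, with made-up numbers: supply 
the reported mean change and its SD as m1i/sd1i, set m2i = sd2i = ri = 0, and 
escalc() returns mean_change / SD_change as the SMCC.

library(metafor)
dat <- data.frame(mchange  = c(1.2, 0.8, 1.5),   # reported mean change scores
                  sdchange = c(2.0, 1.6, 2.4),   # reported SDs of the change scores
                  ni       = c(25, 40, 18))      # sample sizes
escalc(measure = "SMCC", m1i = mchange, m2i = 0, sd1i = sdchange, sd2i = 0,
       ri = 0, ni = ni, data = dat)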

> 2) A few studies have specified individual values of m1i, m2i, sd1i, sd2i
> ,
> but did not report the change score or its sd. So can I set r=0 and use
> these values to calculate SMCC? Since SMCC is not calculated in the same
> way like 1), will this be a problem?

Yes, this will be a problem, since you now really assume that r=0, which is not 
correct. Maybe you can back-calculate r from other information (e.g., the p or 
t value from a t-test -- see 
https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html). Or you could try 
to get r from the authors (then you could also just directly ask for the change 
score mean and SD). If that is not successful, you will have to impute some 
kind of reasonable value for r and do a sensitivity analysis in the end.

> 3) some studies reported the percentage mean changes instead of raw mean
> change (percentage change=(value of after-intervention - value of before
> intervention) / value of before intervention), I think it may not be the
> right way to simply substitute the raw mean change with the percentage
> mean
> changes. Is there any method to deal with this problem?

Don't know anything off the top of my head.

> Any comments are welcome.
> 
> With best regards.
>   --
>  Qiang Yue


Re: [R] using metafor for meta-analysis of before-after studies (escalc, SMCC)

2013-05-23 Thread Qiang Yue
 Dear Dr. Viechtbauer:

Thanks so much! Now all these issues are clear. 

With best regards.




Qiang Yue

From: Viechtbauer Wolfgang (STAT)
Date: 2013-05-23 05:06
To: qiangmoon; r-help
Subject: RE: RE: [R] using metafor for meta-analysis of before-after studies 
(escalc, SMCC)
The mean percentage change and the raw mean change are not directly comparable, 
even after standardization based on the SD of the percentage change or raw 
change values. So, I would not mix those in the same analysis.

Best,
Wolfgang

> -Original Message-
> From: Qiang Yue [mailto:qiangm...@gmail.com]
> Sent: Wednesday, May 22, 2013 20:38
> To: Viechtbauer Wolfgang (STAT); r-help
> Subject: Re: RE: [R] using metafor for meta-analysis of before-after
> studies (escalc, SMCC)
> 
> Dear Dr. Viechtbauer:
> 
> Thank you very much for sparing your precious time to answer my question.
> I still want to make sure for the third question below:  for studies which
> only reported percentage changes (something like: the metabolite
> concentration increased by 20%+/-5% after intervention), we can not use
> the percentage change to calculate SMCC, but have to get the raw change
> first?
> 
> With best wishes.
> 
> Qiang Yue
> 
> From: Viechtbauer Wolfgang (STAT)
> Date: 2013-05-21 10:09
> To: Moon Qiang; r-help
> Subject: RE: [R] using metafor for meta-analysis of before-after studies
> (escalc, SMCC)
> Please see my answers below.
> 
> Best,
> Wolfgang
> 
> > -----Original Message-----
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> > On Behalf Of Moon Qiang
> > Sent: Thursday, May 16, 2013 19:12
> > To: r-help
> > Subject: [R] using metafor for meta-analysis of before-after studies
> > (escalc, SMCC)
> >
> > Hello.
> >
> > I am trying to perform meta-analysis on some before-after studies. These
> > studies are designed to clarify if there is any significant metabolic
> > change before and after an intervention. There is only one group in these
> > studies, i.e., no control group. I followed the e-mail communication of
> > R-help (https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html ) and
> > the Metafor Manual (version 1.8-0, released 2013-04-11, relevant contents
> > can be found on pages of 59-61 under 'Outcome Measures for Individual
> > Groups '). I made a trial analysis and attached the output here, I wonder
> > if anyone can look through it and give me some comments.
> >  I have three questions about the analysis:
> >
> > 1) Most studies reported the before-and-after raw change as Mean+/-SD, but
> > few of them have reported the values of before-intervention (mean_r and
> > sd_r) and the values of after-intervention (mean_s and sd_s), and none of
> > them reported the r value (correlation for the before- and after-
> > intervention measurements). Based on the guideline of the Metafor manual, I
> > set the raw mean change as m1i (i.e., raw mean change=mean_s=m1i), and set
> > the standard deviation of raw change as sd1i (i.e., the standard deviation
> > of raw change =sd_s=sd1i), and set all other arguments including m2i, sd2i,
> > ri as 0, and then calculated the standardized mean change using change
> > score (SMCC). I am not sure if all these settings are correct.
> 
> This is correct. The escalc() function still will compute
> (m1i-m2i)/sqrt(sd1i^2 + sd2i^2 - 2*ri*sd1i*sd2i), but since m2i=sd2i=ri=0,
> this is equivalent to mean_change / SD_change, which is what you want.
> 
> Make sure that mean_s is NOT the standard error (SE) of the change scores,
> but really the SD.
> 
> > 2) A few studies have specified individual values of m1i, m2i, sd1i, sd2i,
> > but did not report the change score or its sd. So can I set r=0 and use
> > these values to calculate SMCC? Since SMCC is not calculated in the same
> > way like 1), will this be a problem?
> 
> Yes, this will be a problem, since you now really assume that r=0, which is
> not correct. Maybe you can back-calculate r from other information (e.g.,
> the p or t value from a t-test -- see
> https://stat.ethz.ch/pipermail/r-help/2012-April/308946.html). Or you could
> try to get r from the authors (then you could also just directly ask for the
> change score mean and SD). If that is not successful, you will have to
> impute some kind of reasonable value for r and do a sensitivity analysis in
> the end.
> 
> > 3) some studies reported the percentage mean changes instead of raw mean
> > change (percentage change=(

[R] Fontconfig error in R 3.10 when running

2014-04-17 Thread Yue Li
Dear List,

After updating to R 3.1.0 and Bioconductor 2.14, I'm getting a rather 
frustrating error whenever I use a function like cairo_ps to save EPS figures, 
as demonstrated in the following example:

cairo_ps('test.eps')
plot(c(1:10),c(1:10))
Fontconfig error: Cannot load default config file
dev.off()

The plot displays distorted text (attached), and I couldn't figure out where 
the fix is after googling for hours. Any help will be very much appreciated.

Many thanks,
Yue





[R] source file for rnorm

2010-12-20 Thread Yue Zhang
Dear Sir/Madam:

I am sorry if this is a simple question. I have downloaded the source code for 
R but could not find the source file that has the rnorm function definition. 
Could you please let me know which file has the function? Also, in general, how 
do I find the source file of a given function?
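
A minimal sketch of how to start tracking a function down (general pointers,
not a complete answer):

rnorm                  # printing the function shows its R-level body, which for
                       # rnorm just dispatches to compiled C code
getAnywhere("rnorm")   # reports the namespace in which the R-level definition lives
## For functions that call compiled code, the C source is in the R source tree;
## rnorm's C implementation is in src/nmath/rnorm.c.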

Thanks.

Yue Zhang, CFA
Cohen & Steers Capital Management, Inc.
280 Park Ave., 10th Floor
New York, NY 10017
Tel: (212)796-9370
Fax: (212)319-8238
Email: yzh...@cohenandsteers.com






[R] How to let Rterm be vi style under windows?

2010-08-01 Thread Yue Wu

Hi list,

As the title says: under Windows, the ~/.inputrc method won't work. Does  
anyone know how to give Rterm vi-style line editing?


--
Regards,
Yue Wu

Key Laboratory of Modern Chinese Medicines
Department of Traditional Chinese Medicine
China Pharmaceutical University
No.24, Tongjia Xiang Street, Nanjing 210009, China



[R] Dealing with a lot of parameters in a function

2010-08-02 Thread Shentu, Yue
Hi all,
 I'm trying to define a log-likelihood function to work with MLE.
There will be parameters like mu_i, sigma_i, tau_i, ro_i, for i from 1 to 24.
Instead of listing all the parameters one by one in the function definition,
is there a neat way to do it in R? The example is as follows:

## dmnorm() is from the 'mnormt' package; x and y are the observed data vectors
ll <- function(mu1 = -0.5, b = 1.2, tau_1 = 0.5, sigma_1 = 0.5, ro_1 = 0.7) {
  if (tau_1 > 0 && ro_1 < 1 && ro_1 > -1)
    -sum(dmnorm(cbind(x, y), c(mu1, b * mu1),
                matrix(c(tau_1^2, ro_1 * tau_1 * sigma_1,
                         ro_1 * tau_1 * sigma_1, sigma_1^2), nrow = 2),
                log = TRUE))
  else NA
}

but now I need the sum of 24 of these negative log-likelihoods.

Thanks.

Yue



[R] Error msg in optim : (list) object cannot be coerced to type 'double'

2010-08-02 Thread Shentu, Yue
Hi all, 
   Thanks to the suggestion from Nikhil to use vector parameters, I
have made some progress with my MLE code. I have however encountered
another problem with the optimization step. The code is as follows: 
 
est.x <- as.vector(tapply(Diff_A1c_6_0, factor(Study_Compound_ID), mean))
ll <- function(mu = est.x, a = 0, b = 1.2, tau = rep(0.5, 24),
               sigma = rep(0.5, 24), ro = rep(0.7, 24))
{
  if (min(tau) > 0 && min(sigma) > 0 && min(ro) > -1 && max(ro) < 1) {
    llsm <- 0
    for (i in 1:24)
      llsm <- llsm -
        sum(dmnorm(as.matrix(subset(A1c_data_all,
                                    Study_Compound_ID == levels(factor(Study_Compound_ID))[i],
                                    select = c(Diff_A1c_6_0, Diff_A1c_12_0))),
                   c(mu[i], a + b * mu[i]),
                   matrix(c(tau[i]^2, ro[i] * tau[i] * sigma[i],
                            ro[i] * tau[i] * sigma[i], sigma[i]^2), nrow = 2),
                   log = TRUE))
    return(llsm)
  }
  else NA
}
fit <- mle(ll)
 
Then I get the following error message: 
 
Error in optim(start, f, method = method, hessian = TRUE, ...) : 
  (list) object cannot be coerced to type 'double' 
 
The function itself works fine when I apply various parameter values to it,
i.e., it does return the negative log-likelihood given the parameters. There
was also a short discussion about this a few years ago on this mailing list,
but no final resolution was reached.
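
One common pattern that sidesteps this (a generic sketch, not tied to the data
above): write the negative log-likelihood as a function of a single numeric
parameter vector and hand it to optim() directly, rather than relying on
mle()'s named arguments.

set.seed(1)
x <- rnorm(100, mean = 2, sd = 3)
negll <- function(par) {                 # par = c(mu, log_sigma)
  -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))
}
fit <- optim(c(0, 0), negll, method = "BFGS", hessian = TRUE)
fit$par                                  # estimates of mu and log(sigma)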
 
Yue
 



[R] How to let Rterm be vi style under windows?

2010-08-04 Thread Yue Wu

Hi list,

As the title says: under Windows, the ~/.inputrc method won't work. Is it  
possible to get vi-style line editing in Rterm on Windows?


P.S. I sent this several days ago and got no response, so I am sending it  
again to check whether the first message failed to go through.


--
Regards,
Yue Wu

Key Laboratory of Modern Chinese Medicines
Department of Traditional Chinese Medicine
China Pharmaceutical University
No.24, Tongjia Xiang Street, Nanjing 210009, China



[R] [w32] Bug? When trying to auto-complete, rest of the input from cursor is wiped out.

2011-04-02 Thread Yue Wu
Hi, list,

I'm using Rterm on Windows, and I find that when I try to complete code with
<Tab>, the rest of the input after the cursor is wiped out.

How to reproduce (`|' means the cursor position):

> write.table(file='foo',fileEn|="")

Then hit <Tab>.

R will try to complete the code, but the rest is gone:

> write.table(file='foo',fileEncoding|

There is no such issue on *nix.

-- 
Regards,
Yue Wu

Key Laboratory of Modern Chinese Medicines
Department of Traditional Chinese Medicine
China Pharmaceutical University
No.24, Tongjia Xiang Street, Nanjing 210009, China



[R] bigglm binomial negative fitted value

2012-05-31 Thread Yue Guan
Hi, there

Since glm cannot handle factors very well, I am trying to use bigglm like this:

logit_model <- bigglm(responser~var1+var2+var3, data, chunksize=1000,
family=binomial(), weights=~trial, sandwich=FALSE)

fitted <- predict(logit_model, data)

only var2 is factor, var1 and var3 are numeric.

I expect fitted to be a vector of values falling in (0, 1).

However, I get something like this:
str(fitted)
 num [1:260617, 1] -0.0564 -0.0564 -0.1817 -0.1842 -0.1852 ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:260617] "1" "2" "3" "4" ...
  ..$ : NULL

Can anyone help with this case? Thank you in advance.

Best,

--Yue



Re: [R] bigglm binomial negative fitted value

2012-05-31 Thread Yue Guan
Thank you very much. I did overlook something.

On Thu, May 31, 2012 at 5:20 PM, Thomas Lumley  wrote:
> On Fri, Jun 1, 2012 at 1:17 AM, Yue Guan  wrote:
>> Hi, there
>>
>> Since glm cannot handle factors very well. I try to use bigglm like this:
>>
>> logit_model <- bigglm(responser~var1+var2+var3, data, chunksize=1000,
>> family=binomial(), weights=~trial, sandwich=FALSE)
>>
>> fitted <- predict(logit_model, data)
>>
>> only var2 is factor, var1 and var3 are numeric.
>>
>> I expect fitted should be a vector of value falls in (0,1)
>>
>> However, I get something like this:
>> str(fitted)
>>  num [1:260617, 1] -0.0564 -0.0564 -0.1817 -0.1842 -0.1852 ...
>>  - attr(*, "dimnames")=List of 2
>>  ..$ : chr [1:260617] "1" "2" "3" "4" ...
>>  ..$ : NULL
>>
>
> As the help says, the default is predictions of the linear predictor.
> To get predictions of the probability, use type="response"
>
>   -thomas
>
> --
> Thomas Lumley
> Professor of Biostatistics
> University of Auckland
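
In other words (using the object names from the original post, as suggested in
the reply above):

fitted_probs <- predict(logit_model, data, type = "response")  # values in (0, 1)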



[R] Correlated Multivariate Distribution Generator

2011-07-26 Thread Yue Yu
Dear R User,

I am wondering if there is a way to generate draws from a correlated
multivariate non-normal distribution.

For example, I want to generate four correlated negative binomial
series with parameters r = 10, p = 0.2, based on the correlation
coefficient matrix
| 1.0  0.9  0.8  0.8 |
| 0.9  1.0  0.8  0.8 |
| 0.8  0.8  1.0  0.9 |
| 0.8  0.8  0.9  1.0 |

Thank a lot.

Best,

Yue Yu



Re: [R] Correlated Multivariate Distribution Generator

2011-07-27 Thread Yue Yu
Thanks, Enrico and Harold.

I have searched the archives for this topic, but all I can find is
about the multivariate normal or uniform generator, which doesn't help
in my case.

Harold's transformation is good for getting the correlation, but the negative
binomial distribution will be changed by the transformation, so it won't keep
the desired parameters r and p.

Enrico's method is brilliant for rank correlation; is there any algorithm for
linear correlation? Even some references for a multivariate negative binomial
generator would help.
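
For reference, one generic way to induce dependence while keeping the negative
binomial marginals is a Gaussian copula (a sketch; this is not Harold's or
Enrico's code, and it targets a rank-type correlation rather than the exact
Pearson matrix above):

cors <- matrix(c(1, .9, .8, .8,
                 .9, 1, .8, .8,
                 .8, .8, 1, .9,
                 .8, .8, .9, 1), 4)
L <- chol(cors)
N <- 10000
z <- matrix(rnorm(N * 4), N, 4) %*% L     # correlated standard normals
u <- pnorm(z)                             # uniform margins
x <- qnbinom(u, size = 10, prob = 0.2)    # negative binomial margins (r = 10, p = 0.2)
cor(x)                                    # close to, but not exactly, the target matrix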

Your help is really appreciated.

Best,

Yue Yu

On Wed, Jul 27, 2011 at 11:31, Doran, Harold  wrote:
> Do you mean something like this?
>
>> cors <- matrix(c(1, .9, .8, .8, .9, 1, .8, .8, .8, .8, 1, .9, .8, .8, .9, 
>> 1), 4)
>> L <- chol(cors)
>> N <- 1000
>> dat <- cbind(rnbinom(N, mu = 4, size = 1), rnbinom(N, mu = 4, size = 1), 
>> rnbinom(N, mu = 4, size = 1), rnbinom(N, mu = 4, size = 1))
>> result <- dat %*% L
>> cor(dat)
>           [,1]        [,2]        [,3]       [,4]
> [1,] 1.  0.04227103  0.00358846 0.02860722
> [2,] 0.04227103  1. -0.03122457 0.03070221
> [3,] 0.00358846 -0.03122457  1. 0.04621639
> [4,] 0.02860722  0.03070221  0.04621639 1.
>
>> cor(result)
>          [,1]      [,2]      [,3]      [,4]
> [1,] 1.000 0.9021538 0.8183199 0.8158886
> [2,] 0.9021538 1.000 0.8114415 0.8152286
> [3,] 0.8183199 0.8114415 1.000 0.9145778
> [4,] 0.8158886 0.8152286 0.9145778 1.000
>
>
>
>> -Original Message-
>> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
>> Behalf Of Yue Yu
>> Sent: Tuesday, July 26, 2011 11:01 PM
>> To: r-help@r-project.org
>> Subject: [R] Correlated Multivariate Distribution Generator
>>
>> Dear R User,
>>
>> I am wondering if there is a way to generate correlated multivariate
>> non-normal distribution?
>>
>> For example, I want to generate four correlated negative binomial
>> series with parameters r=10, p=0.2, based on the correlation
>> coefficient matrix
>> |   1   0.9  0.8  0.8 |
>> | 0.9     1  0.8  0.8 |
>> | 0.8  0.8     1  0.9 |
>> | 0.8  0.8  0.9     1 |
>>
>> Thank a lot.
>>
>> Best,
>>
>> Yue Yu
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>



[R] how to find the minimum

2008-04-28 Thread Bin Yue

Dear friends:
 I am a new R user, and not very good at statistics and maths.
 I find it difficult to find the minimum of this expression:
 ((p*(b-1)-1)*(p+1)^(b-1)+1)/p^2

It seems that optim should be able to find the minimum. I tried several
times, but R constantly reports a failure to find b or p.
Would you please give some suggestions?
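
A minimal sketch of how optim() expects this to be set up (the box constraints
are purely illustrative; whether a finite minimum exists depends on the ranges
allowed for p and b):

f <- function(par) {
  p <- par[1]; b <- par[2]
  ((p * (b - 1) - 1) * (p + 1)^(b - 1) + 1) / p^2
}
optim(c(1, 2), f, method = "L-BFGS-B",   # starting values for p and b
      lower = c(0.01, -5), upper = c(10, 5))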
Thanks in advance.
Best wishes,
    Bin Yue 

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS



[R] What is the "deviance" and the "change in deviance" of a logistic regression?

2007-11-25 Thread Bin Yue

Dear all:
 I am confused by the "deviance" and the "change in deviance" of a logistic
regression. I have an object like this:
   summary(glm1)

Call:
glm(formula = status ~ no.filter + light.filter_2, family = binomial, 
data = re)

Deviance Residuals: 
Min   1Q   Median   3Q  Max  
-0.9774  -0.8063  -0.7586   1.4864   3.0635  

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)-1.155042   0.087014 -13.274  < 2e-16 ***
no.filter   0.057531   0.016475   3.492 0.000479 ***
light.filter_2 -0.037554   0.008619  -4.357 1.32e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 3655.4  on 3172  degrees of freedom
Residual deviance: 3612.5  on 3170  degrees of freedom
  (362 observations deleted due to missingness)
AIC: 3618.5

Number of Fisher Scoring iterations: 5

> deviance(glm1)
[1] 3612.516

is the "deviance(glm1)" the "deviance" ?
If yes, what is the "change in deviance"? Is it "residual
deviance-null.deviance"?
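
For reference, the "change in deviance" usually means the null deviance minus
the residual deviance (here 3655.4 - 3612.5), which anova() also reports term
by term:

glm1$null.deviance - deviance(glm1)
anova(glm1, test = "Chisq")   # deviance drop as each term is added sequentially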
Thanks all .

Best wishes,
Bin Yue 

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS



[R] logistic regression using "glm", which "y" is set to be "1"

2007-12-05 Thread Bin Yue

Dear friends:
Using the "glm" function with family=binomial, I got a list of coefficients.
The coefficients reflect the effects of the predictor variables on the
probability of the response being "1".
My response variable consists of "A" and "D", and I don't know which level of
the response was set to 1.
Is the first level of the response set to 1?
   Thanks to all in advance.
   Regards,

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS



Re: [R] logistic regression using "glm", which "y" is set to be "1"

2007-12-05 Thread Bin Yue

Dear Marc Schwartz:
 When I ask R 2.6.0 for Windows, the information it gives does not contain
much about family=binomial.
 You said that there is a Details section in "?glm". I want to read it
thoroughly. Could you tell me where and how I can find the Details section
of "?glm"?
   Thank you very much.
   Best regards,
 Bin Yue
  
 

Marc Schwartz wrote:
> 
> 
> On Wed, 2007-12-05 at 18:06 -0800, Bin Yue wrote:
>> Dear friends :
>> using the "glm" function and setting family=binomial, I got a list of
>> coefficients.
>> The coefficients reflect the effects  of predicted variables on the
>> probability of the response to be "1".
>> My response variable consists of  "A" and "D" . I don't know which level
>> of
>> the response was set to be 1.
>> is the first element of the response set to be 1?
>>Thank all in advance.
>>Regards,
>> 
>> -
>> Best regards,
>> Bin Yue
> 
> 
> As per the Details section of ?glm:
> 
> For binomial and quasibinomial families the response can also be
> specified as a factor (when the first level denotes failure and all
> others success) ...
> 
> 
> So use:
> 
>   levels(response.variable)
> 
> and that will give you the factor levels, where the first level is 0 and
> the second level is 1. 
> 
> If you work in a typical English based locale with default alpha based
> level ordering, it will likely be A (Alive?) is 0 and D (Dead?) is 1.
> 
> HTH,
> 
> Marc Schwartz
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS



Re: [R] logistic regression using "glm", which "y" is set to be "1"

2007-12-05 Thread Bin Yue

 Dear all:
 By comparing glmresult$y and model.response(model.frame(glmresult)), I have
found out which level is set to "TRUE" and which to "FALSE". But it seems
that, to fit a logistic regression, a logit (or logistic) transformation has
to be done before the regression.
 Does anybody know how to obtain the result of that transformation? It is
hard to settle down before knowing how R actually works. I have read some
books and the "?glm" help file, but what they told me was not sufficient.
   Best wishes,
 Bin Yue


Weiwei Shi wrote:
> 
> Dear Bin:
> you type
> ?glm
> in R console and you will find the Detail section of help file for glm
> 
> i pasted it for you too
> 
> Details
> 
> A typical predictor has the form response ~ terms where response is the
> (numeric) response vector and terms is a series of terms which specifies a
> linear predictor for response. For binomial and quasibinomial families the
> response can also be specified as a factor (when the first level denotes
> failure and all others success) or as a two-column matrix with the columns
> giving the numbers of successes and failures. A terms specification of the
> form first + second indicates all the terms in first together with all the
> terms in second with duplicates removed. The terms in the formula will be
> re-ordered so that main effects come first, followed by the interactions,
> all second-order, all third-order and so on: to avoid this pass a terms
> object as the formula.
> 
> A specification of the form first:second indicates the set of terms
> obtained by taking the interactions of all terms in first with all terms in
> second. The specification first*second indicates the *cross* of first and
> second. This is the same as first + second + first:second.
> 
> glm.fit is the workhorse function.
> 
> If more than one of etastart, start and mustart is specified, the first in
> the list will be used. It is often advisable to supply starting values for
> a quasi family, and also for families with unusual links such as
> gaussian("log").
> 
> All of weights, subset, offset, etastart and mustart are evaluated in the
> same way as variables in formula, that is first in data and then in the
> environment of formula.
> 
> 
> 
> On Dec 5, 2007 10:41 PM, Bin Yue <[EMAIL PROTECTED]> wrote:
> 
>>
>> Dear Marc Schwartz:
>>  When I ask R2.6.0 for windows, the information it gives does not contain
>> much about family=binomial .
>>  You said that there is a detail section of "?glm". I want to read it
>> thoroughly. Could  you tell me where and how I can find the detail
>> section
>> of "?glm".
>>   Thank you very much .
>>   Best regards,
>>  Bin Yue
>>
>>
>>
>> Marc Schwartz wrote:
>> >
>> >
>> > On Wed, 2007-12-05 at 18:06 -0800, Bin Yue wrote:
>> >> Dear friends :
>> >> using the "glm" function and setting family=binomial, I got a list
>> of
>> >> coefficients.
>> >> The coefficients reflect the effects  of predicted variables on the
>> >> probability of the response to be "1".
>> >> My response variable consists of  "A" and "D" . I don't know which
>> level
>> >> of
>> >> the response was set to be 1.
>> >> is the first element of the response set to be 1?
>> >>Thank all in advance.
>> >>Regards,
>> >>
>> >> -
>> >> Best regards,
>> >> Bin Yue
>> >
>> >
>> > As per the Details section of ?glm:
>> >
>> > For binomial and quasibinomial families the response can also be
>> > specified as a factor (when the first level denotes failure and all
>> > others success) ...
>> >
>> >
>> > So use:
>> >
>> >   levels(response.variable)
>> >
>> > and that will give you the factor levels, where the first level is 0
>> and
>> > the second level is 1.
>> >
>> > If you work in a typical English based locale with default alpha based
>> > level ordering, it will likely be A (Alive?) is 0 and D (Dead?) is 1.
>> >
>> > HTH,
>> >
>> > Marc Schwartz
>> >
>> > __
>> > R-help@r-project.org mailing list
>> > https://stat.ethz.ch/mailman/listinfo/r-help
>> > PLEASE do read the posting guide
>> > http://www.R-project.org/posting-guide.html
>> > and provide commented, mi

[R] what actually a logistic regression fits

2007-12-06 Thread Bin Yue

Dear all:
I did the following because I was not sure what a logistic regression
actually fits. "small.glm" is a glm fit with family set to binomial.

>fitted(small.glm)->p
>log(p/(1-p))->left   
> for(i in 1:length(coef(small.glm))){
+  coef(small.glm)[i]*model.matrix(small.glm)[,i]->res[,i]
+ }
> apply(res,1,sum)->right
> plot(left,right)  #I got a straight line whose slope was about 1
Therefore, I am almost sure that what a logistic regression fits is
log(p/(1-p)) ~ b0 + b1*x.
Is that right?
Regards,

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS



[R] paradox about the degree of freedom in a logistic regression model

2007-12-06 Thread Bin Yue

 Dear all:
   "predict.glm" provides an example of performing logistic regression when the
response variable is a two-column matrix. I find a paradox concerning the
degrees of freedom.
 > summary(budworm.lg)

Call:
glm(formula = SF ~ sex * ldose, family = binomial)

Deviance Residuals: 
 Min1QMedian3Q   Max  
-1.39849  -0.32094  -0.07592   0.38220   1.10375  

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  -2.9935 0.5527  -5.416 6.09e-08 ***
sexM  0.1750 0.7783   0.2250.822
ldose 0.9060 0.1671   5.422 5.89e-08 ***
sexM:ldose0.3529 0.2700   1.3070.191
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 124.8756  on 11  degrees of freedom
Residual deviance:   4.9937  on  8  degrees of freedom
AIC: 43.104

Number of Fisher Scoring iterations: 4

This is the data set used in regression:
   numdead numalive sex ldose
1        1       19   M     0
2        4       16   M     1
3        9       11   M     2
4       13        7   M     3
5       18        2   M     4
6       20        0   M     5
7        0       20   F     0
8        2       18   F     1
9        6       14   F     2
10      10       10   F     3
11      12        8   F     4
12      16        4   F     5

 The residual degrees of freedom are 8, so each row in the example is
treated as one observation. If I expand the data into a three-column
data.frame, with the first column denoting whether the individual is alive,
the second the sex, and the third "ldose", there will be 12*20 = 240
observations.
 Since my data set is of the second type, I wish to know whether the form
of the data set affects the result of the regression, such as the degrees of
freedom.
   Does anybody have any idea about this? Thank you to all who read this
message.
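
A sketch that refits the budworm example above in both forms (the 240-row
expansion is reconstructed from the counts shown, not an actual data file):

ldose   <- rep(0:5, 2)
sex     <- factor(rep(c("M", "F"), each = 6))
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
SF      <- cbind(numdead, numalive = 20 - numdead)

## grouped form: 12 binomial counts, residual df = 12 - 4 = 8
fit.grouped <- glm(SF ~ sex * ldose, family = binomial)

## long form: one 0/1 row per insect, 12 * 20 = 240 rows, residual df = 236
long <- data.frame(
  dead  = unlist(mapply(function(d, a) rep(c(1, 0), c(d, a)),
                        numdead, 20 - numdead)),
  sex   = rep(sex, each = 20),
  ldose = rep(ldose, each = 20)
)
fit.long <- glm(dead ~ sex * ldose, family = binomial, data = long)

coef(fit.grouped) - coef(fit.long)                    # essentially zero
c(df.residual(fit.grouped), df.residual(fit.long))    # 8 versus 236

So the form of the data set changes the residual degrees of freedom, the
deviance, and the AIC (the saturated model differs), but not the coefficient
estimates, their standard errors, or the Wald tests.
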
   Regards,
   Bin Yue

-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/paradox-about-the-degree-of-freedom-in-a-logistic-regression-model-tf4960753.html#a14208306
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] the observed "log odds" in logistic regression

2007-12-10 Thread Bin Yue

 
Dear list:
 After reading the following two links:
http://luna.cas.usf.edu/~mbrannic/files/regression/Logistic.html
http://www.tufts.edu/~gdallal/logistic.htm
 I now understand the mathematical basis of logistic regression. However, I
am still not sure about the "logit".
 For a categorical independent variable, it is easy to understand how the
"log odds" are calculated. As I understand it, the observations are first
grouped according to the IV and DV, generating a contingency table: the
columns are the levels of the IV and the rows are the levels of the DV (0 or
1). For each column we get the proportions for DV = 0 and DV = 1 at that
level of the IV, and from those proportions the log odds can be computed. Is
that right?
   My problem is this: in my data set the IVs are continuous variables. Do I
still have to generate such a table and compute the log odds at each level of
the IV?
   In R, fitted(fit) gives the fitted probability that the DV is 1. Does an
observed probability exist, and if so, how can I extract it? If the IV is
categorical, the DV can readily be changed into a two-column matrix, so that
log(observed probability / (1 - observed probability)) gives the observed
"log odds". I wonder what to do if the IV is continuous.
 And about the residuals: it seems that the residual is not simply the
actual DV minus the fitted probability, because in my model the extreme
residuals lie well beyond (0, 1). I wonder how the residual is computed.
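
On the residuals: residuals() on a glm object returns deviance residuals by
default, which is why they are not confined to a narrow range; the plain DV
minus fitted probability is the "response" residual. A small sketch with
made-up data:

set.seed(2)
x   <- rnorm(200)
y   <- rbinom(200, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)

range(residuals(fit))                      # deviance residuals (the default); not restricted to (-1, 1)
range(residuals(fit, type = "response"))   # y - fitted(fit); always inside (-1, 1)
all.equal(as.numeric(residuals(fit, type = "response")),
          as.numeric(y - fitted(fit)))     # TRUE
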
  Would you please help me? Thank you all very much again.
Regards,
    Bin Yue


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/the-observed-%22log-odds%22-in-logistic-regression-tp14267125p14267125.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] the observed "log odds" in logistic regression

2007-12-11 Thread Bin Yue

Dieter Menne:
Thank you for your reply!

I know that I don't have to compute any logit myself, but I want to
understand how R fits glm models.
I will read the examples you suggested.
Best regards,
Bin Yue


Dieter Menne wrote:
> 
> Bin Yue  163.com> writes:
> 
>>  After reading the following two links:
>> http://luna.cas.usf.edu/~mbrannic/files/regression/Logistic.html
>> http://www.tufts.edu/~gdallal/logistic.htm
>>  I've known the mathematical basis for logistic regression.However I
>> am
>> still not so sure about the "logit "
>>  For a categorical independent variable, It is  easy to understand
>> the
>> procedures  how "log odds" are calculated. As I know, First the
>> observations
>> are grouped according to the IV and DV, generating a contingency table.
> ..
>>My problem  is this : in my data set , the IVs are continuous
>> variables,
>> do I still have to generate such a table and compute the log odds for
>> each
>> level of IV according to which the log odds are calculated?  
> 
> Let's assume you are going to use glm in package stats. glm can be fed
> with data in three ways; in your case, you should use the "one row per
> 0-1 event" format, that is, the "long" style. You do not have to compute
> any logit; glm will do that for you.
> 
> The example coming closest to yours is the birthwt example in
> MASS/scripts/ch07.R and chapter 7 of Venables/Ripley, MASS. Try to
> generate a small, self-running example with a data set similar to yours,
> and you have a good chance of getting a more detailed answer.
> 
> Dieter
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/the-observed-%22log-odds%22-in-logistic-regression-tp14267125p14269459.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] store pictures in automatically named files using loops

2007-12-17 Thread Bin Yue

Dear all:
 I want the file in which each picture is stored to have the same name as
the "main" title of the picture. But it turns out that the file named
names(spl.sp)[i].bmp contains the picture whose main title is
names(spl.sp)[i+1].
 It seems that the problem is related to the graphics device, but I am
finding it very difficult to understand.
  Do you have any idea about that?

Here is my function:
 
sp.drawK<-function(spl.sp){
env<-list() 
for(i in 1:length(spl.sp)){
if(spl.sp[[i]]$n==1) next
else{
env[[i]]<-envelope(spl.sp[[i]],Kest)
  plot(env[[i]],main=names(spl.sp)[i])
  legend("topleft",lty=1:4,col=1:4,legend=c("obs","theo","hi","lo"))
  text(2,500,cex=0.8, paste("n=",spl.sp[[i]]$n,sep=""))
  bmp(paste(names(spl.sp)[i],".bmp",sep=""))

  }
}
}

   Best wishes,
  Bin Yue


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/restore--pictures-in-an-automatically-named-files-using-loops-tp14370153p14370153.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] store pictures in automatically named files using loops

2007-12-17 Thread Bin Yue

 I put bmp() before the plot() call, and then the code worked.
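
 For reference, a corrected sketch (assuming spl.sp is a list of spatstat
point patterns, as the use of envelope() and Kest suggests); each iteration
opens its bitmap device before plotting and closes it with dev.off() so the
file is actually written:

library(spatstat)   # provides envelope() and Kest

sp.drawK <- function(spl.sp) {
  env <- list()
  for (i in seq_along(spl.sp)) {
    if (spl.sp[[i]]$n == 1) next
    ## open this iteration's file *before* drawing into it
    bmp(paste(names(spl.sp)[i], ".bmp", sep = ""))
    env[[i]] <- envelope(spl.sp[[i]], Kest)
    plot(env[[i]], main = names(spl.sp)[i])
    legend("topleft", lty = 1:4, col = 1:4,
           legend = c("obs", "theo", "hi", "lo"))
    text(2, 500, cex = 0.8, paste("n=", spl.sp[[i]]$n, sep = ""))
    dev.off()   # close the device so the .bmp is written to disk
  }
  invisible(env)
}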



Bin Yue wrote:
> 
> Dear all:
>  I hope that the file in which the picture is stored has the same name
> as the " main" title of the picture . But it turns out that , the name of
> the file is :names(spl.sp)[i].bmp,while the main title of the picture is
> names(spl.sp)[i+1]
>  It seems that the problem is related to the "device". But I am finding it
> very difficult to understand .
>   Do you have any idea about that?
> 
> Here is my function:
>  
> sp.drawK<-function(spl.sp){
> env<-list() 
> for(i in 1:length(spl.sp)){
> if(spl.sp[[i]]$n==1) next
> else{
> env[[i]]<-envelope(spl.sp[[i]],Kest)
>   plot(env[[i]],main=names(spl.sp)[i])
>   legend("topleft",lty=1:4,col=1:4,legend=c("obs","theo","hi","lo"))
>   text(2,500,cex=0.8, paste("n=",spl.sp[[i]]$n,sep=""))
>   bmp(paste(names(spl.sp)[i],".bmp",sep=""))
> 
>   }
> }
> }
> 
>Best wishes,
>   Bin Yue
> 
> 


-
Best regards,
Bin Yue

*
student for a Master program in South Botanical Garden , CAS

-- 
View this message in context: 
http://www.nabble.com/restore--pictures-in-an-automatically-named-files-using-loops-tp14370153p14370262.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.