On Dec 10, 2011, at 10:01 PM, Xiaobo Gu wrote:
Hi,
I saved the following as a UTF-8 encoded file named amberutil.r
as.factor.loop <- function(df, cols){
if (!is.null(df) && !is.null(cols) && length(cols) > 0)
{
for(col in cols)
{
df[
Hi Thierry,
I see what you want now---a significance test for the HR specifically.
See inline below
On Sat, Dec 10, 2011 at 6:40 PM, Thierry Julian Panje
wrote:
> Hi Josh,
>
> Thank you for your quick response!
>
> In several papers the results of a Cox Regression were presented in a table
> s
Hi,
I saved the following as a UTF-8 encoded file named amberutil.r
as.factor.loop <- function(df, cols){
if (!is.null(df) && !is.null(cols) && length(cols) > 0)
{
for(col in cols)
{
df[[col]] <- as.factor(df[[col]])
}
Hi Josh,
Your suggestion works, and the following also works:
as.factor.loop <- function(df, cols){
if (!is.null(df) && !is.null(cols) && length(cols) > 0)
{
for(col in cols)
{
df[[col]] <- as.factor(df[[col]])
}
Hi Xiaobo,
The problem is that your function is not assigning the results to your
data frame---df is an internal copy made by the function. This is
done so that functions cannot have unexpected side effects, such as
overwriting objects in the global environment. Anyway, I think you
can accompli
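To make the truncated suggestion above concrete, here is a minimal sketch of the pattern Josh is describing: return the modified copy from the function and reassign it at the call site (the toy data frame and column names are made up for illustration):

```r
# Sketch: a function works on a local copy of df, so return the copy
# and reassign the result rather than expecting df to change in place.
as.factor.loop <- function(df, cols) {
  if (!is.null(df) && !is.null(cols) && length(cols) > 0) {
    for (col in cols) {
      df[[col]] <- as.factor(df[[col]])
    }
  }
  df  # return the modified copy
}

df <- data.frame(x = c(1, 2, 1), y = c("a", "b", "a"))
df <- as.factor.loop(df, c("x", "y"))  # reassign the returned value
is.factor(df$x)  # TRUE
```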
I am sorry, it is
for(col in c("x","y")){df[[col]] <- as.factor(df[[col]])}
is.factor(df[["x"]])
TRUE
On Sun, Dec 11, 2011 at 10:06 AM, Xiaobo Gu wrote:
> Hi,
>
> I am trying to write a function to cast columns of a data frame to
> factor in a loop; the source is:
>
>
> as.factor.loop <- functi
Hi,
I am trying to write a function to cast columns of a data frame to
factor in a loop; the source is:
as.factor.loop <- function(df, cols){
if (!is.null(df) && !is.null(cols) && length(cols) > 0)
{
for(col in cols)
{
df[[col]] <- as.f
On Dec 10, 2011, at 6:39 PM, capitantyler wrote:
Done it; again, I have the next problem.
My translation:
"The object (list) cannot be coerced to 'double'"
Original:
km1 <- survfit(Surv(as.numeric(T.201110))~1)
Error en Surv(as.numeric(T.201110)) :
el objeto (list) no puede ser coercionado a '
Done it; again, I have the next problem.
My translation:
"The object (list) cannot be coerced to 'double'"
Original:
km1 <- survfit(Surv(as.numeric(T.201110))~1)
Error en Surv(as.numeric(T.201110)) :
el objeto (list) no puede ser coercionado a 'double'
Note that it needs to be converted to the numeric class.
Hi Noah,
I am unclear if the 0s should be standardized or not---I am assuming
since you want them excluded from the calculation of the mean and SD,
you do not want (0 - M) / sigma. If that is the case, here is an
example:
## read in your data
## FYI: providing via dput() would be easier next t
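A sketch of what I mean, with made-up data: compute the mean and SD from the nonzero values only and leave the 0s untouched:

```r
x <- c(0, 1.2, 3.4, 0, 2.1, 5.6)   # made-up data for illustration
nz <- x != 0                        # mask of nonzero entries
m <- mean(x[nz])                    # mean of the nonzero values only
s <- sd(x[nz])                      # SD of the nonzero values only
z <- x
z[nz] <- (x[nz] - m) / s            # standardize nonzero values; zeros stay 0
z
```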
Hi Thierry,
Could you give us an example of what exactly you are doing
(preferably reproducible R code)? I may be misunderstanding you, but
if you are fitting Cox proportional hazards models using the coxph()
function from the survival package, summary(yourmodel) should give the
SE, p-value based
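For example, with the survival package's built-in lung data (just an illustration, not your model):

```r
library(survival)
# summary() reports exp(coef) (the HR), its CI, the SE, and a p-value per term
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
s <- summary(fit)
s$conf.int      # HR with lower/upper 95% confidence limits
s$coefficients  # includes se(coef) and Pr(>|z|), the p-value for each HR
```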
Hi,
I'm having difficulty coming up with a good way to subset some data to generate
statistics.
My data frame has multiple observations by group.
Here is an overly-simplified toy example of the data
==
code v1  v2
G1 1.2 2.3
G1 0
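In case it helps the discussion, one common pattern for per-group statistics on a toy frame like the one above (I am guessing at the intended summary; the values and the choice of mean() are assumptions, so adjust FUN as needed):

```r
df <- data.frame(code = c("G1", "G1", "G2", "G2"),
                 v1   = c(1.2, 0.0, 2.5, 1.1),
                 v2   = c(2.3, 1.1, 0.4, 3.2))
# one row per group, mean of each variable within the group
by_group <- aggregate(cbind(v1, v2) ~ code, data = df, FUN = mean)
by_group
```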
Hi,
I'm new to R and using it for Cox survival analysis. Thanks to this great forum
I learned how to compute the HR with its confidence interval.
My question would be: Is there any way to get the p-value for a hazard ratio in
addition to the confidence interval?
Thanks,
Thierry
... and adding to what has already been said, PCA can be distorted by
non-ellipsoidal distributions or small numbers of unusual values.
Careful (chiefly graphical) examination of results is therefore
essential, and usually fairly easy to do. There are robust/resistant
versions of PCA in R, but they
On Dec 10, 2011 at 5:56pm deb wrote:
> My question is, is there any way I can map the PC1, PC2, PC3 to the
> original conditions,
> so that i can still have a reference to original condition labels after
> PCA?
deb,
To add to what Stephen has said: it is best to read up on principal component
analysis
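On the specific mapping deb asks about: prcomp() keeps the original row labels on the scores, so each row of PC1..PC3 stays tied to its original condition. A small sketch using built-in USArrests as stand-in data:

```r
pc <- prcomp(USArrests, center = TRUE, scale. = TRUE)
head(pc$x[, 1:3])  # rownames are still the original observation labels
```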
On Dec 10, 2011, at 11:48 AM, tony333 wrote:
X8 = c(0.42808332, 0.14058333, 0.30558333, 0.09558333, 0.01808333,
-0.09191666, -0.11441666, -0.12941666, 0.13808333, -0.31691666,
0.25308333
,-0.20941666 ,0.02808333, -0.04441667, -0.43691666)
xy.lm = lm(Y~X8)
z = predict(xy.lm,list(X8=X
X8 = c(0.42808332, 0.14058333, 0.30558333, 0.09558333, 0.01808333,
-0.09191666, -0.11441666, -0.12941666, 0.13808333, -0.31691666, 0.25308333
,-0.20941666 ,0.02808333, -0.04441667, -0.43691666)
xy.lm = lm(Y~X8)
z = predict(xy.lm,list(X8=X8))
sz = coef(xy.lm)[1]+(coef(xy.lm)[2])*X8
is th
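If the question is whether z and sz are the same thing: yes, for a simple linear model predict() on the training data equals intercept + slope * X8. A self-contained check (Y here is simulated, since the original Y was cut off):

```r
set.seed(1)
X8 <- c(0.42808332, 0.14058333, 0.30558333, 0.09558333, 0.01808333,
        -0.09191666, -0.11441666, -0.12941666, 0.13808333, -0.31691666,
        0.25308333, -0.20941666, 0.02808333, -0.04441667, -0.43691666)
Y  <- 3 + 2 * X8 + rnorm(length(X8))   # made-up response for illustration
xy.lm <- lm(Y ~ X8)
z  <- predict(xy.lm, list(X8 = X8))            # fitted values via predict()
sz <- coef(xy.lm)[1] + coef(xy.lm)[2] * X8     # same thing by hand
all.equal(unname(z), unname(sz))               # TRUE
```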
X8 = c(0.42808332, 0.14058333, 0.30558333, 0.09558333, 0.01808333,
-0.09191666, -0.11441666, -0.12941666, 0.13808333, -0.31691666, 0.25308333
,-0.20941666 ,0.02808333, -0.04441667, -0.43691666)
Y =c(370.6 , 887.6 ,3610.88333 , 435.1 , 1261.38333 ,
-741.11667,-3231.36667 ,-
regsubsets in leaps
glmnet has lasso etc. too
On Sat 10 Dec 2011 09:54:16 AM CST, JeffND wrote:
So your question is about fitting a regression model for all the subsets of
predictors? Then there would be
2^13 submodels?
Probably leaps() does what you want. This function does an all-subset
regre
By doing PCA you are trying to find a lower dimensional representation
of the major variation structure in your data. You get PC* to represent
the "new" data. If you want to know what loads on the axes then you
need to look at the loadings. These are the link between the original
data and th
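A small sketch of what "look at the loadings" means with base R's prcomp() (USArrests is just convenient built-in data):

```r
pc <- prcomp(USArrests, center = TRUE, scale. = TRUE)
pc$rotation  # the loadings: weight of each original variable on each PC
pc$x[1:3, ]  # the scores: the data re-expressed on the new axes
```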
Hi:
I have a large dataset mydata, of 1000 rows and 1000 columns. The rows
have gene names and columns have condition names (cond1, cond2, cond3,
etc).
mydata<- read.table(file="c:/file1.mtx", header=TRUE, sep="")
I applied PCA as follows:
data_after_pca<- prcomp(mydata, retx=TRUE, center=TRUE,
So your question is about fitting a regression model for all the subsets of
predictors? Then there would be
2^13 submodels?
Probably leaps() does what you want. This function does an all-subset
regression.
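A minimal sketch with regsubsets() from the leaps package, using built-in mtcars (nvmax = 10 is just an assumption about how many predictors to allow):

```r
library(leaps)
fits <- regsubsets(mpg ~ ., data = mtcars, nvmax = 10)
summary(fits)$which  # which predictors enter the best model of each size
```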
--
View this message in context:
http://r.789695.n4.nabble.com/Regression-Models-tp417327
Frank,
Have you tried the R function overlay()? It applies exactly to your
question.
Jeff
--
View this message in context:
http://r.789695.n4.nabble.com/Overlaying-density-plot-on-forest-plot-tp4179654p4180430.html
Sent from the R help mailing list archive at Nabble.com.
On 09.12.2011 14:15, Johannes Radinger wrote:
Hi,
I am trying to write my first own package.
I followed the instructions in "Creating R Packages: A Tutorial"
and "Writing R Extensions". So far everything works really
fine, the script works and even the man-pages don't show
any problems dur
JeffND nd.edu> writes:
>
> Hi folks,
>
> I am having a question about efficiently finding the integrals of a list of
> functions.
We had the same discussion last month under the heading "performance of
adaptIntegrate vs. integrate", see
https://stat.ethz.ch/pipermail/r-help/2011-November
This is not a statistical help list. Post to stats.stackexchange.com
or a similar list to have someone tutor you in regression and explain
why what you are doing is likely to produce utter nonsense.
Or, better yet, get local statistical help in person. You also clearly
need to do some reading on y
On Dec 10, 2011, at 9:13 AM, R. Michael Weylandt wrote:
Perhaps something like this (untested) -- it's going to depend on the
exact structure of your data so if this doesn't work, please use
dput() to send a plain text representation:
tapply(data, data$animal, function(d) d[, c("A01", "A02")]
Perhaps something like this (untested) -- it's going to depend on the
exact structure of your data so if this doesn't work, please use
dput() to send a plain text representation:
tapply(data, data$animal, function(d) d[, c("A01", "A02")] - d[d$time
== "d0", c("A01", "A02")] )
In short, take "data
Hello,
I have a linear model with 10 regressors, and summary() gives me the
F-statistic. After deleting a term with the smallest p-value, the new
model was obtained with an F-statistic higher than that of the full
model. Does this mean that the parsimonious model is better? The
distributions of dependent variable
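Rather than comparing F-statistics across models by eye, a direct nested-model comparison might help; a sketch with simulated data (your variables will differ):

```r
set.seed(1)
d <- data.frame(y = rnorm(50), x1 = rnorm(50), x2 = rnorm(50))
full    <- lm(y ~ x1 + x2, data = d)
reduced <- lm(y ~ x1, data = d)
anova(reduced, full)  # partial F-test: does the dropped term earn its place?
```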
Hello again Duncan,
I am sorry it took me two days to get back to your response.
Regarding reshape -
It is well presented here: http://www.jstatsoft.org/v21/i12
And there is also more information about it here: http://had.co.nz/reshape/
After playing around with it, I wrote a small "bridge" betwe
On 10/12/2011 03:54, Benjamin Tyner wrote:
Hi
Given a "cluster" of identical windows machines all running the same
version of R, are there any circumstances where using install.packages()
to install a package (say, one that doesn't have any dependencies) on
just a single machine, then copying th
Dear R User,
Please, I am new to R. I want to overlay a density plot for the
predictive-interval pooled result in a meta-analysis.
http://addictedtor.free.fr/graphiques/graphcode.php?graph=114
Regards
Frank Peter
R-help@r-project.org mailing list
I have a dataset like this:
sites years Var1 Var2
1 1960 505 3.013833
1 1961 533 4.118784
1 1962 609 14.96386
1 1963 465 -3.74409
1 1964 837 41.70164
1 1965 727 29.53478
2 1960 493 3.269235
2 1961 535 5.386015
2 1962 608 16.26244
Hi folks,
I am having a question about efficiently finding the integrals of a list of
functions. To be specific,
here is a simple example showing my question.
Suppose we have a function f defined by
f<-function(x,y,z) c(x,y^2,z^3)
Thus, f actually corresponds to three one-dimensional func
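For instance, each component can be integrated separately with base R's integrate(); a sketch assuming limits 0..1 (the real limits were cut off):

```r
f1 <- function(x) x
f2 <- function(y) y^2
f3 <- function(z) z^3
# integrate each one-dimensional component over [0, 1]
vals <- sapply(list(f1, f2, f3), function(g) integrate(g, 0, 1)$value)
vals  # 1/2, 1/3, 1/4
```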
I have a matched case-control dataset that I'm analyzing with
conditional logistic regression (clogit in the survival package). I'm
trying to conduct k-fold cross-validation on my top models, but all of
the packages I can find (CVbinary in DAAG, KVX) won't work with clogit
models. Is there any easy way to
Thank you for your response. In fact, it seems not to be adapted to the
Poisson model... :
library(evd)
uvdata <- rgpd(100, loc = 0, scale = 1.1, shape = 0.2)
M1 <- fpot(uvdata, 1, model="pp")
library(SpatialExtremes)
profile(M1)
36 matches