weights and then use a weighted Cox model.
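As a sketch of that kind of fit (assuming the finegray()/coxph() workflow from the survival package, which implements the censoring-weighted Fine-Gray model; the data and variable names below are made up for illustration):

```r
library(survival)

# Toy competing-risks data: censoring must be the first factor level
set.seed(7)
d <- data.frame(
  time  = rexp(200),
  event = factor(sample(0:2, 200, replace = TRUE),
                 labels = c("censor", "ev1", "ev2")),
  x     = rnorm(200)
)

# finegray() builds the weighted data set; coxph() with those weights
# then fits the subdistribution-hazard (Fine-Gray) model for "ev1"
fg  <- finegray(Surv(time, event) ~ ., data = d)
fit <- coxph(Surv(fgstart, fgstop, fgstatus) ~ x, weights = fgwt, data = fg)
```

The fitted coxph object can then be fed to the usual downstream tools (e.g. nomogram construction).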
best regards,
Ronald Geskus
Raja, Dr. Edwin Amalraj wrote:
> Dear Geskus,
>
> I want to develop a prediction model. I followed your paper and analysed
> through the weighted coxph approach. I can develop a nomogram based on the
> final model
doi:10.1111/j.1541-0420.2010.01420.x, or the vignette
"Multi-state models and competing risks" in the survival package.
best regards,
Ronald Geskus, PhD
head of biostatistics group
Oxford University Clinical Research Unit
Ho Chi Minh city, Vietnam
associate professor, University of Oxford
Hi folks,
I posted the message below as a new issue on the sparklyr web page at GitHub
over a week ago, but have not gotten any reply back. So I am posting here, in
the hope somebody on this list can provide guidance. I really want to get R
working in Spark on our local Linux cluster. Eager t
.
Ron
Ronald C. Taylor, Ph.D.
Computational Biology & Bioinformatics Group
Pacific Northwest National Laboratory (U.S. Dept of Energy/Battelle)
Richland, WA 99352
phone: (509) 372-6568, email: ronald.tay...@pnnl.gov
web page: http://www.pnnl.gov/science/staff/staff_info.asp?staff_num=7048
---
> spark_install(version = "1.6.2")
Installing Spark 1.6.2
[109] "spark_session"
[110] "spark_uninstall"
[111] "spark_version"
[112] "spark_version_from_home"
[113] "spark_web"
[114] "spark_write_csv"
[115] "spark_write_json"
[116] "spark_write_parquet"
[117] "tbl_cache"
[118] "tbl_uncache"
made some mistake
specifying the (negative) log-likelihood (but I just don't see it). I also
actually don't care much (at the moment) for estimating sigma but I
don't know of a way to specify (and estimate) the (negative)
log-likelihood without estimating sigma.
Hi all,
I am trying to find out how a certain functionality is implemented in R,
or rather, what a certain function does exactly.
Specifically I am interested in multivariate kernel density estimation.
I found the "ks" package and its "kde" function. Usually, my preferred
way to "look under the hood"
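For looking under the hood of a package function, the standard tools are printing the function, getS3method() for method dispatch, and the package namespace for unexported helpers. A minimal sketch using a base function (the same calls work for ks::kde once ks is installed, e.g. print(ks::kde) and ls(getNamespace("ks"))):

```r
# Print the source of an exported function
print(density)

# For S3 generics, fetch the source of a specific method
getS3method("density", "default")

# Unexported helpers live in the package namespace
head(ls(getNamespace("stats")))
```

Fully compiled internals (.Call/.C entry points) require reading the package's C sources, which are in the source tarball on CRAN.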
has taught me some useful things about working with R in
Linux: in particular, I learned that one needs to work patiently and
perseveringly through the process of figuring out what dependencies are
impeding the desired installation and how these dependencies can be
For an edX course, MIT's "The Analytics Edge", I need to install the
"caret" package that was originated and is maintained by Dr. Max Kuhn of
Pfizer. So far, every effort I've made to try to
install.packages("caret") has failed. (I'm using R v. 3.1.3 and RStudio
v. 0.98.1103 in LinuxMint 17.1)
Thank you David and Thierry, your answers helped a lot!
Kind regards,
RK.
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
Dear all,
I have a problem when trying to present the results of several
regressions. Say I have run several regressions on a dataset and saved
the different results (as in the mini example below). I then want to
loop over the regression results in ord
FWIW, both Excel 2007 and LibreOffice 4.2 yield the correct variance for
the numbers in Ranjan Maitra's HW problem for incoming students in R.
Namely, both these programs yield a sample variance of 0.28
(rounded to 10 decimal digits).
Ronald Wyllys
(And
I've even given a post-Christmas gift copy of the book to a son-in-law
who uses R in his job in the oil business.)
Ronald Wyllys
Emeritus Professor
The University of Texas at Austin
I have a set of data with ~ 250,000 observations summarized in ~ 1000 rows that
I'm trying to analyze with mlogit. Based on the discussion in
https://stat.ethz.ch/pipermail/r-help/2010-June/241161.html
I understand that using weights= does not (fully) do what I need. I tried
expanding my data t
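Expanding the summarized rows back into one row per underlying observation can be done by repeating row indices with rep(); the column names below are assumptions, not the poster's actual variables:

```r
# Summarized data: each row stands for `count` identical observations
d <- data.frame(choice = c("a", "b", "a"),
                xvar   = c(1.2, 0.7, 0.9),
                count  = c(3, 2, 4))

# One row per observation: repeat each row index `count` times
d.long <- d[rep(seq_len(nrow(d)), d$count), c("choice", "xvar")]
nrow(d.long)  # 9
```

The expanded data frame can then be reshaped into the long choice format that mlogit.data() expects.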
k out the details.
Thanks a lot anyway for solving my coding problem!
RK
On 22.07.2014 15:26, peter dalgaard wrote:
>
> On 22 Jul 2014, at 06:04 , David Winsemius
> wrote:
>
>>
>> On Jul 21, 2014, at 12:10 PM, Ronald Kölpin wrote:
>>
>>> Dear R-Co
gaps[, 3] <- log(gaps[, 1])
gaps[, 4] <- log(gaps[, 2])

nll <- function(mu, sigma) {
    if (sigma >= 0 && mu >= 0) {
        -sum(log(pnorm(gaps[, 3], mean = mu, sd = sigma) -
                 pnorm(gaps[, 4], mean = mu, sd = sigma)))
    } else {
        NA
    }
}

fit <- mle(nll, start = list(mu = 0, sigma = 1), nobs = 10)
pri
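On the question of specifying the negative log-likelihood without estimating sigma: stats4::mle() has a `fixed` argument that holds named parameters constant while the rest are optimised. A minimal sketch with simulated interval data (the data and the fixed value sigma = 1 are assumptions for illustration):

```r
library(stats4)

# Simulated interval bounds around normally distributed values
set.seed(1)
x  <- rnorm(10, mean = 2, sd = 0.5)
lo <- x - 0.1
hi <- x + 0.1

# Interval-censored normal negative log-likelihood
nll <- function(mu, sigma) {
  -sum(log(pnorm(hi, mean = mu, sd = sigma) -
           pnorm(lo, mean = mu, sd = sigma)))
}

# Only mu is estimated; sigma is pinned at 1 via `fixed`
fit <- mle(nll, start = list(mu = 0), fixed = list(sigma = 1), nobs = 10)
coef(fit)
```

Because sigma never varies, the non-negativity guard from the thread's original function is not needed here.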
Very cool! Thanks Berend and arun.
R.
On Wed, Oct 9, 2013 at 2:49 PM, Berend Hasselman wrote:
>
> On 09-10-2013, at 13:50, Ronald Peterson wrote:
>
> > Hi,
> >
> > New to R here. Lots of fun. Still rather green though.
> >
> > I'd like to sel
Thanks. That's not quite what I'm looking for, but it's good to see
different ways to slice and dice data.
In my example, the one duplicated (x, y) pair would be (9, 9), so I would want to
reduce the original list to
> xyz
$x
[1] 8 6 9 0 0 3 9 7 1
$y
[1] 1 2 9 5 1 2 0 9 2
$z
[1] 5 6 9 0 5 1 1 7 3
and if
Hi,
New to R here. Lots of fun. Still rather green though.
I'd like to select unique items from a list that looks like this (for
example):
> xyz
$x
[1] 8 6 9 0 0 3 9 7 1 9
$y
[1] 1 2 9 5 1 2 0 9 2 9
$z
[1] 5 6 9 0 5 1 1 7 3 4
I'd like to select unique (x, y) pairs, while preserving the association with z.
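A minimal sketch of one way to do this with duplicated(), which marks repeats of earlier rows when applied to a data frame of the (x, y) columns:

```r
xyz <- list(
  x = c(8, 6, 9, 0, 0, 3, 9, 7, 1, 9),
  y = c(1, 2, 9, 5, 1, 2, 0, 9, 2, 9),
  z = c(5, 6, 9, 0, 5, 1, 1, 7, 3, 4)
)

d    <- as.data.frame(xyz)
keep <- !duplicated(d[c("x", "y")])  # TRUE for the first occurrence of each (x, y)
uniq <- as.list(d[keep, ])
```

Row 10, (9, 9, 4), is dropped because (9, 9) first appears at row 3; the z value of the first occurrence is the one retained.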
I am new to R and starting to explore its functionality. I wondered if anyone
could advise whether R supports non-linear canonical correlation and/or the
specification of models using alternating least squares?
Thanks
Ron
There is a package called 'tempdisagg' on CRAN that offers a similar
functionality.
0.813815007235, 344.155248252501,
355.036094643507, 369.731635108215, 413.34522726085, 437.163468053688,
486.081993289409, 511.989800874079, 513.42775947575)
So I thought I could just use these values instead of the other ones,
but none of my settings worked.
Would it help to use these data?
Thanks
Ronald
on the screen. Say
>>
>>> summary(horton.nlme)
>>>
>>
>> give the output back to console
>>
>>> sink()
>>>
>>
> And one line that is equivalent to those three lines:
>
> capture.output(summary(horton.nlme), file='your file.txt')
I'm trying to export the results of my summary data for the object
horton.nlme, but failing miserably. Running summary(horton.nlme) works
fine, but both write.table and write.csv return the error "cannot coerce
class 'c("summary.lme", "nlme", "lme")' into a data.frame".
I know I can copy and paste
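A runnable sketch of both approaches from the thread, using a stand-in lm model since horton.nlme is not available here; the file names are arbitrary:

```r
# Stand-in model in place of horton.nlme
fit <- lm(mpg ~ wt, data = mtcars)

# capture.output() writes the printed summary straight to a file
capture.output(summary(fit), file = "summary.txt")

# Equivalent with sink(): redirect, print, restore
sink("summary2.txt")
summary(fit)
sink()
```

This works because summary() output is printed text, not a rectangular object, which is why write.table()/write.csv() refuse to coerce it to a data frame.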
a1 <- c(0.2, 100)
opt <- optim(a1, opt.power, method="BFGS", x=xm, y=ym)
but no optimisation of the parameters in a1 takes place.
Any ideas?
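One common cause is the objective function's signature: optim() passes the parameter vector as the first argument. Since opt.power is not shown in the thread, here is a hypothetical version with the expected signature, fitting y ~ a * x^b by least squares on simulated data:

```r
# Hypothetical objective: sum of squared residuals for y ~ a * x^b
opt.power <- function(par, x, y) {
  a <- par[1]
  b <- par[2]
  sum((y - a * x^b)^2)
}

set.seed(42)
xm <- 1:20
ym <- 2 * xm^1.5 + rnorm(20)

a1  <- c(0.2, 1)  # starting values for a and b
opt <- optim(a1, opt.power, method = "BFGS", x = xm, y = ym)
opt$par           # moves well away from the start values
```

Note also that a start value like b = 100 makes x^b astronomically large, so the surface is effectively degenerate at the starting point; a plausible reason the original run appeared to do nothing.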
--
Ciao
Ronald