Hi Uwe,
My apologies.
Could someone please guide me as to what I am doing wrong in the code? I
started my code as such:
#ypred is my leave one out estimator of x
cvhfunc<-function(y,x,h){
ypred<-0
for (i in 1:n){
for (j in 1:n){
if (j!=i){
ypred<-ypred+(y[i]*k((x[j]-x[i])/h))/k((x[j]-x[i])/h)
}
}}
ypred
Hello:
So I've fit a hazard function to a set of data using
kmfit <- survfit(Surv(int, event) ~ factor(cohort))
This factor variable, "cohort", has four levels, so naturally the strata
variable has 4 values.
I can use these data to estimate the hazard rate
haz <- n.event/n.risk
and calculate the cumulative hazard.
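For what it's worth, a rough sketch of getting per-stratum hazard and cumulative-hazard estimates out of that survfit object (splitting by strata and the Nelson-Aalen-style cumsum are my assumptions about what is wanted, not something stated in the post):

library(survival)
# kmfit <- survfit(Surv(int, event) ~ factor(cohort))   # as fitted above
grp    <- rep(names(kmfit$strata), kmfit$strata)  # tag each event time with its cohort
haz    <- kmfit$n.event / kmfit$n.risk            # discrete hazard at each event time
cumhaz <- unlist(tapply(haz, grp, cumsum))        # cumulative hazard within each stratum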
Hello,
I have been using read.table to read data files into R, then rbind to
combine them into one table. The column headings are all identical, so it
should be straightforward, and it seems to be working well so far.
My question is: What is the maximum number of tables that can be combined
with rbind?
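As far as I know, rbind itself imposes no fixed limit on the number of tables; memory is the practical constraint. As a sketch, assuming the files live in a hypothetical directory "data" and are tab-separated with identical headers:

files    <- list.files("data", pattern = "\\.txt$", full.names = TRUE)
tabs     <- lapply(files, read.table, header = TRUE, sep = "\t")  # one data frame per file
combined <- do.call(rbind, tabs)                                  # bind them all at once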
Ross Culloch wrote:
Hello fellow R users!
I wonder if someone can help with what I think should be a simple question,
but I can't seem to find the answer or work it out.
My data set is as such:
Day  Time  ID  Behaviour
1    9     A1  2
1    10    A2  3
..   ..    ..  ..
4    10
On Jun 19, 2009, at 7:45 PM, muddz wrote:
Hi Uwe,
My apologies.
Could someone please guide me as to what I am doing wrong in the code? I
started my code as such:
# ypred is my leave one out estimator of x
Estimator of x? Really?
cvhfunc<-function(y,x,h){
ypred<-0
for (i in 1:n){
Hi,
I'm trying to modify this function. I want to remove the existing x-axis (the
tick marks and the values below each tick) and make it dynamic, so that I can
choose whether I want the x-axis at the top or bottom, but I can't seem to
change that. Can somebody help me?
plot.Sample <- function(x,xlab=NULL,
On Jun 20, 2009, at 12:23 AM, j0645073 wrote:
Hello:
So I've fit a hazard function to a set of data using
kmfit <- survfit(Surv(int, event) ~ factor(cohort))
This factor variable, "cohort", has four levels, so naturally the strata
variable has 4 values.
I can use this data to estimate the hazard
On Jun 20, 2009, at 1:02 AM, gug wrote:
Hello,
I have been using read.table to read data files into R, then rbind to
combine them into one table. The column headings are all identical, so it
should be straightforward, and it seems to be working well so far.
My question is: What is the maximum number of tables that can be combined
with rbind?
On Jun 20, 2009, at 7:05 AM, rajesh j wrote:
Hi,
I'm trying to modify this function. I want to remove the existing x-axis (the
tick marks and the values below each tick) and make it dynamic, so that I can
choose whether I want the x-axis at the top or bottom, but I can't seem to
change that. Can
Dear R-users,
I'd like to announce the release of the new version of package JM (soon
available from CRAN) for the joint modelling of longitudinal and
time-to-event data using shared parameter models. These models are
applicable mainly in two settings: first, when focus is on the
time-to-event
Actually, there aren't many packages for Markov regime-switching models in R
at the moment, but I just saw that a talk on this will be held at the useR!
2009 conference; maybe that can help you, though the authors don't mention
whether they use an existing package or developed new functionality.
ht
Dear R users,
unfortunately, tomorrow (Sunday) there will be a longish outage
of our mail services, notably affecting the
R- and R-SIG- mailing lists @r-project.org, i.e., those
hosted by the Stats / Math Department of ETH, stat.math.ethz.ch.
Note that svn.r-project.org will also suffer from the
timeout
Thanks David - you're right, "PC" is not very informative. I am using XP
Home with 4GB, though I don't think XP accesses more than 3GB.
From following through the FAQs and memory functions (e.g.
"memory.limit(size = NA)") it looks like R is getting about 1535 Mb at the
moment.
David Winsemius
Hi List
I have data in the following form:
Gene     TFBS
NUDC     PPARA(1) HNF4(20) HNF4(96) AHRARNT(104) CACBINDINGPROTEIN(149) T3R(167) HLF(191)
RPA2     STAT4(57) HEB(251)
TAF12    PAX3(53) YY1(92) BRCA(99) GLI(101)
EIF3I    NERF(10) P300(10)
TRAPPC3  HIC1(3) PAX5(17) PAX5(110) NRF1(1
You may be able to get more space than what you currently have, but my
understanding is that XP in a default configuration can only address
2.5 GB. You also need to consider that you will need free memory in
order to do anything useful. I am not aware of limitations imposed by
rbind per se.
Here's what I get
> head(tt)
[1] "2008-02-20 03:09:51 EST" "2008-02-20 12:12:57 EST"
[3] "2008-03-05 09:11:28 EST" "2008-03-05 17:59:40 EST"
[5] "2008-03-09 09:00:09 EDT" "2008-03-29 15:57:16 EDT"
But I can't figure out how to plot this now. plot(tt) does not appear to be
univariate. I get the sam
Dear All,
I have a data set with the following structure:
[A], [a], [B], [b]
where [A] and [B] are measurements and [a] and [b] are the associated
uncertainties. I produce [B]/[A] vs. [A] plots in R and would like to
show uncertainties as error ellipses (rather than error bars). Would
th
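A rough base-graphics sketch of axis-aligned error ellipses (the values below are made up for illustration, and the error propagation for the ratio assumes uncorrelated uncertainties; correlated errors would need rotated ellipses):

A <- c(1.0, 2.0, 3.0);  a <- c(0.10, 0.15, 0.20)   # made-up measurements and errors
B <- c(2.0, 3.5, 6.3);  b <- c(0.20, 0.30, 0.40)
r     <- B / A                                     # the ratio plotted on y
r_err <- r * sqrt((a / A)^2 + (b / B)^2)           # propagated uncertainty of the ratio
plot(A, r, xlab = "A", ylab = "B/A")
theta <- seq(0, 2 * pi, length.out = 200)
for (i in seq_along(A)) {
  lines(A[i] + a[i]     * cos(theta),              # horizontal semi-axis: error in A
        r[i] + r_err[i] * sin(theta))              # vertical semi-axis: error in B/A
}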
Seunghee Baek wrote:
Hi,
I have a problem with my bootstrapping code;
I don't know why it shows an error message.
Please see my code below:
#'st' is my original dataset.
#functions of 'fml.mlogl','pcopula.fam4','ltd','invltd' are already defined
boot.OR<-function(data,i)
{
E=data[i,]
m
Try this. We read in the data and split TFBS on "(", ") ", or ")",
giving s, and reform s into a matrix, prepending the Gene name as
column 1. Then convert that to a data frame and make the third
column numeric.
Lines <- "Gene,TFBS
NUDC,PPARA(1) HNF4(20) HNF4(96) AHRARNT(104) CACBINDINGPROTEIN(149)
T3R(16
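A sketch along those lines (the regular expression and the reshaping are my reconstruction, since the original code is cut off above, and the Lines string here is a shortened version of the data):

Lines <- "Gene,TFBS
NUDC,PPARA(1) HNF4(20) HNF4(96)
RPA2,STAT4(57) HEB(251)
EIF3I,NERF(10) P300(10)"
DF <- read.csv(textConnection(Lines), as.is = TRUE)
s  <- strsplit(DF$TFBS, "\\(|\\) |\\)")            # split into name, position, name, position, ...
long <- do.call(rbind, lapply(seq_along(s), function(i)
  cbind(Gene = DF$Gene[i], matrix(s[[i]], ncol = 2, byrow = TRUE))))
long <- data.frame(long, stringsAsFactors = FALSE)
names(long) <- c("Gene", "TFBS", "pos")
long$pos <- as.numeric(long$pos)                   # make the third column numeric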
Try this:
plot(seq_along(tt), tt)
On Sat, Jun 20, 2009 at 10:55 AM, Thomas Levine wrote:
> Here's what I get
>> head(tt)
> [1] "2008-02-20 03:09:51 EST" "2008-02-20 12:12:57 EST"
> [3] "2008-03-05 09:11:28 EST" "2008-03-05 17:59:40 EST"
> [5] "2008-03-09 09:00:09 EDT" "2008-03-29 15:57:16 EDT"
>
Change as in:
plot.Sample <- function(..., xaxloc = 1){
and in the body replace "axis(1)" by "axis(xaxloc)"
then you can call plot(., xaxloc=3) in order to plot the xaxis at
the top (while bottom is still the default).
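In outline (the body below is only a placeholder, not the real plot.Sample code; the xaxloc argument and the axis() call are the point):

plot.Sample <- function(x, xlab = NULL, ..., xaxloc = 1) {
  plot(unclass(x), xlab = xlab, xaxt = "n", ...)   # placeholder for the existing body, x-axis suppressed
  axis(xaxloc)                                     # 1 = bottom (default), 3 = top
}
# plot(mysample, xaxloc = 3)   # x-axis drawn at the top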
Uwe Ligges
David Winsemius wrote:
On Jun 20, 2009, a
This produces a plot where the x-axis is the index and the y-axis is time. It has all of
the time information on the same axis, allowing me to plot cumulative
occurrences by time (my original plan) if the times are sorted, which they
should be.
I think I'll end up using some variant of plot(tt,seq_along(tt)),
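If the goal is cumulative occurrences over time, one sketch (the call to as.POSIXct assumes the character dates shown above parse cleanly; the values here are copied from head(tt)):

tt <- as.POSIXct(c("2008-02-20 03:09:51", "2008-02-20 12:12:57",
                   "2008-03-05 09:11:28", "2008-03-05 17:59:40"))
tt <- sort(tt)
plot(tt, seq_along(tt), type = "s",       # step plot: one step per occurrence
     xlab = "Time", ylab = "Cumulative occurrences")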
Please report problems in the code to the package maintainer (CCing).
Best,
Uwe Ligges
Mikhail Titov wrote:
Hello!
I tried to contact the author of the package, but I got no reply. That is why I
am writing here. This might be useful for those who were using cts for spectral
analysis of non-unifo
If that is the situation then plot(tt) in your post could not have been
what you wanted in any case, e.g. plot(10:20)
On Sat, Jun 20, 2009 at 11:49 AM, Thomas Levine wrote:
> This produces the x-axis is the index, and the y-axis is time. It has all of
> the time information on the same axis, allow
Also use 'try' to capture the error and continue:
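For example (a sketch; the directory name and file pattern are made up):

files <- list.files("mydata", pattern = "\\.Rdata$", full.names = TRUE)
for (f in files) {
  res <- try(load(f), silent = TRUE)     # load() returns the names of the objects it created
  if (inherits(res, "try-error")) {
    message("skipping ", f)              # empty or corrupt file: report it and carry on
    next
  }
  # ... work with the loaded objects here ...
}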
On Sat, Jun 20, 2009 at 12:57 AM, Erin Hodgess wrote:
> Dear R People:
>
> I'm loading several thousand .Rdata files in sequence.
>
> If one of them is empty, the function crashes.
>
> I am thinking about using system(wc ) etc., and strsplit for th
Hi Sean,
The levels attribute of a factor can contain levels that are not
represented in the data. So, in your example we can get the desired
result by adding the missing levels via the levels argument to the
factor function:
> dfB =data.frame(f1=factor(c('a','b','b'), levels=c('a','b','c')),
>
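A self-contained sketch of that idea (dfA and the single column f1 are hypothetical; the levels= argument is the point):

dfA <- data.frame(f1 = factor(c("a", "b", "c")))
dfB <- data.frame(f1 = factor(c("a", "b", "b"), levels = c("a", "b", "c")))
table(dfB$f1)      # a=1 b=2 c=0 -- level "c" is kept even though no row has it
levels(dfB$f1)     # "a" "b" "c", matching levels(dfA$f1)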
Hello Sebastian,
> "SP" == Sebastian Pölsterl
> on Sun, 14 Jun 2009 14:04:52 +0200 writes:
SP> Hello Martin,
SP> I am plotting the silhouette of a clustering and storing it as png. When I
SP> try to store the image as png, the bars are missing. The bars are plotted
SP>
Hi David,
Thanks and I apologize for the lack of clarity.
## n is defined as the length of xdat
n <- length(xdat)
# I defined 'k' as the Gaussian kernel function
k <- function(v) {1/sqrt(2*pi)*exp(-v^2/2)}  # Gaussian kernel
# I believe ypred, in my case, was the leave-one-out estimator (I think it's
the
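For reference, a minimal self-contained sketch of a leave-one-out cross-validation criterion for Nadaraya-Watson kernel regression; this is my reading of what the code above is aiming at, not the original code (note the numerator and denominator must differ, and every brace must be closed):

k <- function(v) 1 / sqrt(2 * pi) * exp(-v^2 / 2)     # Gaussian kernel
cvhfunc <- function(y, x, h) {
  n  <- length(x)
  cv <- 0
  for (i in 1:n) {
    w     <- k((x[-i] - x[i]) / h)                    # kernel weights from all points except i
    ypred <- sum(w * y[-i]) / sum(w)                  # leave-one-out Nadaraya-Watson estimate at x[i]
    cv    <- cv + (y[i] - ypred)^2
  }
  cv / n
}
# hs <- seq(0.05, 2, by = 0.05)                       # pick h that minimises the CV score
# hs[which.min(sapply(hs, function(h) cvhfunc(y, x, h)))]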
Dear colleagues in R,
I am using rpart to build decision trees, but which decision tree algorithm does
rpart use? J48, ID3, C4.5?
Thanks
Rafael Marconi Ramos
Hello,
I'm plotting an xyplot where a continuous variable recorded every minute is plotted
on y, and time expressed as HH:MM:SS on x, as follows:
xaxis=list(tick.number=12,rot=90)
lst=list(x=xaxis)
xyplot(upt$LOAD_1 ~ upt$TIME, data=upt, type=c('g','p', 'r'), scales=lst)
On the x-axis, every time labe
On 2009.06.19 14:04:59, Michael wrote:
> Hi all,
>
> In a data-frame, I have two columns of data that are categorical.
>
> How do I form some sort of measure of correlation between these two columns?
>
> For numerical data, I just need to regress one to the other, or do
> some pairs plot.
>
> B
(I am replacing R-devel and r-bugs with r-help as addressees.)
On Sat, Jun 20, 2009 at 9:45 AM, Dr. D. P. Kreil wrote:
> So if I request a calculation of "0.3-0.1-0.1-0.1" and I do not get 0,
> that is not an issue of rounding / underflow (or whatever the correct
> technical term would be for th
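A quick illustration of that representation issue (0.1 and 0.3 have no exact binary double representation, so the differences do not cancel exactly):

x <- 0.3 - 0.1 - 0.1 - 0.1
x == 0                     # FALSE
print(x, digits = 17)      # a tiny non-zero number, about -2.8e-17
isTRUE(all.equal(x, 0))    # TRUE: equal up to a small tolerance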
You don't seem to be making any corrections or updating your code.
There remains a syntax error in the last line of cvhfunc because of
mismatched parens.
On Jun 20, 2009, at 1:04 PM, muddz wrote:
Hi David,
Thanks and I apologize for the lack of clarity.
#n is defined as the length of xda
On Sat, Jun 20, 2009 at 4:10 PM, Dr. D. P. Kreil wrote:
> Ah, that's probably where I went wrong. I thought R would take the
> "0.1", the "0.3", the "3", convert them to extended precision binary
> representations, do its calculations, and the reduction to normal
> double precision binary floats w
On Jun 20, 2009, at 2:05 PM, Jason Morgan wrote:
On 2009.06.19 14:04:59, Michael wrote:
Hi all,
In a data-frame, I have two columns of data that are categorical.
How do I form some sort of measure of correlation between these two
columns?
For numerical data, I just need to regress one to
Dear Stavros,
Thank you very much for your helpful email and your patience.
> Perhaps you are thinking about the case where intermediate results are
> accumulated in higher-than-normal precision. This technique only applies in
> very specialized circumstances, and is not available to user code in
Yes, I see the problem now! Thank you for bearing with me and for the
helpful explanations and info.
Best regards,
David.
2009/6/20 Stavros Macrakis :
> On Sat, Jun 20, 2009 at 4:10 PM, Dr. D. P. Kreil wrote:
>>
>> Ah, that's probably where I went wrong. I thought R would take the
>> "0.1", the
Hi,
I am trying to install the package xts. I am using openSUSE 11, 64-bit. When I
invoke:
install.packages("xts", dependencies = TRUE)
I receive a lot of "ERROR: compilation failed for package" errors when R
tries to install the xts dependencies. I do not understand this behavior.
I notice one thing. It com
Hi,
I have been using R for a while. Recently, I have begun converting my
package into S4 classes. I was previously using Rdoc for documentation.
Now, I am looking to use the best tool for S4 documentation. It seems that
the best choices for me are Roxygen and Sweave (I am fine with tex).
Ar
On Saturday 20 June 2009 04:36:55 pm Marc Schwartz wrote:
> On Jun 20, 2009, at 2:05 PM, Jason Morgan wrote:
> > On 2009.06.19 14:04:59, Michael wrote:
> >> Hi all,
> >>
> >> In a data-frame, I have two columns of data that are categorical.
> >>
> >> How do I form some sort of measure of correlatio
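One common approach (not necessarily the one the posters above had in mind) is a chi-squared test on the cross-tabulation, with Cramér's V from the vcd package as an effect size; a sketch with made-up column names and data:

df  <- data.frame(cat1 = sample(letters[1:3], 200, replace = TRUE),
                  cat2 = sample(LETTERS[1:4], 200, replace = TRUE))
tab <- table(df$cat1, df$cat2)   # cross-tabulation of the two categorical columns
chisq.test(tab)                  # test of independence
library(vcd)                     # install.packages("vcd") if needed
assocstats(tab)                  # reports phi, contingency coefficient, Cramér's V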