For your second question: why not write your own function using if(),
is.vector(), is.matrix() and is.data.frame(), and combine them to return
different values accordingly?
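A minimal sketch of that idea (the return values are arbitrary placeholders):
what_is_it <- function(x) {
  if (is.data.frame(x)) {
    "data.frame"
  } else if (is.matrix(x)) {
    "matrix"
  } else if (is.vector(x)) {
    "vector"
  } else {
    "other"
  }
}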
Regards
VP
-Original Message-
From: Chintanu
Date: Mon, 15 Aug 2011 15:03:4
On Sun, Aug 14, 2011 at 10:03 PM, Chintanu wrote:
> Hello Josh,
>
> Thank you - that worked. Also, thanks to Vijayan Padmanabhan for your
> effort.
You are quite welcome.
>
> Further, please allow me to ask 2 quick questions:
>
> 1. The default "cor" takes Pearson correlation. How would I change
Hello Josh,
Thank you - that worked. Also, thanks to Vijayan Padmanabhan for your
effort.
Further, please allow me to ask 2 quick questions:
1. The default "cor" takes Pearson correlation. How would I change it to,
say, *Spearman* correlation? Something like the following doesn't work:
> apply(
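A minimal illustration that cor() (and the sapply() call suggested earlier in
this thread) accepts a method argument; x and y below are placeholders of
equal length, not the thread's data:
x <- rnorm(20)
y <- rnorm(20)
cor(x, y, method = "spearman")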
On Sun, Aug 14, 2011 at 8:41 PM, Chintanu wrote:
> Hello Joshua,
>
> I realize my explanation was not clear so far. Let me make another attempt
> here to simplify things:
>
> I have a dataframe ("file") containing 8 samples (in columns). Those
> samples' results (numeric values) are available in the
Is the column dimension of file[, 3:10] 9?
Your LGD has length 8.
That could be the problem.
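A quick way to inspect the dimensions actually passed to cor() (a sketch using
the objects named in the thread; cor(x, y) needs length(y) to equal nrow(x)):
dim(file[1:47231, 3:10])    # rows and columns of x
length(rep(LGD, 47231))     # length of y as constructed in the call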
Regards
Vijayan Padmanabhan
-Original Message-
From: Chintanu
Sender: r-help-bounces@r-project.org
Date: Mon, 15 Aug 2011 13:41:13
To: Joshua Wiley
Cc:
Hello Joshua,
I realize my explanation was not clear so far. Let me make another attempt
here to simplify things:
I have a dataframe ("file") containing 8 samples (in columns). Those
samples' results (numeric values) are available in the dataframe's rows.
LGD is another vector.
LGD <- c(11.6, 12.
I am new to R and have a naive question about the boa package.
I loaded the package as instructed, and after entering
R > boa.menu()
I never got the ready prompt back (R hung).
Can someone help? Thanks.
On Sun, Aug 14, 2011 at 7:55 PM, Darius H wrote:
>
>
>
>
>
>
>
>
>
The above whitespace comes about from posting to this list (which
converts emails to plain text) in HTML. Please set future emails to
plain text (it's under text formatting or something like that in
Hotmail).
> Hi everyone,
>
> Do
On Sun, Aug 14, 2011 at 7:21 PM, Chintanu wrote:
> Hi Joshua,
>
> SORRY for not making that clear. I wish to have the correlation values
> between each column of my "file" with the "LGD". For example:
>
> cor (Column 1, LGD)
> cor (column 2, LGD) ... so on.
Okay, you need to make a tractable exam
Hi everyone,
Does anyone know how I can use the predict() function or anything similar in
various packages to forecast future values of a system of equations in a list?
I keep getting an error message when I try to use the predict function and I
cannot find anything on the help arc
Hi Joshua,
SORRY for not making that clear. I wish to have the correlation values
between each column of my "file" with the "LGD". For example:
cor (Column 1, LGD)
cor (column 2, LGD) ... so on.
The first one you provided produces an error:
> sapply(file[1:47231, 3:10], FUN = cor, y =
Hi Chintanu,
Do you want the correlation of columns 3:10 of file with the y vector
or do you want a correlation matrix of all variables?
## correlation between cols 3:10 and y
sapply(file[1:47231, 3:10], FUN = cor, y = rep(LGD, 47231), method = "pearson")
## correlation matrix
cor(cbind(file[1:4
Hi,
I am not sure how to fix the following error.
LGD <- c(11.6, 12.3, 15.8, 33.1, 43.5, 51.3,
67.3, 84.9)
cor (x=(file [1:47231,3:10]), y= rep (LGD, 47231), method = "pearson")
Error in cor(x = (file[1:47231, 3:10]), y = rep(LGD, 47231), method =
"pearson") :
incompatible dimensions
Perhaps we should endeavor not to provide any kind of help for
such homework problems, especially to untraceable e-mail addresses
(such as gmail, hotmail, yahoo and the like)? After all, conceptualizing
a solution before writing R code is very much a part of the learning
process, no?
Ranjan
On S
If you are going to use the date values in further numeric
calculations, it is sometimes advisable to convert them to numeric:
> x1 <- as.POSIXct(c('2011-08-11 12:00', '2011-08-14 15:15'))
> difftime(x1[2], x1[1], units = 'days')
Time difference of 3.135417 days
> as.numeric(difftime(x1[2], x1[1], units = 'days'))
Hi:
Here's another approach using the reshape2 package. I called your data
frame dat in the code below.
library('reshape2')
mdat <- melt(dat, measure = c('y', 'f'))
acast(mdat, e1 ~ variable ~ e2, fun = sum, margins = 'e1')
, , con
         y   f
can     21 108
france  21 114
italy   21 126
usa
You were using difftime incorrectly; look at the help page.
> x1 <- as.POSIXct(c('2011-08-11 12:00', '2011-08-14 15:15'))
> x1
[1] "2011-08-11 12:00:00 EDT" "2011-08-14 15:15:00 EDT"
> ceiling(difftime(x1[2], x1[1], units = 'days'))
Time difference of 4 days
>
On Sun, Aug 14, 2011 at 9:33 AM, Ji
Hi:
This definitely sounds like a homework problem but it's very easy to
do in R if you think about it the right way.
(1) Generate a matrix of random numbers. This is easier than it looks,
since you can generate a long vector of random numbers and then
reshape it into a matrix. Typically, a row c
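A minimal sketch of step (1); the 10 x 5 dimensions are placeholders:
m <- matrix(rnorm(10 * 5), nrow = 10, ncol = 5)
dim(m)   # 10 5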
This is a difficult task. If X and Y are bivariate normal then the regression
E[Y|X] is linear. If they are bivariate alpha-stable the regression is
nonlinear. Have a look at the material on multivariate alpha-stable
distributions in Uchaikin and Zolotarev (1999), Chance and Stability, VSP.
There a
Hi eric,
Try
lapply(with(x, split(x, e2)), function(l) {
  r <- with(l, aggregate(list(y, f), list(e1), sum))
  colnames(r) <- c('e1', 'y', 'f')
  r
})
HTH,
Jorge
On Sun, Aug 14, 2011 at 1:20 PM, eric <> wrote:
> I have a data frame called test shown below that i would like to summarize
> in
> a
I hope this will help you get going
b <- sapply(unique(test$e2), function(x) {
  out <- aggregate(cbind(y, f) ~ e1, subset(test, e2 == x), "sum")
  out <- rbind(out, data.frame(e1 = "total", y = sum(out$y), f = sum(out$f)))
  out <- list(out)
  names(out) <- x
  out
})
> b
$std
e1 y f
1
Hey guys,
I am new to R and apologize for the basic question - I do not mean to
offend.
I have been using R to perform PCA on a set of several hundred objects using a
set of 30 descriptors. From the results generated by prcomp(), is there a
way to print a matrix showing the contributions of the orig
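A minimal sketch using a built-in data set; the rotation matrix returned by
prcomp() holds the loadings, i.e. how much each original variable contributes
to each principal component:
pc <- prcomp(USArrests, scale. = TRUE)
pc$rotation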
Hi:
Use replace():
replace(initial, initial < 5, 0)
sample1 sample2 sample3
1900 0 8 0
1901 5 6 5
1902 0 0 0
1903 8 0 0
1904 0 7 0
1905 0 5 6
replace(initial, initial >= 5, 0)
Thanks, but
1) as input for the sample size estimation only an AUC is given - and the output
of the study should be an AUC, too. So I thought this should be the right way.
2) I read that, e.g. in PASS, they do a sample size calculation for AUC. Are
they wrong?
Sorry for asking further, but
Berend Hasselman wrote:
>
>
> B. Jonathan B. Jonathan wrote:
>>
>> Hi there, I have the following equations to be solved for a and b:
>>
>> a/(a+b) = x1
>> ab/((a+b)^2 (a+b+1)) = x2
>>
>> Is there any direct function available to solve them without
>> disentangling them manually? Thanks for your
B. Jonathan B. Jonathan wrote:
>
> Hi there, I have the following equations to be solved for a and b:
>
> a/(a+b) = x1
> ab/((a+b)^2 (a+b+1)) = x2
>
> Is there any direct function available to solve them without
> disentangling them manually? Thanks for your help.
>
There is a package nleqslv that solves systems of nonlinear equations.
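A minimal sketch of how nleqslv might be applied here; x1, x2 and the starting
values c(1, 1) are illustrative only:
library(nleqslv)
x1 <- 0.3
x2 <- 0.01
f <- function(p) {
  a <- p[1]; b <- p[2]
  c(a / (a + b) - x1,
    a * b / ((a + b)^2 * (a + b + 1)) - x2)
}
nleqslv(c(1, 1), f)$x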
I have a data frame called test, shown below, that I would like to summarize in
a particular way:
I want to show the column sums (columns y, f) grouped by country (column
e1). However, I'm looking for the data to be split according to column e2.
In other words, two tables of sum by country. One tab
Hi there, I have the following equations to be solved for a and b:
a/(a+b) = x1
ab/((a+b)^2 (a+b+1)) = x2
Is there any direct function available to solve them without
disentangling them manually? Thanks for your help.
Use Date instead of POSIXct. Since your data is already POSIXct you need to
convert to character and then Date before you use difftime.
> a <- as.POSIXct("2011-08-02 00:01:00")
> b <- as.POSIXct("2011-08-01 23:59:00")
>
as.Date(as.character(a, format="%Y-%m-%d")) - as.Date(as.character(b, format="%Y-%m-%d"))
Hi
On 8/13/2011 9:49 AM, Prof Brian Ripley wrote:
I think you are doing this in the wrong order. You need to set the gpar
on the viewport, then compute the grid.rect.
grid.rect(width=unit(1,'strwidth','Some text'),draw=T, gp=gpar(font=2))
grid.text('Some text',y=0.4,gp=gpar(font=2),draw=T)
i
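A minimal sketch of the suggested order, assuming the aim is to size the
rectangle using the bold font's string width:
library(grid)
pushViewport(viewport(gp = gpar(font = 2)))   # set the font on the viewport first
grid.rect(width = unit(1, "strwidth", "Some text"))
grid.text("Some text", y = 0.4)
popViewport()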
Hi all, does anyone know of a function that would calculate AIC or AICc for a
system of equations. I have several systems and I have individual AIC values
but I need a global one to assess amongst several systems of equations. The
systems are very similar to, but not exactly like VAR groups of
Here's an example of using levels()<- to rename and combine factor levels:
InsectSprays2 <- InsectSprays
levels(InsectSprays2$spray)
levels(InsectSprays2$spray) <- list(new1 = c("A", "C"), YEPS = c("B", "D", "E"), LASTLY = "F")
levels(InsectSprays2$spray)
InsectSprays2
So for your data, try...
levels (Data1$Site) <- list(F
I believe the easiest way to get something like the provided graph is to do
a line plot based on cut/factor frequency, for example:
x = rnorm(500);
x.F = cut(x,c(-Inf,-2,-1,0,1,2,Inf))
plot(table(x.F),type="l")
There are of course many variations on how you can set this up, but if you
are familia
Dear Helplist:
I am trying, unsuccessfully, to rename levels of a factor in a dataframe. The
dataframe consists of two factor variables and one numeric variable as follows:
Factor Site has 2 levels AB and DE, factor Fish has 30 levels, 15 associated
with each Site, e.g. 1-1, 1-2, ..., 2-1, 2
Hello all!!!
I want to measure the duration of events (given a start and an end time).
The catch is that I require the output in calendar days. This means:
02-Jan-2011 00:01:00 minus 01-Jan-2011 23:59:00 should be 1 day (although
the real time difference is only 2 minutes)
My data is the follo
I am using the function optim and I get the error message
ABNORMAL_TERMINATION_IN_LNSRCH. A reason for this could be a scaling problem.
Thus, I used parscale to scale the parameters, but I still get the
error message. For example, with parscale=c(rep(1,n), 0.01,1,0.01):
return(optim(c(
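A minimal, self-contained sketch of passing parscale via control; the quadratic
objective and the values below are placeholders, not the poster's model:
fn <- function(p) sum((p - c(1, 0.01, 1, 0.01))^2)
fit <- optim(par = c(0.5, 0.05, 0.5, 0.05), fn = fn, method = "L-BFGS-B",
             control = list(parscale = c(1, 0.01, 1, 0.01)))
fit$convergence   # 0 indicates successful convergence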
Hello!
I'm already using "fBasics" to generate alpha-stable variables or compute their
density or distribution function, but do you know where I could find R tools
for computing the correlation and fitting a regression between two
alpha-stable variables?
Thanks in advance !
Kind regards,
Pascal
Thanks very much. The cut function is exactly what I was looking for. For the
graph I forgot to include an example (picture attached). I think it is
something different from what you have shown in the examples. I want to plot
all the data in a line plot - exactly how it is shown in the attached
Please read the posting guide; this is not a homework help list. Thus, further
discussion of this thread is off topic here. Perhaps you can correspond off-list.
---
Jeff Newmiller
Hi Maggy,
Sorry, I cannot help you. But as I am pretty new to R, I am interested
in this assignment. May I ask you to post it?
z
On Sun, Aug 14, 2011 at 3:16 PM, maggy yan wrote:
> my data looks like this:
>
> PM10 Ref UZ JZ WT RH FT WR
> 1 10.973195 4.33887
This email, or rather one that includes a reproducible example, belongs in the
email box of the author of the library, as identified in the library help file.
---
Jeff Newmiller
At 17:01 11/08/2011, Kathie wrote:
almost forgot. In fact, I want to generate correlated Poisson random vectors.
Kathie
Typing "generate random Poisson" into the R Site Search gives a number
of hits. Does any of them do what you require?
Thank you anyway
Actually
sapply(x %% 1, function(x) isTRUE(all.equal(x, 0)))
seems to be the way to go.
Uwe Ligges
On 14.08.2011 07:17, Ken wrote:
How about something like:
If(round(x)!=x){zap} not exactly working code but might help
Ken
On Aug 13, 2554 BE, at 3:42 PM, Paul Johnson wrote:
A client
my data looks like this:
PM10 Ref UZ JZ WT RH FT WR
1 10.973195 4.338874 nein Winter Dienstag ja nein West
26.381684 2.250446 nein Sommer Sonntag nein ja Süd
3 62.586512 66.304869 ja Sommer Sonntag nein nein Ost
45.590101 8.526152
Dear list,
perusing the GMane archive shows that the issue with XML 3.4.x still bugs
odfWeave users.
I just checked that the newer XML 3.4.2 version still gives the same
problem. Using it to weave a bit of documentation written with LibreOffice
3.3.3 (current in Debian testing) leads me to a 19
Eric:
Create another column using grep() and a regular expression of your choice,
then subset based on that column.
Jorge:
The OP wants an inexact (prefix) match.
P.S. I'd use an RDBMS and SQL to pull the data of interest.
Mikhail
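A minimal sketch of the extra-column idea, reusing the thread's
zeespan/customer names; the pattern is illustrative:
zeespan$excluded <- grepl("^(ibm|exxon|sears)", zeespan$customer)
subset(zeespan, !excluded)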
On 08/14/2011 02:20 AM, Jorge Ivan Velez wrote:
> Hi eric,
>
> See
>
> R> ?"%in%"
>
> and
Hi,
Try
ifelse(initial < 5, initial, 0)
ifelse(initial >= 5, initial, 0)
and take a look at ?ifelse
HTH,
Jorge
On Sat, Aug 13, 2011 at 9:02 PM, andrewjt <> wrote:
> This is what I am starting with:
>
> initial<- matrix(c(1,5,4,8,4,4,8,6,4,2,7,5,4,5,3,2,4,6), nrow=6,
> ncol=3,dimnames=list(c(
Perhaps this:
matches <- grep("^(ibm|sears|exxon)", zeespan$customer)
zee <- zeespan[-matches, ]   # drop the rows that do match
t
On Aug 14, 2011, at 12:44 AM, eric wrote:
> I have a dataframe zeespan. One of the columns has the name "customer". The
> data in the customer column is text. I would like to return a subset of
Hi eric,
See
R> ?"%in%"
and try the following (untested):
subset(zeespan, !customer %in% c("ibm" , "exxon" , "sears") )
HTH,
Jorge
On Sat, Aug 13, 2011 at 7:44 PM, eric <> wrote:
> I have a dataframe zeespan. One of the columns has the name "customer". The
> data in the customer column is
This is what I am starting with:
initial<- matrix(c(1,5,4,8,4,4,8,6,4,2,7,5,4,5,3,2,4,6), nrow=6,
ncol=3,dimnames=list(c("1900","1901","1902","1903","1904","1905"),
c("sample1","sample2","sample3")))
And I need to apply a filter (in this case, any value <5) to give me one
dataframe with only the
I have a dataframe zeespan. One of the columns has the name "customer". The
data in the customer column is text. I would like to return a subset of the
dataframe with all rows that DON'T begin with either "ibm" or "exxon", or
"sears" in the customer column.
I tried subset(zeespan, customer !
Thank you guys!
Hi Mark,
I am interested in doing what you are asking about. Did you figure out an
easy way to do this? I am interested in performing many two- and more-factor
ANOVAs on a dataframe. I am still a little new to R but I know enough
to get me this far.
thanks
Don