Re: [R] comparing two "half-normal production" stochastic frontier functions

2014-10-03 Thread Arne Henningsen
Dear Rainer On 3 October 2014 14:51, Rainer M Krug wrote: > I am using the function frontier::sfa (from the package frontier) to > estimate several "half-normal production" stochastic frontier functions. > > Now I want to compare the coefficients of the linear frontier function > and see if they

Re: [R] a REALLY dumb question

2014-10-03 Thread Erin Hodgess
Wow! Never thought of trace! (obviously) thanks! On Fri, Oct 3, 2014 at 3:03 PM, Greg Snow <538...@gmail.com> wrote: > Instead of making a local copy and editing, you may consider using the > trace function with edit=TRUE, this allows you to insert things like > print statements, but takes care

Re: [R] Workaround for RODBC asymmetric numeric data treatment

2014-10-03 Thread Prof Brian Ripley
On 03/10/2014 20:18, Rui Barradas wrote: Hello, Inline On 03-10-2014 19:04, Bos, Roger wrote: Andrew, I ran your code using my SQL Server database and it seems like it worked okay for me, in that I end up with "num" data types when I read the data back in. So it may be a setting on your d

Re: [R] Workaround for RODBC asymmetric numeric data treatment

2014-10-03 Thread Rui Barradas
Hello, Inline On 03-10-2014 19:04, Bos, Roger wrote: Andrew, I ran your code using my SQL Server database and it seems like it worked okay for me, in that I end up with "num" data types when I read the data back in. So it may be a setting on your database. I don't claim to know which on

Re: [R] a REALLY dumb question

2014-10-03 Thread Greg Snow
Instead of making a local copy and editing, you may consider using the trace function with edit=TRUE, this allows you to insert things like print statements, but takes care of the environment and other linkages for you (and is easy to untrace when finished). On Fri, Oct 3, 2014 at 11:12 AM, Erin H
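Greg's suggestion can be sketched like this. `trace(..., edit = TRUE)` opens an editor interactively, so the non-interactive tracer form is shown instead; the `arima` call and dataset are purely illustrative:

```r
# Non-interactive cousin of trace(..., edit = TRUE): inject a statement
# at the entry of stats::arima without copying its source.
trace("arima", quote(cat("entering arima\n")))
fit <- arima(lh, order = c(1, 0, 0))  # 'lh' is a built-in ts dataset
untrace("arima")                      # remove the tracing when finished
```

Because `trace()` patches the function in place, the environment and namespace linkages Erin's copied file lost are preserved automatically.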

Re: [R] power.t.test threading on 'power'

2014-10-03 Thread Stephen Kennedy
Dear Professor Dalgaard, I wondered if I might ask a general question on ‘power’. Please feel free to ignore. For ‘non-inferiority’ clinical trials: H0: m1 - m2 ≤ -M Ha: m1 - m2 > -M But when calculations are done (normal, t, or non-central t … still learning what this is), Ha: m1 - m2 =

Re: [R] Workaround for RODBC asymmetric numeric data treatment

2014-10-03 Thread Bos, Roger
Andrew, I ran your code using my SQL Server database and it seems like it worked okay for me, in that I end up with "num" data types when I read the data back in. So it may be a setting on your database. I don't claim to know which one. BTW, I had to install 5 or 6 separate packages to get fP

Re: [R] Help with PredicABEL

2014-10-03 Thread Peter Langfelder
You are getting a p-value, namely p=0. It's just that, when taken literally, the p-values are wrong. I'm not familiar with predictABEL, but my guess is that the p-value is below 2e-16 or some such cutoff and gets printed as zero (the means seem to be about 10 standard deviations away from zero, wh
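Peter's guess is easy to illustrate: a mean ten standard deviations from zero gives a p-value far below the precision at which results are usually rounded or printed, so a report can show it as exactly 0. A minimal sketch (not PredictABEL's actual code):

```r
p <- 2 * pnorm(-10)   # two-sided p-value for a 10-SD difference, roughly 1.5e-23
round(p, 4)           # 0 -- what a value rounded for display looks like
format.pval(p, eps = .Machine$double.eps)  # reported as below machine epsilon
```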

[R] Logistic Regression & Results per Condition

2014-10-03 Thread Eve Legrand
Hi everyone! I conducted a study for which I ran logistic regressions (and they work), but now I'd like to have the results per condition, and I couldn't find out how to get them. Let me explain: I conducted a study in which participants can perform one behavior (coded "1" if performed

[R] Help with PredicABEL

2014-10-03 Thread Evan Kransdorf
I am using PredictABEL to do reclassification. When I use it to compare two models (+/- a new marker), I get some output without a p-value. Anyone know why this might be? #BEGIN R OUTPUT NRI(Categorical) [95% CI]: 0.0206 [ 0.0081 - 0.0332 ] ; p-value: 0.00129 NRI(Continuous) [95% CI]: 0.1781 [

[R] Workaround for RODBC asymmetric numeric data treatment

2014-10-03 Thread Andrew
Note: I did report the issue below to r-sig...@r-project.org, but didn't see any reply. I'm hoping somebody on r-help can help me devise a workaround for a problem I'm having with RODBC: I use RODBC to read and write a good deal of data to SQL Server and I'd be extremely grateful if

Re: [R] a REALLY dumb question

2014-10-03 Thread Erin Hodgess
thank you!! On Fri, Oct 3, 2014 at 12:18 PM, Duncan Murdoch wrote: > On 03/10/2014 12:09 PM, Erin Hodgess wrote: > >> So please be prepared... >> >> Ok. I made a copy of the arima.r function called earima.r to put in some >> print statements. Fair enough. >> >> Now when I run earima, the .Cal

Re: [R] a REALLY dumb question

2014-10-03 Thread Duncan Murdoch
On 03/10/2014 12:09 PM, Erin Hodgess wrote: So please be prepared... Ok. I made a copy of the arima.r function called earima.r to put in some print statements. Fair enough. Now when I run earima, the .Call statements can't find the C subroutines. I know that this should be a really simple fix
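The usual fix for this symptom (assuming Duncan's truncated reply points the same way) is that a copy sourced into the global environment no longer has the stats namespace as its enclosing environment, so the C routine objects that `.Call()` references are not visible. Resetting the copy's environment restores them:

```r
# Stand-in for the edited copy; in practice you would source() earima.r first.
earima <- stats::arima
# Point the copy back at the stats namespace so .Call() can see the
# registered C routines (C_ARIMA_*, etc.).
environment(earima) <- asNamespace("stats")
fit <- earima(lh, order = c(1, 0, 0))
```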

Re: [R] c() and dates

2014-10-03 Thread Therneau, Terry M., Ph.D.
Well duh -- type "c.Date" at the command prompt to see what is going on. I suspected I was being dense. Now that the behavior is clear, can I follow up on David W's comment that redefining the c.Date function as structure(c(unlist(lapply(list(...), as.Date))), class = "Date") allows for a

[R] a REALLY dumb question

2014-10-03 Thread Erin Hodgess
So please be prepared... Ok. I made a copy of the arima.r function called earima.r to put in some print statements. Fair enough. Now when I run earima, the .Call statements can't find the C subroutines. I know that this should be a really simple fix, but I don't know how. I do know that the or

Re: [R] Using PCA to filter a series

2014-10-03 Thread David L Carlson
You can reconstruct the data from the first component. Here's an example using singular value decomposition on the original data matrix: > d <- cbind(d1, d2, d3, d4) > d.svd <- svd(d) > new <- d.svd$u[,1] * d.svd$d[1] new is basically your cp1. If we multiply it by each of the loadings, we can
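David's reconstruction can be completed into a full rank-1 approximation of the data matrix; the data below are made up for illustration:

```r
# Four noisy columns sharing one underlying signal.
set.seed(42)
d <- outer(sin(1:20), c(1, 0.8, 0.6, 0.4)) + matrix(rnorm(80, sd = 0.1), ncol = 4)
d.svd <- svd(d)
new <- d.svd$u[, 1] * d.svd$d[1]   # the shared component ("cp1")
approx1 <- new %o% d.svd$v[, 1]    # new times each loading: rank-1 version of d
```

Column j of `approx1` is `new` scaled by the j-th loading `d.svd$v[j, 1]`, i.e. each original series reconstructed from the first component alone.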

Re: [R] c() and dates

2014-10-03 Thread David Winsemius
On Oct 3, 2014, at 7:19 AM, Therneau, Terry M., Ph.D. wrote: > I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1) > > > temp1 <- as.Date(1:2, origin="2000/5/3") > > temp1 > [1] "2000-05-04" "2000-05-05" > > > temp2 <- as.POSIXct(temp1) > > temp2 > [1] "2000-05-03 19:00:00 CD

Re: [R] c() and dates

2014-10-03 Thread David Winsemius
On Oct 3, 2014, at 7:19 AM, Therneau, Terry M., Ph.D. wrote: > I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1) > > > temp1 <- as.Date(1:2, origin="2000/5/3") > > temp1 > [1] "2000-05-04" "2000-05-05" > > > temp2 <- as.POSIXct(temp1) > > temp2 > [1] "2000-05-03 19:00:00 CD

Re: [R] merge by time, certain value if 5 min before and after an "event"

2014-10-03 Thread William Dunlap
Hi Terry, Some of that combination of sort() and approx() can be done by findInterval(), which may be quick enough that you don't need the 'thinning' part of the code. Bill Dunlap TIBCO Software wdunlap tibco.com On Fri, Oct 3, 2014 at 6:05 AM, Therneau, Terry M., Ph.D. wrote: > I've attached

[R] c() and dates

2014-10-03 Thread Therneau, Terry M., Ph.D.
I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1) > temp1 <- as.Date(1:2, origin="2000/5/3") > temp1 [1] "2000-05-04" "2000-05-05" > temp2 <- as.POSIXct(temp1) > temp2 [1] "2000-05-03 19:00:00 CDT" "2000-05-04 19:00:00 CDT" So far so good. On 5/4, midnight in Greenwich it
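The apparent one-day shift is a display issue: a Date is a count of days since 1970-01-01 with no time zone, as.POSIXct() places it at midnight UTC, and printing then uses the session's local zone (CDT above, five hours behind). Forcing UTC on output shows the expected days:

```r
temp1 <- as.Date(1:2, origin = "2000/5/3")
temp2 <- as.POSIXct(temp1)               # midnight UTC under the hood
format(temp2, tz = "UTC", usetz = TRUE)  # "2000-05-04 UTC" "2000-05-05 UTC"
```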

Re: [R] Befuddled by ddply

2014-10-03 Thread Keith S Weintraub
Dennis, Thanks for the help. I am using colwise now in a couple of places. Best, KW On Oct 2, 2014, at 12:26 PM, Dennis Murphy wrote: > plyr::colwise(defCurveBreak, y = 4)(mdf) > > It took me a few minutes to realize that defCurveBreak() took a vector > as its first argument; then it made more

[R] merge by time, certain value if 5 min before and after an "event"

2014-10-03 Thread Therneau, Terry M., Ph.D.
I've attached two functions used locally. (The attachments will be stripped off of the r-help response, but the questioner should get them). The functions "neardate" and "tmerge" were written to deal with a query that comes up very often in our medical statistics work, some variety of "get the

Re: [R] strange behavior of the compare operator

2014-10-03 Thread jim holtman
FAQ 7.31 Jim Holtman Data Munger Guru What is the problem that you are trying to solve? Tell me what you want to do, not how you want to do it. On Fri, Oct 3, 2014 at 8:31 AM, Matthias Salvisberg wrote: > I had a strange behavior of a function written a few days ago. I > pointed the problem do
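For readers without the FAQ at hand, 7.31 is the floating-point representation issue; the standard demonstration and remedy:

```r
0.1 + 0.2 == 0.3                   # FALSE: neither side is exactly representable in binary
isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE: compare numerics with a tolerance instead
```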

[R] comparing two "half-normal production" stochastic frontier functions

2014-10-03 Thread Rainer M Krug
Hi I am using the function frontier::sfa (from the package frontier) to estimate several "half-normal production" stochastic frontier functions. Now I want to compare the coefficients of the linear frontier function and see if they are different. According to my stackexchange (CrossValidated) qu

[R] strange behavior of the compare operator

2014-10-03 Thread Matthias Salvisberg
I had a strange behavior of a function written a few days ago. I narrowed the problem down to the following minimal example. Can anyone explain why the following comparisons don't return the same "correct" answer? Thanks for your reply! Matthias R version 3.1.1 (2014-07-10) -- "Sock it to Me"

Re: [R] Hadley's book: paper/PDF/etc. versus github

2014-10-03 Thread Hadley Wickham
> Hi, folks. I've got a sort of coupon that would allow me to get a > copy of "Advanced R" by Hadley Wickham at no cost. OTOH, I've already > cloned the github repository, and having the "live" Rmd files (or in > this case, rmd files) is enormously more useful to me than having any > form of elec

Re: [R] help with subscript on axis lables

2014-10-03 Thread peter dalgaard
On 03 Oct 2014, at 13:15 , Andras Farkas wrote: > Dear All, > wonder if you could help with the following: we have: vals <- 1:5; names(vals) <- paste0("ke", 1:length(vals)); mp <- barplot(vals, ylim = c(0, 6), ylab = expression(paste("Hour"^"-10"))) > > I would like to make the numbers (ke1 to ke5

Re: [R] help with subscript on axis lables

2014-10-03 Thread Adams, Jean
Andras, There may be an easier way to do this, but this works. vals <- 1:5 names(vals) <- paste0("ke",1:length(vals)) mp <- barplot(vals, ylim = c(0, 6), ylab=expression(Hour^-10), names.arg="") sapply(vals, function(i) axis(1, at=mp[i], substitute(list(ke[x]), list(x=i)), tick=FALSE)) Jean On

[R] help with subscript on axis lables

2014-10-03 Thread Andras Farkas
Dear All, wonder if you could help with the following: we have: vals <- 1:5; names(vals) <- paste0("ke", 1:length(vals)); mp <- barplot(vals, ylim = c(0, 6), ylab = expression(paste("Hour"^"-10"))) I would like to make the numbers (ke1 to ke5, respectively) in the labels of the x axis a subscript. There

Re: [R] Hadley's book: paper/PDF/etc. versus github

2014-10-03 Thread Jeff Newmiller
What a non-question. Github version for free, or PDF and github versions for free. -- Jeff Newmiller

Re: [R] Using compute.es and metafor together

2014-10-03 Thread Viechtbauer Wolfgang (STAT)
Yes, that should be fine. By the way, you do not have to name the variables 'yi' and 'vi' (if this is what you meant by 'coding these as yi and vi respectively'). Indeed, the *argument names* for supplying pre-calculated effect sizes estimates and corresponding sampling variances are 'yi' and '

[R] Hadley's book: paper/PDF/etc. versus github

2014-10-03 Thread Michael Hannon
Hi, folks. I've got a sort of coupon that would allow me to get a copy of "Advanced R" by Hadley Wickham at no cost. OTOH, I've already cloned the github repository, and having the "live" Rmd files (or in this case, rmd files) is enormously more useful to me than having any form of electronic or

[R] Using compute.es and metafor together

2014-10-03 Thread Purssell, Ed
Dear All For mathematically challenged people such as myself; is it ok to use the compute.es package to calculate effect sizes and then import the effect sizes d and variances of d into metafor, coding these as yi and vi respectively and then running the meta-analysis? This seems easier beca

Re: [R] merge by time, certain value if 5 min before and after an "event"

2014-10-03 Thread PIKAL Petr
Hi So if I understand correctly, you want to spread value "high" to times 5 minutes before its occurrence and 5 minutes after its occurrence. If your dates are not extremely big you can prepare its expanded version and use code suggestions I sent previously myframe <- data.frame (Timestamp=c("

Re: [R] merge by time, certain value if 5 min before and after an "event"

2014-10-03 Thread PIKAL Petr
Hi If Jean's guess is correct, after simple changing Timestamp to real date see ?strptime and ?as.POSIXct you can use result <- merge(mydata, myframe, all=TRUE) use function ?na.locf from zoo package to fill NAs in Event column and get rid of all rows with NA in location e.g. by ?complete.ca
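Petr's recipe, sketched end-to-end on toy data (assumes the zoo package is installed; the column names and values are invented):

```r
library(zoo)
mydata  <- data.frame(Timestamp = 1:6, location = paste0("loc", 1:6))
myframe <- data.frame(Timestamp = c(2, 5), Event = c("high", "low"))
result  <- merge(mydata, myframe, all = TRUE)        # NA Event where no match
result$Event <- na.locf(result$Event, na.rm = FALSE) # carry last Event forward
result[complete.cases(result), ]                     # drop rows still holding NAs
```

Rows at Timestamps 3 and 4 inherit "high" from Timestamp 2, and Timestamp 6 inherits "low"; only the leading row with no prior Event is dropped.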

Re: [R] Identifying Values in Dataframe

2014-10-03 Thread PIKAL Petr
Hi, maybe

which(abs(data) > 0.3, arr.ind = TRUE)

                                     row col
Loss_EV_Amygdala_SF_left_hemisphere   15   2
Loss_EV_Amygdala_SF_left_hemisphere   15   3
Loss_PE_Amygdala_SF_right_hemisphere   5   7
Loss_PE_Amygdala_SF_left_hemisphere   13   9

gives you what you want. > se