Dear Rainer
On 3 October 2014 14:51, Rainer M Krug wrote:
> I am using the function frontier::sfa (from the package frontier) to
> estimate several "half-normal production" stochastic frontier functions.
>
> Now I want to compare the coefficients of the linear frontier function
> and see if they
Wow! Never thought of trace! (obviously)
thanks!
On Fri, Oct 3, 2014 at 3:03 PM, Greg Snow <538...@gmail.com> wrote:
> Instead of making a local copy and editing, you may consider using the
> trace function with edit=TRUE, this allows you to insert things like
> print statements, but takes care
On 03/10/2014 20:18, Rui Barradas wrote:
Hello,
Inline
On 03-10-2014 19:04, Bos, Roger wrote:
Andrew,
I ran your code using my SQL Server database and it seems like it
worked okay for me, in that I end up with "num" data types when I read
the data back in. So it may be a setting on your d
Instead of making a local copy and editing, you may consider using the
trace function with edit=TRUE, this allows you to insert things like
print statements, but takes care of the environment and other linkages
for you (and is easy to untrace when finished).
On Fri, Oct 3, 2014 at 11:12 AM, Erin H
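The suggestion above can be sketched with a toy function (f here is hypothetical, not from the thread; interactively you would run trace(arima, edit = TRUE) to type print statements straight into the body):

```r
# Toy sketch of trace(): instrument a function without editing a copy,
# so its environment and any .Call() linkage stay intact.
f <- function(x) x^2
trace(f, tracer = quote(message("entering f")))  # runs on every entry
f(3)        # emits the message, still returns 9
untrace(f)  # removes the instrumentation cleanly
```

With edit = TRUE instead of tracer, trace() opens the function body in an editor so arbitrary print() calls can be inserted, and untrace() discards them when finished.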
Dear Professor Dalgard,
I wondered if I might ask a general question on ‘power’. Please feel free to
ignore.
For ‘non-inferiority’ clinical trials:
H0: m1 - m2 ≤ -M
Ha: m1 - m2 > -M
But when calculations are done (normal, t, or non-central t … still learning
what this is),
Ha: m1 - m2 =
Andrew,
I ran your code using my SQL Server database and it seems like it worked okay
for me, in that I end up with "num" data types when I read the data back in.
So it may be a setting on your database. I don't claim to know which one.
BTW, I had to install 5 or 6 separate packages to get fP
You are getting a p-value, namely p=0. It's just that, when taken
literally, the p-values are wrong.
I'm not familiar with predictABEL, but my guess is that the p-value is
below 2e-16 or some such cutoff and gets printed as zero (the means
seem to be about 10 standard deviations away from zero, wh
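The cutoff effect is easy to reproduce (numbers made up for illustration):

```r
# A test statistic ~10 SDs out gives a p-value far below double
# precision's ~2.2e-16 eps; print methods then show 0 or "< 2.22e-16".
z <- 10
p <- 2 * pnorm(-abs(z))  # two-sided normal p-value, about 1.5e-23
format.pval(p)           # rendered as below the eps cutoff, not as a bare 0
```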
Hi everyone!
I conducted a study for which I ran logistic regressions (and they
work), but now I'd like the results per condition, and I couldn't work
out how to get them. Let me explain:
I conducted a study in which participants can perform one behavior
(coded "1" if realize
I am using PredictABEL to do reclassification. When I use it to compare
two models (+/- a new marker), I get some output without a p-value. Anyone
know why this might be?
#BEGIN R OUTPUT
NRI(Categorical) [95% CI]: 0.0206 [ 0.0081 - 0.0332 ] ; p-value: 0.00129
NRI(Continuous) [95% CI]: 0.1781 [
Note: I did report the issue below to r-sig...@r-project.org, but
didn't see any reply.
I'm hoping somebody on r-help can help me devise a workaround for a problem
I'm having with RODBC:
I use RODBC to read and write a good deal of data to SQL Server and I'd be
extremely grateful if
thank you!!
On Fri, Oct 3, 2014 at 12:18 PM, Duncan Murdoch
wrote:
> On 03/10/2014 12:09 PM, Erin Hodgess wrote:
>
>> So please be prepared...
>>
>> Ok. I made a copy of the arima.r function called earima.r to put in some
>> print statements. Fair enough.
>>
>> Now when I run earima, the .Cal
On 03/10/2014 12:09 PM, Erin Hodgess wrote:
So please be prepared...
Ok. I made a copy of the arima.r function called earima.r to put in some
print statements. Fair enough.
Now when I run earima, the .Call statements can't find the C subroutines.
I know that this should be a really simple fix
Well duh -- type "c.Date" at the command prompt to see what is going on. I suspected I
was being dense.
Now that the behavior is clear, can I follow up on David W's comment that redefining the
c.Date function as
structure(c(unlist(lapply(list(...), as.Date))), class = "Date")
allows for a
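For context, a small sketch of what that redefinition buys (the exact behaviour of the stock method when classes are mixed is what the thread is about):

```r
d <- as.Date("2000-05-04")
p <- as.POSIXct("2000-05-05", tz = "UTC")
# Redefinition from the thread: coerce every argument to Date first, so
# mixing Date and POSIXct no longer risks reading seconds as day counts.
c.Date <- function(...)
  structure(c(unlist(lapply(list(...), as.Date))), class = "Date")
c(d, p)  # both elements come out as genuine dates
```

Since c() dispatches on the class of its first argument, a c.Date defined in the workspace masks the stock method for Date-first calls.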
So please be prepared...
Ok. I made a copy of the arima.r function called earima.r to put in some
print statements. Fair enough.
Now when I run earima, the .Call statements can't find the C subroutines.
I know that this should be a really simple fix, but I don't know how. I do
know that the or
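One common cause, sketched here as an assumption about the setup: a sourced copy of arima() lives in the global environment, so its .Call() lines no longer see the C routines registered in the stats namespace. Re-pointing the copy's environment restores them:

```r
earima <- stats::arima
environment(earima) <- globalenv()   # simulate sourcing a plain copy:
# earima(lh, order = c(1, 0, 0))     # would now fail to find the C routines
environment(earima) <- asNamespace("stats")  # the one-line fix
fit <- earima(lh, order = c(1, 0, 0))        # works (lh is a stock dataset)
```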
You can reconstruct the data from the first component. Here's an example using
singular value decomposition on the original data matrix:
> d <- cbind(d1, d2, d3, d4)
> d.svd <- svd(d)
> new <- d.svd$u[,1] * d.svd$d[1]
new is basically your cp1. If we multiply it by each of the loadings, we can
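A self-contained version of the same idea, with made-up data standing in for d1..d4:

```r
set.seed(1)
d <- matrix(rnorm(40), nrow = 10, ncol = 4)  # stand-in for cbind(d1, d2, d3, d4)
s <- svd(d)
scores <- s$u[, 1] * s$d[1]             # the "new" vector above (cp1 scores)
recon  <- scores %o% s$v[, 1]           # rank-1 approximation: scores x loadings
full   <- s$u %*% diag(s$d) %*% t(s$v)  # all components reproduce d exactly
```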
On Oct 3, 2014, at 7:19 AM, Therneau, Terry M., Ph.D. wrote:
> I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1)
>
> > temp1 <- as.Date(1:2, origin="2000/5/3")
> > temp1
> [1] "2000-05-04" "2000-05-05"
>
> > temp2 <- as.POSIXct(temp1)
> > temp2
> [1] "2000-05-03 19:00:00 CD
Hi Terry,
Some of that combination of sort() and approx() can be done by
findInterval(), which may be quick enough that you don't need the
'thinning' part of the code.
Bill Dunlap
TIBCO Software
wdunlap tibco.com
On Fri, Oct 3, 2014 at 6:05 AM, Therneau, Terry M., Ph.D.
wrote:
> I've attached
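For readers following along, the findInterval() lookup works like this (toy breakpoints, not from the attached functions):

```r
# For each x, the index of the last sorted breakpoint <= x -- the same
# bookkeeping that sort() plus approx(method = "constant") would do.
breaks <- c(1, 5, 10, 20)
x <- c(0.5, 5, 7, 25)
findInterval(x, breaks)  # 0 2 2 4 (0 means "before the first break")
```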
I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1)
> temp1 <- as.Date(1:2, origin="2000/5/3")
> temp1
[1] "2000-05-04" "2000-05-05"
> temp2 <- as.POSIXct(temp1)
> temp2
[1] "2000-05-03 19:00:00 CDT" "2000-05-04 19:00:00 CDT"
So far so good. On 5/4, midnight in Greenwich it
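The usual explanation and workaround (assuming a US Central locale, as the CDT output suggests): a Date converts as midnight UTC, which then prints in the local zone, hence the 19:00 previous-day display. Viewing it in UTC keeps the calendar day:

```r
temp1 <- as.Date(1:2, origin = "2000/5/3")
as.POSIXlt(temp1)                      # "2000-05-04 UTC" "2000-05-05 UTC"
as.POSIXct(format(temp1), tz = "UTC")  # same calendar days, stored as POSIXct
```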
Dennis,
Thanks for the help. I am using colwise now in a couple of places.
Best,
KW
On Oct 2, 2014, at 12:26 PM, Dennis Murphy wrote:
> plyr::colwise(defCurveBreak, y = 4)(mdf)
>
> It took me a few minutes to realize that defCurveBreak() took a vector
> as its first argument; then it made more
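The colwise() pattern from the thread, sketched with a made-up stand-in for defCurveBreak() and mdf:

```r
library(plyr)
defCurveBreak <- function(x, y) x + y  # hypothetical vector function
mdf <- data.frame(a = 1:3, b = 4:6)    # hypothetical data frame
# colwise() turns a vector function into one applied to every column;
# extra arguments (here y = 4) are fixed up front.
colwise(defCurveBreak, y = 4)(mdf)
```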
I've attached two functions used locally. (The attachments will be stripped off of the
r-help response, but the questioner should get them). The functions "neardate" and
"tmerge" were written to deal with a query that comes up very often in our medical
statistics work, some variety of "get the
FAQ 7.31
Jim Holtman
Data Munger Guru
What is the problem that you are trying to solve?
Tell me what you want to do, not how you want to do it.
On Fri, Oct 3, 2014 at 8:31 AM, Matthias Salvisberg
wrote:
> I had a strange behavior of a function written a few days ago. I
> pointed the problem do
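FAQ 7.31 in miniature, for anyone who finds this thread later:

```r
0.1 + 0.2 == 0.3                   # FALSE: 0.1 has no exact binary form
isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE: compare doubles with a tolerance
```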
Hi
I am using the function frontier::sfa (from the package frontier) to
estimate several "half-normal production" stochastic frontier functions.
Now I want to compare the coefficients of the linear frontier function
and see if they are different.
According to my stackexchange (CrossValidated) qu
I had some strange behavior from a function written a few days ago. I
narrowed the problem down to the following minimal example.
Can anyone explain why the following comparisons don't return the
same "correct" answer?
Thanks for your reply!
Matthias
R version 3.1.1 (2014-07-10) -- "Sock it to Me"
> Hi, folks. I've got a sort of coupon that would allow me to get a
> copy of "Advanced R" by Hadley Wickham at no cost. OTOH, I've already
> cloned the github repository, and having the "live" Rmd files (or in
> this case, rmd files) is enormously more useful to me than having any
> form of elec
On 03 Oct 2014, at 13:15 , Andras Farkas wrote:
> Dear All,
> Wonder if you could help with the following. We have:
>
> vals <- 1:5
> names(vals) <- paste0("ke", 1:length(vals))
> mp <- barplot(vals, ylim = c(0, 6), ylab = expression(paste("Hour"^"-10")))
>
> I would like to make the numbers (ke1 to ke5
Andras,
There may be an easier way to do this, but this works.
vals <- 1:5
names(vals) <- paste0("ke",1:length(vals))
mp <- barplot(vals, ylim = c(0, 6), ylab=expression(Hour^-10), names.arg="")
sapply(vals, function(i) axis(1, at=mp[i], substitute(list(ke[x]),
list(x=i)), tick=FALSE))
Jean
On
Dear All,
Wonder if you could help with the following. We have:

vals <- 1:5
names(vals) <- paste0("ke", 1:length(vals))
mp <- barplot(vals, ylim = c(0, 6), ylab = expression(paste("Hour"^"-10")))

I would like to make the numbers (ke1 to ke5, respectively) in the labels of
the x axis a subscript. There
What a non-question. Github version for free, or PDF and github versions for
free.
---
Jeff Newmiller
Yes, that should be fine.
By the way, you do not have to name the variables 'yi' and 'vi' (if this is
what you meant by 'coding these as yi and vi respectively'). Indeed, the
*argument names* for supplying pre-calculated effect size estimates and
corresponding sampling variances are 'yi' and '
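A hedged sketch of that workflow, with made-up numbers standing in for effect sizes computed elsewhere (e.g. by compute.es):

```r
library(metafor)
# Precomputed standardized mean differences and their sampling variances;
# the data columns need not be named yi/vi -- those are argument names.
dat <- data.frame(d = c(0.20, 0.45, 0.30), var_d = c(0.04, 0.09, 0.05))
res <- rma(yi = d, vi = var_d, data = dat)  # random-effects meta-analysis
res$b                                       # pooled estimate
```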
Hi, folks. I've got a sort of coupon that would allow me to get a
copy of "Advanced R" by Hadley Wickham at no cost. OTOH, I've already
cloned the github repository, and having the "live" Rmd files (or in
this case, rmd files) is enormously more useful to me than having any
form of electronic or
Dear All
For mathematically challenged people such as myself; is it ok to use the
compute.es package to calculate effect sizes and then import the effect sizes d
and variances of d into metafor, coding these as yi and vi respectively and
then running the meta-analysis? This seems easier beca
Hi
So if I understand correctly, you want to spread value "high" to times 5
minutes before its occurrence and 5 minutes after its occurrence.
If your date range is not extremely big, you can prepare an expanded version
and use the code suggestions I sent previously
myframe <- data.frame (Timestamp=c("
Hi
If Jean's guess is correct, after simply changing Timestamp to a real date
(see ?strptime and ?as.POSIXct)
you can use
result <- merge(mydata, myframe, all=TRUE)
then use ?na.locf from the zoo package to fill NAs in the Event column, and get rid
of all rows with NA in location, e.g. by
?complete.ca
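The merge-and-fill idea, sketched with small made-up frames (names mydata/myframe follow the thread):

```r
library(zoo)
mydata  <- data.frame(Timestamp = 1:6, location = letters[1:6])
myframe <- data.frame(Timestamp = c(2, 5), Event = c("low", "high"))
result <- merge(mydata, myframe, all = TRUE)          # align on Timestamp
result$Event <- na.locf(result$Event, na.rm = FALSE)  # carry events forward
result <- result[complete.cases(result$location), ]   # drop unmatched rows
```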
Hi
maybe
which(abs(data) > 0.3, arr.ind = TRUE)
                                      row col
Loss_EV_Amygdala_SF_left_hemisphere    15   2
Loss_EV_Amygdala_SF_left_hemisphere    15   3
Loss_PE_Amygdala_SF_right_hemisphere    5   7
Loss_PE_Amygdala_SF_left_hemisphere    13   9
gives you what you want.
> se
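A reproducible miniature of the same lookup (made-up matrix):

```r
m <- matrix(c(0.1, -0.5, 0.2, 0.4), nrow = 2,
            dimnames = list(c("r1", "r2"), c("c1", "c2")))
which(abs(m) > 0.3, arr.ind = TRUE)  # row/col indices of entries beyond 0.3
```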