Rolf can tell us for sure, but I thought the goal was to use v? Maybe not? Either way, I think Bert wins for shortest and Kimmo wins for longest. IMHO, elegance is in the eye of the beholder.
On Fri, Sep 27, 2024 at 4:35 AM Stephen Berman via R-help <
r-help@r-project.org> wrote:
> Yet ano
Chris: As David mentioned, if you have "new" data, then the interval has to
be a prediction
interval because the difference between a CI and a PI is that the PI is
constructed for data
that hasn't been seen yet. The CI is constructed for data that's already
there. I hope this helps.
On Sat, Aug 3
often have a private
> environment, and occasionally packages import functions from other
> packages by simple assignment, so they end up in the namespace
> environment of the importer but still have the namespace environment of
> the exporter associated with them. And the last diagram (
obviously, everyone has different opinions on what's useful but I always found this document quite helpful. I think, in the past, someone said that there are some incorrect statements in it, but I'm not sure what they are.
https://askming.github.io/study_notes/Stats_Comp/Note-How%20R%20searches%20and%
he solvers not finding the minimum?
>
> And how to avoid this? Except, maybe, checking the gradient for all
> the given constraints
>
> Thanks --HW
>
>
>
> On Fri, 21 May 2021 at 17:58, Mark Leeds wrote:
> >
> > Hi Hans: I think that you are missing minus s
Hi Hans: I think that you are missing minus signs in the 2nd and 3rd
elements of your gradient.
Also, I don't know how all of the optimization functions work as far as their arguments go, but it's best to supply the gradient when possible. I hope it helps.
On Fri, May 21, 2021 at 11:01 AM Hans W
David: Note that your problem is linear, so it looks like you can use the lm function to estimate a, b and c (or as a check against what John did). Unless I'm missing something, which could be the case! Also, see Bloomfield's text for a closed-form solution. I think it's called "Intro To Four
Hi: install.packages("hms") should work if you have R installed along with
an internet connection.
When you do above, if you get a message about other packages needing to be
installed, then use
install.packages("hms", dependencies = TRUE).
On Wed, Mar 17, 2021 at 1:08 PM Gregory Coats via R-
Hi: I think you're writing over the plots so only the last one exists.
Maybe try P = P + whatever but
I'm not sure if that's allowed with plots.
On Tue, Oct 27, 2020 at 8:34 AM Luigi Marongiu
wrote:
> Hello,
> I am using e1071 to run support vector machine. I would like to plot
> the data with
Hi Steven: Rui's detailed explanation was great. The way I think of it is: if you don't want to pass the variables in the same order as the formal arguments, then you'd better name them as you pass them in.
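A tiny illustration of that matching rule (the function and values are made up for illustration):

```r
f <- function(a, b, c) a - b / c

# Positional: values are matched to a, b, c in order.
f(10, 4, 2)             # 10 - 4/2 = 8

# Named: order no longer matters.
f(c = 2, a = 10, b = 4) # also 8

# Reordering without names silently changes the meaning:
f(2, 10, 4)             # 2 - 10/4 = -0.5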
On Sun, Sep 20, 2020 at 7:23 AM Steven Yen wrote:
> Thanks. So, to be safe, al
Hi: It's been a long time, but John Fox's "Companion to Applied Regression" book has the expressions for the likelihood of the binomial glm (and probably the others also).
Just running logLik is not so useful
because it could be leaving out multiplicative factors. If you can get your
hands on any
Hi Erin: The default for write.csv is col.names = TRUE. So, in the second one, if you put col.names = FALSE, that should work. It's confused right now because you want to append but also write the column names again.
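A sketch of the append pattern (df1, df2 and the file name are made up; one caveat — in the R versions I've checked, write.csv itself warns that col.names is ignored, so the usual fallback is write.table with CSV settings for the second write):

```r
df1 <- data.frame(x = 1:2, y = c("a", "b"))
df2 <- data.frame(x = 3:4, y = c("c", "d"))

# First write: keep the header.
write.csv(df1, "results.csv", row.names = FALSE)

# Append: skip the header by dropping down to write.table,
# since write.csv ignores a col.names argument.
write.table(df2, "results.csv", sep = ",", append = TRUE,
            col.names = FALSE, row.names = FALSE)
```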
Mark
On Thu, Jun 4, 2020 at 9:34 PM Erin Hodgess wrote:
> Hello!
>
> H
hat it is for libraries to
> obtain access to the e-books for free? It does not seem to me that an
> individual can download one--am I missing that part?
>
> Thanks
>
> --Chris Ryan
>
> Mark Leeds wrote:
> > Abby: here's an easier link for seeing what you might like.
Abby: here's an easier link for seeing what you might like.
https://link.springer.com/search?facet-content-type=%22Book%22&package=mat-covid19_textbooks&%23038;facet-language=%22En%22&%23038;sortOrder=newestFirst&%23038;showAll=true
On Fri, May 22, 2020 at 9:18 PM Richard O'Keefe wrote:
> the r
mple is NOT an example of "messing around with environments."
>
> On Thu, 7 May 2020 at 15:36, Mark Leeds wrote:
> >
> > Hi Abby: I agree with you because below is a perfect example of where
> not understanding environments causes a somewhat
> > mysterious pr
Richard: I may have implied that one should "mess with environments" by
saying that I agree with Abby. If so, my apologies because that's not what
I meant.
I only meant understanding.
On Thu, May 7, 2020 at 12:47 AM Mark Leeds wrote:
> Hi Richard: I didn't say it was a
Hi Abby: I agree with you because below is a perfect example of where not
understanding environments causes a somewhat
mysterious problem. Chuck Berry explains it in a follow up email.
https://r.789695.n4.nabble.com/issues-with-environment-handling-in-model-frame-td4762855.html
On Wed, May 6, 2
:19 AM
Subject: Re: [R] stats:: spline's method could not be monoH.FC
To: Mark Leeds
Cc: Martin Maechler , Samuel Granjeaud
IR/Inserm , r-help
Hi Mark,
The article is good.
However, there are still some grey areas.
The documentation for base::typeof equates a closure with a function.
Ho
should just stick to "Self-Referencing Function Objects" and
> "Functions Bundled with Data"...???
>
> One last thing, the last time I read S4 documentation, I couldn't tell
> if it was possible to have S4-based function objects, and if so, could
> the bo
it's been a long time but I vaguely remember Rvmminb computing
gradients ( and possibly hessians )
subject to constraints. John can say more about this but, if one is going
to go through the anguish of
creating a fitdstr2, then you may want to have it call Rvmminb instead of
whatever is cur
Hi Abby: Either way, thanks for your efforts with the derivative plot.
Note that John Nash is a SERIOUS EXPERT in optimization so I would just go
by what he
said earlier. Also, I don't want to speak for Duncan but I have a feeling
that he meant "inadequacy" in the CG
method rather than a bug in
or possibly even more appropriate is quant.stackexchange.com.
On Thu, Mar 5, 2020 at 4:38 AM Eric Berger wrote:
> Alternatively you might try posting to
> r-sig-fina...@r-project.org
>
>
>
> On Wed, Mar 4, 2020 at 9:38 PM Bert Gunter wrote:
>
> > Your question is way off topic here -- this lis
I nominate the last sentence of Rolf's comment as a fortune.
On Thu, Jan 16, 2020 at 3:48 PM Rolf Turner wrote:
>
> On 17/01/20 1:55 am, Rui Barradas wrote:
>
> > Hello,
> >
> > What column and what list?
> > Please post a reproducible example, see the link at the bottom of this
> > mail and [
ment.
Mark
On Sat, Sep 28, 2019 at 3:36 AM Berwin A Turlach
wrote:
> G'day Mark,
>
> On Fri, 27 Sep 2019 14:43:28 -0400
> Mark Leeds wrote:
>
> > correction to my previous answer. I looked around and I don't think
> > it's called the donsker effect.
&
correction to my previous answer. I looked around and I don't think it's called the donsker effect. It seems to be referred to as just a case of "perfect separability". If you google for "perfect separation in glms", you'll get a lot of information.
O
Hi: In your example, you made the response zero in every case, which is going to cause problems. In glm's, I think they call it the donsker effect. I'm not sure what it's called in OLS; probably a lack of identifiability. Note that you probably shouldn't be using zeros and 1's as the response in a
Hi: the F-test is a joint hypothesis ( I never used that function from the
car package but it sounds like it is ) and the t-statistics
that come out of a regression are "conditional" in the sense that they
test the significance of one coefficient given the other so you wouldn't
expect the two ou
Hi All: Lately I get a lot more spam-porn type emails also, but I don't know if they are due to me being on the R-list.
On Tue, Apr 17, 2018 at 5:09 PM, Rui Barradas wrote:
> Hello,
>
> Nor do I, no gmail, also got spam.
>
> Rui Barradas
>
> On 4/17/2018 8:34 PM, Ding, Yuan Chun wrote:
>
See Hadley's Advanced R along with Thomas Mailund's books. I haven't gone through them carefully but they both seem (from what I've looked at) to be the best ones for that. Mentions of others are appreciated.
On Tue, Mar 13, 2018 at 5:26 PM, Nik Tuzov wrote:
>
> Hello:
>
> Could you please sugge
$x, i$y, col = i$B)
points(j$x, j$y, col = j$B)
On Sat, Aug 5, 2017 at 5:59 AM, Myles English
wrote:
>
> The answer was (thanks to Mark Leeds) to do with the use of a factor
> instead of a vector.
>
> on [2017-08-05] at 08:57 Myles English writes:
>
> > I am having trouble u
Hi: The R package below may be of use to you.
https://journal.r-project.org/archive/2009-1/RJournal_2009-1_Ardia+et+al.pdf
On Thu, Jun 29, 2017 at 12:15 PM, Ranjan Maitra wrote:
> Would package "teigen" help?
>
> Ranjan
>
> On Thu, 29 Jun 2017 14:41:34 +0200 vare vare via R-help <
> r-help@r-p
my bad, David. Thanks for the info.
On Fri, Sep 30, 2016 at 12:37 AM, David Winsemius
wrote:
>
> > On Sep 29, 2016, at 8:57 PM, Mark Leeds wrote:
> >
> > someone who moderates the list, myself included, may have mistaken it
> for
> > spam and rejected it. In that
someone who moderates the list, myself included, may have mistaken it for
spam and rejected it. In that case, it never got to the list.
On Thu, Sep 29, 2016 at 4:53 PM, Duncan Murdoch
wrote:
> On 29/09/2016 2:38 PM, Joysn71 wrote:
>
>> Hello,
>>
>> a few weeks ago i subscribed to this list a
Hi Bert: I saw that and let it through. I am not the one to ask but as far
as I know,
the filtering has not changed.
On Thu, Sep 8, 2016 at 8:35 PM, Bert Gunter wrote:
> To all:
>
> r-help has been holding up a lot of my recent messages: Have there
> been any changes to help list filters that c
cs right? Bayesian Model Averaging, G-ARCH models for
> heteroscedasticity, etc.
>
> Anyway... roll::roll_lm, cheers!
>
> Thanks,
> Jeremiah
>
>
>
> On Thu, Jul 21, 2016 at 2:08 PM, Mark Leeds wrote:
>
>> Hi Jeremiah: another possibly faster way would be to use
Hi Jeremiah: another possibly faster way would be to use a kalman filtering framework. I forget the details but duncan and horne have a paper which shows how a regression can be re-computed each time a new data point is added. I forget if they handle taking one off of the back also, which is what you
Hi: I don't have time to look at the details of what you're doing, but the "equivalence" between state space and arima (as Paul Gilbert pointed out a few weeks ago) is not a true equivalence. If you are in an area of the parameter space that the state space formulation can't reach, then you won
and just to add to john's comments, since he's too modest: in my experience, the algorithm in the rvmmin package (written by john) shows great improvement compared to the L-BFGS-B algorithm, so I don't use L-BFGS-B anymore. L-BFGS-B often has a dangerous convergence issue in that it can claim
Hi: You'd have to provide a dput of "model2" and "Country" for anyone to
give a definitive answer but my guess is that you have an orthogonal X
matrix which is causing you to fit the
model perfectly which causes the model residuals to be zero.
Also, you didn't explain what you're doing but modelli
og likelihoods in both models. I would calculate them yourself, and all the constants like 1/sqrt(2*pi) don't need to be included, of course, since they'll just be scaling factors.
On Wed, Sep 23, 2015 at 2:22 AM, Rolf Turner
wrote:
> On 23/09/15 16:38, Mark Leeds wrote:
thanks Rolf for the correct response.
Oh, one thing that does still hold in my response is the AIC approach, unless Rolf tells us that it's not valid also. I don't see why it wouldn't be, though, because you're not doing a hypothesis test when you go the AIC route.
On Wed, Sep
praise, and learned discourse to
> me privately, as I have already occupied more space on the list than
> I should.
>
> Cheers,
> Bert
>
>
> Bert Gunter
>
> "Data is not information. Information is not knowledge. And knowledge
> is certainly not wisdom."
&g
That's true, but if he uses some AIC or BIC criterion that penalizes the number of parameters, then he might see something else? This (comparing mixtures to non-mixtures) is not something I deal with, so I'm just throwing it out there.
> Tw
Hi Fabian: I think one would say that that is not a bug. I looked at the
details of arima.sim ( using debug(arima.sim) )
and there are two different series that are created inside the function.
one is called innov and the other is start.innov. start.innov is
used to create a burn in period for the
Hi All: In case anyone is interested, Norm's new book, "Parallel Computing for Data Science", is out on Amazon. It has already gotten rave reviews from Dave Giles, who runs a popular econometrics blog.
Mark
[[alternative HTML version deleted]]
__
Hi All: I have a regular expression problem. If a character string ends
with "rhofixed" or "norhofixed", I want that part of the string to be
removed. If it doesn't end with either of those two endings, then the
result should be the same as the original. Below doesn't work for the
second case. I kn
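For what it's worth, one way to do this is sub() with an anchored alternation (the example strings are made up):

```r
x <- c("modelArhofixed", "modelBnorhofixed", "modelC")

# "$" anchors the pattern to the end of the string;
# "(no)?rhofixed" covers both endings at once.
sub("(no)?rhofixed$", "", x)
# → "modelA" "modelB" "modelC"
```

Strings that end with neither "rhofixed" nor "norhofixed" come back unchanged, which is the behavior described above.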
Just a heads up to the list: I don't know about other book sites but, on U.S. Amazon, Hadley's Advanced R book is no longer in pre-order mode. You can purchase the book now without pre-ordering it.
Mark
I have a feeling hadley's book will be quite popular so just a heads up
that it
can now be pre-ordered on amazon.
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do re
reader to
>> easily navigate environment-space and figure out how R searches for
>> symbols and where it finds them. I'll replace my map with yours if you
>> can do that - we will all be better off for it!
>>
>> # - last comment ---
>>
>> Dunc
Hi Everyone: Suraj will respond to Duncan's comments below promptly. Suraj
doesn't have the original thread so I am just helping out by commenting
here so that he can respond and the thread can be kept continuous.
Mark
On Sun, Mar 9, 2014 at 9:09 AM, Duncan Murdoch wrote:
> On 14-03-08 6:42
Hi: I asked Bert privately and he recommended posting what I asked/said to
him to the list.
My comment/question was that I looked at the code and didn't actually
see an ordered factor being created. So my guess is that there is
a confusion with the use of the term "ordered". I'm not clear on wheth
Steven: I'm not sure if it makes a difference but you might want to start
off with the "square root" of sigmaStart because that will really start you
off with sigmaStart. Essentially, what
Bill is doing is a reparameterization using the correlation and the 2
standard deviations so compute those bas
ting that the
> objective is inadmissible in a line search can often be simply a shortening
> of the step size.
>
> JN
>
> On 13-10-21 06:00 AM, r-help-requ...@r-project.org wrote:
>
>> Message: 34
>> Date: Mon, 21 Oct 2013 05:56:45 -0400
>> From: Mark Leeds
&g
thanks bill. that's a neat trick that I haven't seen before. now I see what you're saying much more clearly. Steven: Bill's method should be faster than mine because it won't have rejection iterations like my idea will.
On Mon, Oct 21, 2013 at 10:52 AM, William Dunlap wrote:
> Try defining t
my mistake. since nlminb is minimizing, it should be +Inf ( so that the
likelihood
is large ) as you pointed out. Note that this approach is a heuristic and
may not work all the time.
On Mon, Oct 21, 2013 at 3:01 AM, Steven LeBlanc wrote:
>
> On Oct 20, 2013, at 9:54 PM
Bill: I didn't look at the code but I think the OP means that during the
nlminb algorithm,
the variance covariance parameters hit values such that the covariance
matrix estimate becomes negative definite.
Again, I didn't look at the details but one way to deal with this is to
have the likelihood
f
hi: my guess is that no one is answering because it's too hard to follow
your code
because it contains so many indices and variables and is without comments.
I don't know where you got that info about the distribution of the
coefficient when doing the ADF test but if you could write the code more
see the hatvalues function in the car package. also, I highly recommend
john's
CAR book. there's a new edition that came out a year or so ago.
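A minimal sketch with a built-in dataset (the leverage cutoff shown is a common rough rule, not the only one):

```r
# hat values (leverages) from an ordinary linear model
fit <- lm(mpg ~ wt + hp, data = mtcars)
h <- hatvalues(fit)

# one common flag: leverage above twice the average hat value, 2 * p / n
p <- length(coef(fit))
n <- nrow(mtcars)
which(h > 2 * p / n)   # observations worth a second look
```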
On Fri, Jul 19, 2013 at 6:14 PM, Noah Silverman wrote:
> Hello,
>
> I'm working on some fairly standard regression models (linear, logistic,
> and poiss
Hi George: Assuming it's still relevant, the link below will explain why.
http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm
On Thu, Jul 18, 2013 at 2:14 PM, George Milunovich <
george.milunov...@mq.edu.au> wrote:
> Dear all,
> When I run an arima(1,1,1) on an I(1) variable, y, I get different
Hi: see andrew harvey's books for the detailed discussion. His earlier one (I forget the title) is more comprehensive. But I bet they both talk about it. What you have is an "almost" random walk with noise model, but the coefficient on the AR term is not 1. You can estimate that using the
hat you know will
>
> fail sometimes when you know an easy way to make it work
>
> in all situations.
>
> ** **
>
> Bill Dunlap
>
> Spotfire, TIBCO Software****
>
> wdunlap tibco.com
>
> ** **
>
> *From:* Mark Leeds [mailto:marklee...@gma
hi bill: I understand what you're doing but, at least for this case, I checked and you don't need the force for this one. it works without it. so, I think the force requirement applies only when you're building them up with the lapply. but definitely I'm open to clarification. thanks.
On Thu, Ju
Hi Bert: given what Hadley and RStudio have provided to the R community, what's the big deal about letting people know about a class? It's the ideal place to send the notice. and yes, as Barry and John said, every other commercial entity does send to the R-list.
Mark
On Tue, Apr 16, 2013 at 2:1
Hi: rollapply is a fine solution but if you frame your problem as a state
space
model where the observation equation is the linear regression you're
interested in,
the state space framework provided by the DLM package will give you the
output
you want at each step t. the gory details are explained
Hi: Google for koyck distributed lag. Based on what you wrote, I think that's what you're looking for or something close to it. There is tons of literature on that model and if you read enough about it, you'll see that, through a transformation, it reduces to something that is much simpler to estimate.
Hi Andrew: Not that I've gone through it all yet, but the draft of hadley's book at https://github.com/hadley/devtools/wiki/Introduction has a lot if not all of the commands you refer to, and all of their gory details along with many examples. No matter what your budget, given that the book will b
:
> Thank you very much my friend.
> On 21/02/2013 11:21, "Mark Leeds" wrote:
>
>> let me look. I have hardcopy so hopefully I have computer copy. get back
>> in a few minutes.
>>
>>
>> On Thu, Feb 21, 2013 at 11:06 AM, Paul Bernal wrote:
>&
Hi: bierens has a paper on modelling beer sales in the Netherlands (I'm pretty sure it's on the net; if not, I have a copy somewhere I think) using an ARIMAX. why don't you take his paper and his data and see if you get the same estimates using R. That's one way you'll know if you're doing the
Hi: Like I said earlier, you really should read west and harrison first,
especially if you're
a beginner in bayesian methods. giovanni's book and package are both very
nice ( thanks giovanni ) but the book is more of a summary of west and
harrison and sort of assumes some familiarity with the materi
Hi: I'm not at all familiar with the r-inla package so I can't help you
there. But any arima model can
be re-cast into its state space equivalent form. ( I think Jeremy Penzer
wrote a paper for showing how this is done in general ) So, one way would
be to convert the arima (1,0,1) model for exampl
ion
> - add figure width and height
> - add the figure label fig:figthree
>
> You, as the author, only write the R code (instead of both R and
> LaTeX); other people, as the readers, only see the R code. Everything
> else should go behind the scene.
>
> Regards,
> Yihui
ow it works out. thanks again.
On Wed, Feb 13, 2013 at 6:50 PM, Nordlund, Dan (DSHS/RDA) <
nord...@dshs.wa.gov> wrote:
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> > project.org] On Behalf Of Mark Leeds
> > Sent: W
hi: the irlba package does what you're looking for.
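A sketch of what that might look like (assumes the irlba package is installed; the matrix here is made up):

```r
library(irlba)

set.seed(1)
# a symmetric 100 x 100 matrix standing in for the real data
A <- crossprod(matrix(rnorm(500 * 100), 500, 100))

# top 2 singular triplets only, instead of the full decomposition
# from eigen(); for a symmetric positive semi-definite matrix these
# coincide with the 2 largest eigenpairs
dec <- irlba(A, nv = 2)
dec$d              # 2 largest eigenvalues (as singular values)
head(dec$v[, 1])   # leading eigenvector (up to sign)
```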
On Thu, Jan 31, 2013 at 3:32 AM, Pierrick Bruneau wrote:
> Hi everyone,
>
> I am using eigen() to extract the 2 major eigenpairs from a large real
> square symmetric matrix. The procedure is already rather efficient, but
> becomes somehow slow
Hi Dennis: One of the function argument matching rules in R is that arguments after the dot-dot-dot have to be matched exactly (see the code for paste below), so that's why your attempt doesn't work. But I would have been surprised also, so I'm not trying to imply that one should know which functions have arg
I neglected to mention that, once you get either I_theta or some empirical
estimate
of it, you then invert it to get an estimate of the asymptotic covariance
matrix of the
MLE.
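As a sketch of that inversion step (the model, data, and starting values are all made up; sigma is put on the log scale so the optimizer can roam freely):

```r
set.seed(42)
x <- rnorm(500, mean = 3, sd = 2)

# negative log likelihood of N(mu, sigma^2), with theta = (mu, log sigma)
nll <- function(theta) -sum(dnorm(x, theta[1], exp(theta[2]), log = TRUE))

fit <- optim(c(0, 0), nll, hessian = TRUE)

# the hessian of the negative log likelihood at the optimum is the
# observed information; inverting it estimates the asymptotic
# covariance matrix of the MLE (here for mu and log-sigma)
vcov_hat <- solve(fit$hessian)
sqrt(diag(vcov_hat))   # approximate standard errors
```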
On Tue, Jan 22, 2013 at 3:48 PM, Mark Leeds wrote:
> Hi Doug: I was just looking at this coincidentally. When X i
Hi Doug: I was just looking at this coincidentally. When X is a vector, the
Fisher Information I_{theta} = the negative expectation of the second
derivatives of the log likelihood. So it's a matrix. In other words,
I_theta = E(partial^2 /partial theta^2(log(X,theta).) where X is a vector.
But, ev
aks)
counts=c(287,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,212,2624,2918,0,0,0,75,36317,4963,0,0,2462,0,0,0,0,0,142)
percentage=counts/sum(counts)
temp <- barplot(percentage,xlab="",space=1.0, axes=FALSE)
print(temp)
axis(1,labels=binstrings,at=seq(1.5,83.5,2),cex.axis=0.8)
axis(2,labels=seq(0,1,0.1),at=seq(0,1,0.1),cex.axis=0.8)
On Mon, Jan 21, 2013 at 8:26 PM, Mark L
using as.character. Then, in the call
to barplot, include names.arg = binstrings. I think that should work.
On Mon, Jan 21, 2013 at 8:16 PM, hp wan wrote:
> But the x-axis of barplot is still not what I want. The xlab is breaks,
> not -1.55,-1.50,,0.55.
>
>
> 2013/1/22 Mark Lee
n Mon, Jan 21, 2013 at 7:47 PM, hp wan wrote:
> Ok, that is no problem.
>
>
> 2013/1/22 Mark Leeds
>
>> let me look at but it's probably best to send to the whole list because
>> there are many
>> people on it way more knowledgable than myself. I'm ccin
, that is x axis correspond to breaks.
>
> After ?barplot, I also have no idea to implement it.
>
> 2013/1/22 Mark Leeds
>
>> I'm not sure that I understand but can't you just take the data and
>> divide it by the sum of the data and plot that ?
>>
>>
>
Hi: If you want testFun to know about b, then you would have to do b <- list(...)$b inside testFun itself. But the dot-dot-dot argument is not really for that purpose. The use of dot-dot-dot is for the case where a function INSIDE testFun has a formal argument named, say, b. Then you can pass the ... a
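A sketch of that pass-through use (all names are made up):

```r
inner <- function(x, b = 1) x * b

testFun <- function(x, ...) {
  # ... is forwarded, so a caller-supplied b reaches inner()'s
  # formal argument without testFun naming b itself
  inner(x, ...)
}

testFun(10)         # 10, inner() uses its default b = 1
testFun(10, b = 3)  # 30
```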
Below has nothing to do with R but people into statistics ( or even those
not into statistics. it's very basic ) might find it interesting. I can't
say anything about the book itself.
http://press.princeton.edu/chapters/s8863.pdf
Hi: Not sure if totally relevant because I didn't read it, but it might help or at least be of interest.
http://journal.r-project.org/archive/2010-1/RJournal_2010-1_Wilhelm+Manjunath.pdf
On Mon, Nov 12, 2012 at 12:15 PM, Ben Bolker wrote:
> Zhenglei Gao bayer.com> writes:
>
> > I have asked th
Hi: Assuming that you're talking about "MIDAS", that question is more
relevant to R-Sig-Finance. I don't know if there's anything in R but google
for eric ghysels website because he is the originator of that approach. He
may have some code ( my guess is matlab
rather than R ) at his site ?
On
Hi Josh and Elaine: John Fox's CAR book ( the companion to his applied
regression text ) is really great for implementing GLMs in R. It also has a
brief but quality discussion of the theory
behind them. His text goes into more detail. Dobson's "Introduction to
generalized linear models" is also de
Hi: I looked at the help for system.time but I still have the following question. Can someone explain the following output of system.time:
     user   system  elapsed
12399.681 5632.352 56935.647
Here's my take based on the fact that I was doing ps -aux | grep R off and
on and
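A small illustration of the three numbers (timings vary by machine):

```r
# "user"   = CPU time spent in R itself
# "system" = CPU time spent in operating-system calls on R's behalf
# "elapsed"= wall-clock time
# Sleeping burns almost no CPU, so user and system stay near 0
# while elapsed comes out around 2 seconds.
system.time(Sys.sleep(2))
```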
Hi: I've looked around and I must be missing it because it's probably somewhere. Does someone know how to convert an object of class dgCMatrix to a regular matrix? I can send someone the data if they need it, but it's too big to include here.
I read the data in using
temp<-readMat("movielens.mat
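For what it's worth, a sketch of the conversion with a small made-up matrix in place of the real data (assumes the Matrix package, which defines dgCMatrix):

```r
library(Matrix)

# a tiny sparse matrix standing in for the big one
M <- Matrix(c(0, 2, 0, 1, 0, 0), nrow = 2, sparse = TRUE)
class(M)       # "dgCMatrix"

as.matrix(M)   # back to a plain base-R matrix
```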
Hi: Bonferroni can be used for any hypothesis test or confidence interval
where a statistic
is calculated. The idea behind it is that, if a statistic is being
calculated many times ( as in the case of say anova where multiple
differences between groups can be calculated ), then the critical value
u
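The adjustment described above can be sketched in R (the p-values are made up):

```r
p <- c(0.012, 0.030, 0.001, 0.200)

# Bonferroni: multiply each p-value by the number of tests (capped at 1),
# which is equivalent to comparing each raw p-value against alpha / m
p.adjust(p, method = "bonferroni")
# → 0.048 0.120 0.004 0.800
```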
Hi: I can't go into all the details (Lutz Hamel has a very nice intro book for SVMs and I wouldn't do the details justice anyway) but the objective function in an SVM is maximizing the margin (think of the margin as the amount of separation between the 2 classes in a 2-class problem). The obje
Hi Rui: I hate to sound like a pessimist/cynic and also I should state that
I didn't look
at any of the analysis by you or the other person. But, my question, ( for
anyone who wants to chime in ) is: given that all these olympic 100-200
meter runners post times that are generally within 0.1-0.3 sec
thanks Peter. I was thinking more about t but you're right in that there's an i there also. my bad (twice).
On Mon, Jun 25, 2012 at 9:37 AM, Petr Savicky wrote:
> On Mon, Jun 25, 2012 at 05:39:45AM -0700, Kathie wrote:
> > Dear R users,
> >
> > I'd like to compute X like below.
> >
> > X_{i
Hi Kathryn: I'm sorry because I didn't read your question carefully enough.
arima.sim won't help because you don't have a normal error term. I think
you have to loop unless someone else knows of a way that I'm not aware of.
good luck.
On Mon, Jun 25, 2012 at 8:39 AM, Kathie wrote:
> Dear R us
Hi: Use arima.sim, because when you have recursive relationships like that, there's no way to not loop. If arima.sim doesn't allow for the trend piece (0.1*t), then you can do that part separately using cumsum(0.1*(1:t)) and then add it back in to what arima.sim gives you. I don't remember if arim
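A sketch of the add-the-trend-back idea (parameters are made up; here the linear trend 0.1*t is added directly as 0.1*(1:n) rather than via cumsum):

```r
set.seed(123)
n <- 200

# ARMA(1,1) noise from arima.sim, trend added back afterwards
e <- arima.sim(model = list(ar = 0.5, ma = 0.3), n = n)
y <- 0.1 * (1:n) + e

plot(y, type = "l")   # upward-drifting ARMA series
```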
Hi: I don't know anything about genotypes but it sounds like you overfitted the training set, so you should try using regularization. In standard svm-classification algorithms, that can be done by decreasing the parameter C, which decreases the objective functional penalty for mis-classifying. ( allo
just one other thing about the AIC issue:
there is a line in glm.fit which is the following:
aic = aic(y, n, mu, weights, dev) + 2 * rank
but I couldn't find the function aic so I couldn't investigate further. It
looks suspicious though because it seems to me like
it should be
aic = -2*likelih
Hi Duncan: I don't know if the following can help but I checked the code
and logLik defines the log likelihood as (p - glmobject$aic/2) where p is
the glmobject$rank. So,
the reason for the likelihood being less is that, in the null, it ends up
being ( 1 - glmobject$aic/2) and in the other one i
Hi: Thanks for the correction and reference. Eric uses monthly returns in
the example
in his book and I would think that using daily data would result in very
unstable betas but I've been wrong before. Hopefully others can comment.
Mark
On Fri, May 25, 2012 at 12:44 PM, and_mue wrote:
> Fo
Hi: I don't have time to look at it carefully but, at a glance, you're not getting a significant ror_spi_resn coefficient, so worrying about residuals being auto-correlated is jumping the gun because you're not really filtering anything in the first place. when you say, "market model", I don't know
Hi: I just wanted to let the R-community know that Thomas Lumley was elected Fellow of the American Statistical Association.
Congratulations Thomas.
Hi: I sent you an email earlier privately. Why you keep sending the same email over and over is not clear to me. The package by rossi et al, called "bayesm", has a function in it that supposedly does what you want. I don't know the details of the function because I was using their package for som