I have just changed my email address with the R-help mailing lists, and
I want to check that things are working. Sorry for the noise.
cheers,
Rolf Turner
--
Honorary Research Fellow
Department of Statistics
University of Auckland
Stats. Dep't. (secretaries) phone:
+64-9-373-7599 ext
On 24/07/2021 11:22 a.m., Andrew Simmons wrote:
Hello,
I was wondering if anyone has a way to test if a package is currently being
installed. My solution was to check if environment variable "R_INSTALL_PKG"
was unset, something like:
"R CMD INSTALL-ing" <- function ()
!is.na(Sys.getenv("R_INST
Does ?installed.packages help?
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
On Sat, Jul 24, 2021 at 8:30 AM Andrew Simmons wrote:
> Hello,
>
>
> I was wonderi
Hello,
I was wondering if anyone has a way to test if a package is currently being
installed. My solution was to check if environment variable "R_INSTALL_PKG"
was unset, something like:
"R CMD INSTALL-ing" <- function ()
!is.na(Sys.getenv("R_INSTALL_PKG", NA))
Unfortunately, I couldn't find wha
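A hedged, self-contained sketch of that approach (whether R CMD INSTALL sets "R_INSTALL_PKG" may depend on the R version; the helper name is illustrative):
is_installing <- function() {
    !is.na(Sys.getenv("R_INSTALL_PKG", unset = NA))
}
# e.g. inside a package's .onLoad() or a build-time script:
if (is_installing()) message("running under R CMD INSTALL")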
In case it is of interest this problem can be solved with an
unconstrained optimizer,
here optim, like this:
proj <- function(x) x / sqrt(sum(x * x))
opt <- optim(c(0, 0, 1), function(x) f(proj(x)))
proj(opt$par)
## [1] 5.388907e-09 7.071068e-01 7.071068e-01
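For completeness, a self-contained version of the same idea (the objective f is taken from the problem statement quoted later in the thread):
f <- function(x) 2 * (x[1]^2 - x[2] * x[3])   # objective, minimized on the unit sphere
proj <- function(x) x / sqrt(sum(x * x))      # radial projection onto the sphere
opt <- optim(c(0, 0, 1), function(x) f(proj(x)))
proj(opt$par)                                 # approx. c(0, 1, 1) / sqrt(2)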
On Fri, May 21, 2021 a
I meant:
x0 = c (1, 1e-3, 0)
Not:
x0 = c (1, 1e6, 0)
So, large intentional error may work too.
Possibly, better...?
On Thu, May 27, 2021 at 6:00 PM Abby Spurdle wrote:
>
> If I can re-answer the original post:
> There's a relatively simple solution.
> (For these problems, at least).
>
> #wrong
If I can re-answer the original post:
There's a relatively simple solution.
(For these problems, at least).
#wrong
x0 = c (1, 0, 0)
NlcOptim::solnl(x0, objfun = f, confun = conf)$par
Rdonlp2::donlp2(x0, fn = f, nlin = list(heq), nlin.lower = 0,
nlin.upper = 0)$par
#right
x0 = c (1, 1e6, 0)
NlcOpt
I need to retract my previous post.
(Except the part that R has extremely good numerical capabilities.)
I ran some of the examples, and Hans W was correct.
As someone who works on trying to improve the optimization codes in R,
though mainly in the unconstrained and bounds-constrained area, I think my
experience is more akin to that of HWB. That is, for some problems -- and
the example in question does have a reparametrization that removes the
constrai
I received an off-list email, questioning the relevance of my post.
So, I thought I should clarify.
If an optimization algorithm is dependent on the starting point (or
other user-selected parameters), and then fails to find the "correct"
solution because the starting point (or other user-selected
Sorry, missed the top line of code.
library (barsurf)
For a start, there are two local minima.
Add to that floating point errors.
And possible assumptions by the package authors.
begin code
f <- function (x, y, sign)
{ unsign.z <- sqrt (1 - x^2 - y^2)
2 * (x^2 - sign * y * unsign.z)
}
north.f <- function (x, y) f (x, y, +1)
south.f <- function (x, y) f (x, y, -1)
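A small usage sketch of the two-variable reformulation (the starting point is arbitrary): minimize the "north" objective over the unit disc, pulling iterates back inside so that sqrt(1 - x^2 - y^2) stays defined.
obj <- function(p) {
    r <- sqrt(sum(p^2))
    if (r >= 1) p <- p / r * (1 - 1e-9)   # pull back just inside the disc
    north.f(p[1], p[2])
}
opt <- optim(c(0.1, 0.1), obj)
opt$par   # minimizing (x, y); on the north half, z = sqrt(1 - x^2 - y^2)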
Yes. "*on* the unit sphere" means on the surface, as you can guess
from the equality constraint. And 'auglag()' does find the minimum, so
no need for a special approach.
I was/am interested in why all these other good solvers get stuck,
i.e., do not move away from the starting point. And how to av
Sorry, this might sound like a poor question:
But by "on the unit sphere", do you mean on the ***surface*** of the sphere?
In which case, can't the surface of a sphere be projected onto a pair
of circles?
Where the cost function is reformulated as a function of two (rather
than three) variables.
I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice for getting to the solution, I've often found secant methods
work faster, thoug
Hi Hans: I can't help as far as the projection of the gradient onto the
constraint but it may give insight just to see what the value of
the gradient itself is when the optimization stops.
John Nash ( definitely one of THE expeRts when it comes to optimization in
R )
often strongly recommends to
Mark, you're right, and it's a bit embarrassing as I thought I had
looked at it closely enough.
This solves the problem for 'alabama::auglag()' in both cases, but NOT for
* NlcOptim::solnl -- with x0
* nloptr::auglag -- both x0, x1
* Rsolnp::solnp -- with x0
* Rdonlp::donlp
Hi Hans: I think that you are missing minus signs in the 2nd and 3rd
elements of your gradient.
Also, I don't know how all of the optimization functions work as far as
their arguments, but it's best to supply the gradient when possible. I hope it helps.
On Fri, May 21, 2021 at 11:01 AM Hans W
Just by chance I came across the following example of minimizing
a simple function
(x,y,z) --> 2 (x^2 - y z)
on the unit sphere, the only constraint present.
I tried it with two starting points, x1 = (1,0,0) and x2 = (0,0,1).
#-- Problem definition in R
f = function(x) 2 * (x[1]^2 - x[2] * x[3])
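The rest of the setup, as a hedged sketch (the constraint-function name heq is an assumption here; alabama::auglag is the solver the thread later reports as succeeding):
heq <- function(x) sum(x^2) - 1          # equality constraint: stay on the unit sphere
x1 <- c(1, 0, 0); x2 <- c(0, 0, 1)       # the two starting points from the post
library(alabama)
auglag(par = x1, fn = f, heq = heq)$par
auglag(par = x2, fn = f, heq = heq)$par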
My apologies for the noise. Please ignore this message.
I am just trying to test out message filters in a new mail client that
I am learning to use.
Again, sorry for the noise.
cheers,
Rolf Turner
--
Honorary Research Fellow
Department of Statistics
University of Auckland
Phone: +64-9-373-7
That is logically impossible.
You can only show that there is insufficient evidence (according to
whatever evidentiary criterion you have chosen) to show that the data were
*not* an (iid or other) sample from a Poisson. This may seem esoteric, but
it is not. (The simplest incantation is that you ca
Your first check might be to see if the mean and variance are "reasonably"
close. Next approach would be to see if the `qqplot` of that vector has
an arguably straight-line relationship with a random draw from a Poisson
random generator function with the same mean.
?rpois
?qqplot
And do remember t
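A minimal sketch of those checks (MyDat as defined in the question below):
mean(MyDat); var(MyDat)   # for a Poisson, mean and variance should be roughly equal
qqplot(rpois(length(MyDat), lambda = mean(MyDat)), MyDat,
       xlab = "simulated Poisson draws", ylab = "observed daily transits")
abline(0, 1)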
Dear friends,
I have a sample dataset, which is basically the number of transits through
a particular waterway, and is on a daily basis.
MyDat <- dataset$DailyTransits
What I'd like to do is to test whether MyDat follows a Poisson distribution
or not. What R function could accomplish this?
Any
That worked. Thanks.
From: Michael Dewey
Sent: Sunday, February 16, 2020 5:54 PM
To: Servet Ahmet Çizmeli ;
r-help@r-project.org
Subject: Re: [R] testing my package : unstated dependency to self in package
tests
When something similar happened to me I found
I think the Posting Guide would call this the wrong mailing list for this
question: should be on R-package-devel.
On February 16, 2020 3:03:55 AM PST, "Servet Ahmet Çizmeli"
wrote:
>I am updating my CRAN package geoSpectral. I get the following Warning
>during R CMD check :
>
>...
>* checking f
When something similar happened to me I found it went away when I added
Suggests:
to the DESCRIPTION file. Whether this will work for you I have no idea.
Michael
On 16/02/2020 11:03, Servet Ahmet Çizmeli wrote:
I am updating my CRAN package geoSpectral. I get the following Warning during R
C
I am updating my CRAN package geoSpectral. I get the following Warning during R
CMD check :
...
* checking for unstated dependencies in 'tests' ... WARNING
'library' or 'require' call not declared from: 'geoSpectral'
All the .R files I have under the testthat directory begin with:
library(geoSpectral)
Hi Nancy,
The chickwts dataset contains one sort-of continuous variable (weight)
and a categorical variable (feed). Two things that will help you to
understand what you are trying to do is to "eyeball" the "weight"
data:
# this shows you the rough distribution of chick weights
hist(chickwts$weight)
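A companion view of the categorical side (a minimal sketch):
# chick weights broken down by the feed factor
boxplot(weight ~ feed, data = chickwts)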
Categorical data cannot be normal. What you are doing is statistical
nonsense, as your error messages suggest. You need to consult a local
statistician for help.
Furthermore, statistical questions are generally OT on this list, which is
about R programming.
Bert Gunter
"The trouble with having
Hello
I have data in which both the independent and the dependent variables are
categorical, each with more than 3 levels. How can I check the normality of my data?
I have tried the Shapiro-Wilk example given for levels of factors:
data
summary(chickwts)
## linear model and ANOVA
fm <- lm(weight ~ feed, data = chickwts)
You could start by going to the CRAN website, clicking on the "Task Views" item
on the left, then clicking on "Spatial". This will bring you to a page with
extensive information about doing things with spatial data in R. It includes
some brief descriptions of the purposes/capabilities of many sp
Wrong list.
For statistical questions (which this is), post to
stats.stackexchange.com. I suspect you will have to frame your query
more coherently (context, model, etc.) to get a response even there.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
Good afternoon,
I would like to know how to test for homogeneity in spatial SUR models.
Thank you,
Pilar
Pilar González Casimiro
Facultad de Ciencias Económicas y Empresariales
Avda. Lehendakari Aguirre, 83 48015 Bilbao
tfno: 94 601 3730
Dear Micheal
So I would be much better off just reporting the PCA as is and concluding what I
can from the plot.
cheers
Julian
Julian R. Marchesi
Deputy Director and Professor of Clinical Microbiome Research at the Centre
for Digestive and Gut Health, Imperial College London, London W2 1NY Tel: +4
Significance tests for group differences in a MANOVA of
lm(cbind(pc1, pc2) ~ group)
will get you what you want, but you are advised DON'T DO THIS, at least
without a huge grain of salt and a slew of mea culpas.
Otherwise, you are committing p-value abuse and contributing to the
notion that sign
, Julian; r-help@r-project.org
Subject: RE: [R] testing whether clusters in a PCA plot are significantly
different from one another
In that case you should be able to use manova where pc1 and pc2 are the
independent (response) variables and group (Baseline, HFD+P, HFD) is the
dependent (explanatory
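A hedged sketch of that suggestion ('scores' is an assumed data frame holding the first two PC scores from a prcomp() fit 'pca' plus the group labels from 'metadata'):
scores <- data.frame(pc1 = pca$x[, 1], pc2 = pca$x[, 2], group = metadata$group)
fit <- manova(cbind(pc1, pc2) ~ group, data = scores)
summary(fit, test = "Pillai")   # overall test of group separation in PC space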
: Marchesi, Julian [mailto:j.march...@imperial.ac.uk]
Sent: Friday, January 6, 2017 9:02 AM
To: David L Carlson
Subject: Re: [R] testing whether clusters in a PCA plot are significantly
different from one another
Dear David
The clusters are defined by the metadata which tells R where to draw the lines
You can't if I understand correctly: there is no individual subject
regression coefficient, only a variance component for a random subject
intercept. Do you mean that you want to "test" whether that component
is nonzero? (It is, of course.) If so, IIRC, lmer eschews such tests
for technical reason
Hi R-help,
I have an lmer logistic regression with a within subjects IV and subject as
a random factor:
model <- lmer(optimal_choice ~ level_one_value_difference + (1|subid), data
= dat)
What I want is to test if the individual subject regression coefficient is
significantly different from 0.
The variable DISPLAY is what is causing problems. Run the command 'unset
DISPLAY' before running R .
Stephen
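An in-R alternative sketch of the same idea:
Sys.unsetenv("DISPLAY")   # equivalent of the shell command 'unset DISPLAY'
capabilities("X11")       # check whether this R build has X11 support at all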
On 09/03/16 01:06 PM, Uwe Ligges wrote:
> I do not get this: If it works it is OK to use it. If it does not work,
> you can't
>
> Best,
> Uwe Ligges
>
>
>
>
>
> On 09.03.2016 17
I do not get this: If it works it is OK to use it. If it does not work,
you can't
Best,
Uwe Ligges
On 09.03.2016 17:44, Santosh wrote:
Thanks for your response. Since the test failed due to X11 connectivity
reasons, is it okay to use it in applications where X11 server connectivity
is
Thanks for your response. Since the test failed due to X11 connectivity
reasons, is it okay to use it in applications where X11 server connectivity
is not required?
Thanks and much appreciated,
Santosh
On Tue, Mar 8, 2016 at 10:41 PM, Uwe Ligges wrote:
>
>
> On 09.03.2016 02:19, Santosh wrote:
>
On 09.03.2016 02:19, Santosh wrote:
Dear Rxperts..
I installed rJava on a 64-bit Linux system and apparently it installed
without errors. However, I got the following error message when I tried to
test the installed package.
Err
Dear Rxperts..
I installed rJava on a 64-bit Linux system and apparently it installed
without errors. However, I got the following error message when I tried to
test the installed package.
Error in .jcall("RJavaTools", "Ljava/lang/O
The code given below estimates a VEC model with 4 cointegrating vectors. It
is reproducible code, so just copy and paste it into your R console (or
script editor).
library(mvtnorm)  # provides rmvnorm()
nobs = 200
e = rmvnorm(n=nobs,sigma=diag(c(.5,.5,.5,.5,.5)))
e1.ar1 = arima.sim(model=list(ar=.75),nobs,innov=e[,1])
e2.ar1 = arima.sim
And I probably should have included this link:
http://journal.r-project.org/archive/2014-1/loo.pdf
On 8/8/2015 12:50 PM, Robert Baer wrote:
On 8/6/2015 5:25 AM, Federico Calboli wrote:
Hi All,
let’s assume I have a vector of letters drawn only once from the
alphabet:
x = sample(letters, 1
On 8/6/2015 5:25 AM, Federico Calboli wrote:
Hi All,
let’s assume I have a vector of letters drawn only once from the alphabet:
x = sample(letters, 15, replace = F)
x
[1] "z" "t" "g" "l" "u" "d" "w" "x" "a" "q" "k" "j" "f" "n" “v"
y = x[c(1:7,9:8, 10:12, 14, 15, 13)]
I would now like to t
On Aug 7, 2015, at 12:22 AM, Federico Calboli wrote:
>
>> On 7 Aug 2015, at 01:59, Bert Gunter wrote:
>>
>> Boris:
>>
>> You may be right, but it seems like esp to me based on the op's
>> non-description of likelihood of coming from the same noisy process. My
>> response would be: seek loca
> On 7 Aug 2015, at 01:59, Bert Gunter wrote:
>
> Boris:
>
> You may be right, but it seems like esp to me based on the op's
> non-description of likelihood of coming from the same noisy process. My
> response would be: seek local statistical help, as your replies indicate a
> good deal of s
Boris:
You may be right, but it seems like esp to me based on the op's
non-description of likelihood of coming from the same noisy process. My
response would be: seek local statistical help, as your replies indicate a
good deal of statistical confusion.
Cheers,
Bert
On Thursday, August 6, 2015
You are looking for what is known as the "Cayley distance" between vectors - an
edit distance that allows only transpositions. RSeek mentions PerMallows
(https://cran.r-project.org/web/packages/PerMallows/PerMallows.pdf) and
Rankcluster
(https://cran.r-project.org/web/packages/Rankcluster/Rankcl
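For a rough idea of what those packages compute, a base-R sketch of the Cayley distance (n minus the number of cycles of the permutation mapping x to y, i.e. the minimum number of transpositions needed):
cayley <- function(x, y) {
    p <- match(y, x)                 # permutation taking x to y
    n <- length(p)
    seen <- logical(n); cycles <- 0L
    for (i in seq_len(n)) {
        if (!seen[i]) {
            cycles <- cycles + 1L
            j <- i
            while (!seen[j]) { seen[j] <- TRUE; j <- p[j] }
        }
    }
    n - cycles
}
# e.g. cayley(x, y) with the vectors from the original post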
> On 6 Aug 2015, at 15:40, Bert Gunter wrote:
>
> Define "goodness of match" . For exact matches, see ?"==" , all.equal, etc.
Fair point. I would define it as a number that tells me how likely it is that
the same (noisy) process produced both lists.
BW
F
>
> Bert
>
> On Thursday, Aug
Define "goodness of match" . For exact matches, see ?"==" , all.equal, etc.
Bert
On Thursday, August 6, 2015, Federico Calboli
wrote:
> Hi All,
>
> let’s assume I have a vector of letters drawn only once from the alphabet:
>
> x = sample(letters, 15, replace = F)
> x
> [1] "z" "t" "g" "l" "u"
Hi All,
let’s assume I have a vector of letters drawn only once from the alphabet:
x = sample(letters, 15, replace = F)
x
[1] "z" "t" "g" "l" "u" "d" "w" "x" "a" "q" "k" "j" "f" "n" “v"
y = x[c(1:7,9:8, 10:12, 14, 15, 13)]
I would now like to test how good a match y is for x. Obviously I can
Look no further! The answer is yes.
However, if you are interested in why your query is probably nonsense
and why overall tests of significance are a **really bad idea** in
most scientific contexts (imho, anyway), then I suggest you post to a
statistical list like stats.stackexchange.com .
...
Dear R-colleagues,
I am looking for a way to test whether one regression has significant
different coefficients and overall results for 10 groups (grouping variable
is "irr").
*What I have*
The regression is:
Depend = temp + temp² + perc + perc² + conti -> split up for multiple groups
of irr
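A hedged sketch of one standard way to test this: fit the pooled model and a model with every term interacted with the grouping factor irr, then compare them with an F test (variable names are taken from the question; the data frame 'dat' is an assumption):
m0 <- lm(Depend ~ temp + I(temp^2) + perc + I(perc^2) + conti, data = dat)
m1 <- lm(Depend ~ (temp + I(temp^2) + perc + I(perc^2) + conti) * factor(irr),
         data = dat)
anova(m0, m1)   # a small p-value suggests the coefficients differ across irr groups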
On Monday, September 8, 2014 6:46 PM, Greg Snow <538...@gmail.com> wrote:
> [very good suggestions]
Thank you Greg for dedicating some time to my problem and giving
advice on how I can tackle the issue. It is very appreciated.
Unfortunately I think I will use another program for my original
pro
On Sunday, September 7, 2014 5:47 PM, peter dalgaard wrote:
> On 06 Sep 2014, at 12:24 , bonsxanco wrote:
>
> >>
> >> 1) 8th grade algebra tells me B2/B1 == 0 <==> B2 =0;
> >
> > EViews (econometrics program) doesn't have the same opinion:
> >
> > Wald test on my real model (edited):
> >
Others have discussed some of the theoretical approaches (delta
method), but as has also been pointed out, this is a mailing list
about R, not theory, so here are some approaches to your question from
those of us who like programming in R more than remembering theory.
I assume that on
On 06 Sep 2014, at 12:24 , bonsxanco wrote:
>>
>> 1) 8th grade algebra tells me B2/B1 == 0 <==> B2 =0;
>
> EViews (econometrics program) doesn't have the same opinion:
>
> Wald test on my real model (edited):
>
> * H0: B3/B2 = 0 -> F-stat = 37.82497
> * H0: B3 = 0 -> F-stat = 16.31689
Scott said:
> car::deltaMethod
I said:
> I just gave a quick look and searched about delta method, but I can't
> see how it would help in testing the restrictions above.
Actually it seems that it should be the way to go: I just noticed under the
EViews Wald test window the message "Delta met
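A hedged sketch of that delta-method route with car::deltaMethod (the fitted model and variable names are illustrative, matching the toy model in the question):
library(car)
fit <- lm(y ~ x1 + x2 + x3 + x4, data = dat)
deltaMethod(fit, "x2/x1")           # estimate and SE for B2/B1; a Wald test follows
deltaMethod(fit, "x2/x1 - x4/x3")   # for H0: B2/B1 = B4/B3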
Hi.
First of all, thanks to all who have replied.
> 1) 8th grade algebra tells me B2/B1 == 0 <==> B2 =0;
EViews (econometrics program) doesn't have the same opinion:
Wald test on my real model (edited):
* H0: B3/B2 = 0 -> F-stat = 37.82497
* H0: B3 = 0 -> F-stat = 16.31689
> 2) I suspect
Hi Chris,
> On Fri, Sep 5, 2014 at 7:17 PM, Chris wrote:
> Hi.
>
> Say I have a model like
>
> y = a + B1*x1 + B2*x2 + B3*x3 + B4*x4 + e
>
> and I want to test
>
> H0: B2/B1 = 0
As noted by Bert, think about this.
> or
>
> H0: B2/B1=B4/B3
>
> (whatever H1). How can I proceed?
>
> I know about ca
Well:
1) 8th grade algebra tells me B2/B1 == 0 <==> B2 =0;
2) I suspect you would need to provide more context for the other, as
you may be going about this entirely incorrectly (have you consulted a
local statistician?): your nonlinear hypothesis probably can be made
linear under the right para
parametric bootstrap test.
Just ideas. Good luck.
Søren
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Chris
Sent: 6. september 2014 04:17
To: r-h...@stat.math.ethz.ch
Subject: [R] Testing general hypotheses on regression
Hi.
Say I have a model like
y = a + B1*x1 + B2*x2 + B3*x3 + B4*x4 + e
and I want to test
H0: B2/B1 = 0
or
H0: B2/B1=B4/B3
(whatever H1). How can I proceed?
I know about car::linearHypothesis, but I can't figure out a way to do the
tests above.
Any hint?
Thanks.
C
There is the boxcox function in the MASS package that will look at the
Box Cox family of transformations.
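A minimal sketch of that route (the model and data are illustrative):
library(MASS)
fit <- lm(y ~ x, data = dat)
bc <- boxcox(fit, lambda = seq(-2, 2, 0.1))   # profile log-likelihood over lambda
bc$x[which.max(bc$y)]                         # lambda with the highest likelihood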
On Mon, Jun 2, 2014 at 9:15 AM, Diederick Stoffers wrote:
> Hi guys,
>
> I distinctly remember having used an R toolbox that compared different
> transformation with regard to normality stat
Hi guys,
I distinctly remember having used an R toolbox in the past that compared different
transformations with regard to normality statistics, but I can't find anything
on Google. Does anybody have a clue?
Thanks,
Diederick
Hello,
Take a look at
http://stats.stackexchange.com/questions/58772/brant-test-in-r
Hope this helps,
Rui Barradas
On 19-05-2014 01:40, caoweina wrote:
Dear Rose :
I saw your questions about the R function that performs brant test. Have
you worked out ? Please give some advice. I n
Dear Rose :
I saw your questions about the R function that performs the Brant test. Have
you worked it out? Please give some advice. I need to do the Brant test in R.
Thank you very much!
sincerely!
Anna
Dear Paul
On 15 April 2014 19:23, Paul Smith wrote:
> How to test whether the correlation in the matrix of correlation of a
> two-equations SUR model fitted by package systemfit are significant?
You can use a likelihood-ratio test to compare the SUR model with the
corresponding OLS model. The on
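A hedged sketch of that comparison (the equations and data are placeholders, and the use of lrtest() on systemfit objects is an assumption):
library(systemfit)
library(lmtest)
eqs <- list(eq1 = y1 ~ x1 + x2, eq2 = y2 ~ x1 + x3)
fit.ols <- systemfit(eqs, method = "OLS", data = dat)
fit.sur <- systemfit(eqs, method = "SUR", data = dat)
lrtest(fit.sur, fit.ols)   # LR test of SUR (correlated errors) against OLS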
Dear All,
How to test whether the correlation in the matrix of correlation of a
two-equations SUR model fitted by package systemfit are significant?
Thanks in advance,
Paul
Hello,
I would like to probe a significant 2-way, cross-level interaction effect
from a linear mixed effects model that I ran using nlme.
My model is as follows:
mlmmodel <- lme(fixed = RegDiseng ~ Happy + TraitHAPPYmean +
Happy*TraitHAPPYmean,
random = ~ Happy | ID, data = data, na.
On Wed, Jan 29, 2014 at 1:25 PM, ce wrote:
>
> Dear all ,
>
> xts objects give error in if command :
> Error in if :
> missing value where TRUE/FALSE needed
>
>> library(quantmod)
>> getSymbols("SPY")
>
>> SPY["2007-01-03"]$SPY.Adjusted > SPY["2007-01-04"]$SPY.Adjusted
> [,1]
>
> If
Dear all ,
xts objects give error in if command :
Error in if :
missing value where TRUE/FALSE needed
> library(quantmod)
> getSymbols("SPY")
> SPY["2007-01-03"]$SPY.Adjusted > SPY["2007-01-04"]$SPY.Adjusted
[,1]
If I use as.numeric function it works :
> SPY["2007-01-03"]$SPY.Ad
You can use the which.i argument to [.xts:
> is.null(SPY["2009-01-18",which.i=TRUE])
[1] TRUE
Best,
--
Joshua Ulrich | about.me/joshuaulrich
FOSS Trading | www.fosstrading.com
On Sat, Jan 25, 2014 at 9:27 AM, ce wrote:
> Dear all
>
>
> How to test if xts date exists ? is.null doesn't work
!length(SPY["2009-01-18"])
#[1] TRUE
!length(SPY["2009-01-16"])
#[1] FALSE
#or
!nrow(SPY["2009-01-16"])
A.K.
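A small wrapper sketch combining these suggestions:
has_xts_date <- function(x, d) nrow(x[d]) > 0
has_xts_date(SPY, "2009-01-18")   # FALSE: no row for that date
has_xts_date(SPY, "2009-01-16")   # TRUE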
On Saturday, January 25, 2014 10:27 AM, ce wrote:
Dear all
How to test if xts date exists ? is.null doesn't work. SPY["2009-01-18"]
doesn't exist but I can't catch it in my scri
Dear all
How to test if xts date exists ? is.null doesn't work. SPY["2009-01-18"]
doesn't exist but I can't catch it in my script.
library(quantmod)
getSymbols("SPY")
> SPY["2009-01-16"]
SPY.Open SPY.High SPY.Low SPY.Close SPY.Volume SPY.Adjusted
2009-01-16   85.86   85.99   83.
Testing for bimodality is really testing against unimodality. Hartigan and Hartigan
(1985) presented the dip test, which is implemented in the R package diptest
with a much better approximation of the test distribution. If the test
statistic is too high, unimodality is rejected. To estimate the dip po
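A minimal sketch (x is the vector whose distribution is in question):
library(diptest)
dip.test(x)   # a small p-value rejects unimodality, i.e. supports more than one mode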
Hi
I have distributions that are typically bimodal (see attached .pdf), and I
would like to test for bimodality, and then estimate the point between the two
modes, the dip in the distributions. Any help would be greatly appreciated.
thanks
felix
Please ignore. My apologies for the noise.
cheers,
Rolf Turner
Dear R users,
I am doing custom contrasts with R (comparison of group means).
Everything works fine, but I would like to test the 3 contrasts with and
without a Welch correction for unequal variances.
I can replicate SPSS results when equal variances are assumed, but I do
not manage to test the
Dear Kathrinchen
It seems to me that your question is about statistics rather than
about R and systemfit. If you find out how the statistical test should
be conducted theoretically, I can probably advise you how to implement
the test in R (systemfit).
Best wishes,
Arne
On 11 July 2013 13:21, Ka
Hi there,
I want to ask a question about any function in R that helps test the residuals of
a vector error correction model. I found it in Pfaff (2008), but he tests only
residuals for a VAR (vector autoregressive model).
I need to work out the Portmanteau test, normality test and heteroskedasticity test for
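A hedged sketch of one common route: fit the VECM with urca::ca.jo, convert it to its VAR representation with vars::vec2var, and apply the VAR residual diagnostics from the vars package ('mydata', K and r are illustrative):
library(urca); library(vars)
vecm <- ca.jo(mydata, type = "trace", K = 2)
var.rep <- vec2var(vecm, r = 1)
serial.test(var.rep, type = "PT.asymptotic")   # Portmanteau test
normality.test(var.rep)                        # multivariate normality test
arch.test(var.rep)                             # heteroskedasticity (ARCH) test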
Dear all,
I have set up a Labour Demand Error Correction Model for some German federal
states.
As I expect the labour markets to be correlated I used a Seemingly Unrelated
Regression using systemfit in R.
My Model is:
d(emp)_it = c + alpha*ln(emp)_i,t-1 + beta_1*ln(gdp)_i,t-1 +
beta_2*ln(wag
As this seems to be a statistics, not an R, question, it is off topic
here. Post on a statistics list like stats.stackexchange.com instead.
-- Bert
On Tue, Mar 12, 2013 at 6:22 AM, Brian Smith wrote:
> Hi,
>
> My apologies for the naive question!
>
> I have three overlapping sets and I want to f
Hi,
My apologies for the naive question!
I have three overlapping sets and I want to find the probability of finding
a larger/greater intersection for 'A intersect B intersect C' (in the
example below, I want to find the probability of finding more than 135
elements that are common in sets A, B &
Hi
I have 25 samples in my dataset. I have written a multiple regression model
and I would like to test it.
I would like to train my model on 20 samples and then test it on the 5
remaining. However, I would like to test the model several times, each time
using a different 5 samples out of the 25 and check ho
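A hedged sketch of that repeated hold-out scheme (the formula, response name y, and data frame 'dat' are illustrative):
set.seed(1)
rmse <- replicate(100, {
    test <- sample(nrow(dat), 5)              # hold out 5 of the 25 samples
    fit  <- lm(y ~ ., data = dat[-test, ])    # train on the remaining 20
    pred <- predict(fit, newdata = dat[test, ])
    sqrt(mean((dat$y[test] - pred)^2))
})
mean(rmse)   # average out-of-sample error over the 100 random splits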
Many thanks - this was very helpful!
Regards, Kay
On 28.01.2013 13:19, "Achim Zeileis" wrote:
> On Sun, 27 Jan 2013, Kay Cichini wrote:
>
> That said,
>>
>> wilcox_test(x ~ factor(y), distribution = "exact")
>>>
>>
>> or the same with oneway_test, i.e would be ok?
>>
>
> Yep, exactly.
>
> And
On Sun, 27 Jan 2013, Kay Cichini wrote:
That said,
wilcox_test(x ~ factor(y), distribution = "exact")
or the same with oneway_test, i.e would be ok?
Yep, exactly.
And you could also look at chisq_test(factor(x > 0) ~ factor(y),
distribution = approximate()) or something like that. Or
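A hedged sketch of both suggestions with the coin package (x is the continuous response, y the two-level group, in a data frame 'dat'):
library(coin)
wilcox_test(x ~ factor(y), data = dat, distribution = "exact")
chisq_test(factor(x > 0) ~ factor(y), data = dat, distribution = approximate())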
That said,
> wilcox_test(x ~ factor(y), distribution = "exact")
or the same with oneway_test, i.e would be ok?
2013/1/27 Achim Zeileis
> On Sun, 27 Jan 2013, Kay Cichini wrote:
>
> Thanks for the reply!
>>
>> Still, aren't there issues with 2-sample test vs y and excess zeroes
>> (->many tie
On Sun, 27 Jan 2013, Kay Cichini wrote:
Thanks for the reply!
Still, aren't there issues with 2-sample test vs y and excess zeroes
(->many ties), like for Mann-Whitney-U tests?
If you use the (approximate) exact distribution, that is no problem.
The problem with the Wilcoxon/Mann-Whitney tes
Thanks for the reply!
Still, aren't there issues with 2-sample test vs y and excess zeroes
(->many ties), like for Mann-Whitney-U tests?
Kind regards,
Kay
2013/1/26 Achim Zeileis
> On Fri, 25 Jan 2013, Kay Cichini wrote:
>
> Hello,
>>
>> I'm searching for a test that applies to a dataset (N=
On Fri, 25 Jan 2013, Kay Cichini wrote:
Hello,
I'm searching for a test that applies to a dataset (N=36) with a
continuous zero-inflated dependent variable
In a regression setup, one can use a regression model with a response
censored at zero. survreg() in survival fits such models, tobit()
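A hedged sketch of that censored-regression route (AER::tobit is a convenience interface to survreg; the variable names are illustrative):
library(AER)
fit <- tobit(resp ~ grp, left = 0, data = dat)
summary(fit)   # the grp coefficient tests the group difference allowing for the zeros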
Hello,
I'm searching for a test that applies to a dataset (N=36) with a continuous
zero-inflated dependent variable and only one nominal grouping variable
with 2 levels (balanced).
In fact there are 4 response variables of this kind which I plan to test
seperately - the amount of zeroes ranges fr
Hi,
M1 and M2 are extreme in that all or none of the variables have
parallel lines on the logit scale. One can try fitting a partial
POM, which remains fraught (but not as much as M2) because if
the lines intersect for a particular variable where the data lie
then there will be numerical problem
I want to test whether the proportional odds assumption for an ordered
regression is met.
The UCLA website points out that there is no mathematical way to test the
proportional odds assumption (http://www.ats.ucla.edu/stat//R/dae/ologit.htm),
and uses graphical inspection ("We were unable to locate
Hi Tammy,
I'm afraid this is pretty obviously homework, so we can't really do
much to help you. It's not a personal thing: just the considered
opinion of this list that giving you answers (or even hefty
hints) may undermine whatever intent your teacher has in assigning the
problem. Ther