On Sat, 13 Jan 2024 16:54:01 -0800
Bert Gunter wrote:
Well, this would seem to work:

e <- data.frame(Score = Score
                , Country = factor(Country)
                , Time = Time)

ncountry <- nlevels(e$Country)

func <- function(dat, idx) {
  if (length(unique(dat[idx, 'Country'])) < ncountry) NA
  else coef(lm(Score ~ Time + Country, data = dat[idx, ]))
}
It took me a little while to figure this out, but: the problem is
that if your resampling leaves out any countries (which is very likely),
your model applied to the bootstrapped data will have fewer coefficients
than your original model.
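A minimal sketch of that mechanism with made-up data (the group variable g and the nine observations are mine, not the poster's): when a level is absent from a subset, refitting produces fewer dummy coefficients.

```r
## made-up data: three groups of three observations
set.seed(42)
d <- data.frame(y = rnorm(9),
                g = rep(c("A", "B", "C"), each = 3))

length(coef(lm(y ~ factor(g), data = d)))                # 3: intercept + 2 dummies
length(coef(lm(y ~ factor(g), data = d[d$g != "C", ])))  # 2: level "C" never seen
```

This is why boot() chokes: it expects every replicate to return a statistic of the same length, hence the NA guard in func above.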
I tried this:
cc <- unique(e$Country)
func <- functio
Dear Duncan,
Dear Ivan,
Thank you both very much for your responses.
So, if I understand your answers correctly, the problem comes from this line:
coef(lm(Score~ Time + factor(Country)),data=data[idx,])
This line should be:
coef(lm(Score~ Time + factor(Country),data=data[idx,]))
If yes, now I
On Sat, 13 Jan 2024 20:33:47 +0000 (UTC)
varin sacha via R-help wrote:
> coef(lm(Score~ Time + factor(Country)),data=data[idx,])
Wrong place for the data=... argument. You meant to give it to lm(...),
but in the end it went to coef(...). Without the data=... argument, the
formula passed to lm() picks up Score, Time and Country from the global
environment instead of the resampled rows, so every replicate fits the same
model (which is why both CI endpoints come out identical).
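A minimal sketch of that failure mode with toy variables (x, y and d are mine, not the poster's): the misplaced data= is silently swallowed by coef()'s ... argument, so lm() falls back to the global x and y.

```r
x <- 1:10
y <- 2 * x
d <- data.frame(x = x, y = rev(y))   # deliberately the "wrong" data

coef(lm(y ~ x, data = d))[["x"]]     # -2: fitted to d, as intended
coef(lm(y ~ x), data = d)[["x"]]     #  2: data= went to coef(); lm() used the globals
```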
On 13/01/2024 3:33 p.m., varin sacha via R-help wrote:
Score=c(345,564,467,675,432,346,476,512,567,543,234,435,654,411,356,658,432,345,432,345,
345,456,543,501)
Country=c("Italy", "Italy", "Italy", "Turkey", "Turkey", "Turkey",
"USA", "USA", "USA", "Korea", "Korea", "Korea", "Portugal", "Po
Dear R-experts,
Below is my R code. It runs, BUT I get a strange result I was not expecting!
Indeed, the 95% percentile bootstrap CI is (-54.81, -54.81). Is anything
going wrong?
Best,
##
Score=c(345,564,467,675,432,346,476,512,567,543,234,435,654,411
This looks like R FAQ 7.31, the one so commonly asked that regulars on
this list have its number memorized!
> tau<-seq(0.02,0.98,0.02)
> match(round(0.12, 2), round(tau, 2))
[1] 6
> match(round(0.16, 2), round(tau, 2))
[1] 8
See also ?all.equal
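The mechanism in one sketch: seq() arithmetic in binary floating point does not land exactly on 0.12, so the exact comparison that match() uses fails, while a tolerance-based comparison succeeds.

```r
tau <- seq(0.02, 0.98, 0.02)

tau[6] == 0.12     # FALSE: 0.02 + 5 * 0.02 is not exactly 0.12 in binary
match(0.12, tau)   # NA, for the same reason
which(sapply(tau, function(t) isTRUE(all.equal(t, 0.12))))   # 6
```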
Sarah
On Mon, Nov 4, 2013 at 1:29 PM, Timo Schmid
R FAQ 7.31 .
-- Bert
On Mon, Nov 4, 2013 at 10:29 AM, Timo Schmid wrote:
Hello,
I want to match specific numbers against a vector, using the match() or
which() function, but I get unreasonable results. Does anybody have an idea
why I get an NA in match(0.12, tau)? Please see the code below:
> tau<-seq(0.02,0.98,0.02)
> tau
[1] 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.
I suggest you review your stat101 text:
A cdf is between 0 and 1, not a pdf, which is a **density** function.
> dnorm(0, sd=.01)
[1] 39.89423
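The distinction in a sketch: a density can be arbitrarily large at a point; it is the integral of the density (equivalently the CDF) that is bounded by 1.

```r
dnorm(0, sd = 0.01)                            # 39.89423: a density value; fine
pnorm(0, sd = 0.01)                            # 0.5: the CDF, always in [0, 1]
integrate(dnorm, -Inf, Inf, sd = 0.01)$value   # ~1: total probability
```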
-- Bert
On Sat, Oct 26, 2013 at 11:31 AM, Noah Silverman wrote:
Hello,
I’m seeing some strange behavior from the dbeta() function in R.
For example:
> dbeta(0.0001, .4, .6 )
[1] 76.04555
How is it possible to get a PDF that is greater than 1??
Am I doing something wrong here, or is this a quirk of R?
Thanks,
--
Noah Silverman, M.S., C.Phil
UCLA Departme
Dear R users,
I want to check whether two matrices are identical up to permuting the rows
or the columns, or changing the signs of one or more columns.
isomorphic <- function (m1, m2) {
combs.c <- combn(ncol(m1), 2)
nc <- ncol(combs.c)
ind.c <- vector("logical", nc)
for (i in 1:nc) {
Subject: [R] Strange results from Multivariate Normal Density
Hello,
I'm using dmnorm from the package {mnormt} and getting strange results.
First, according to the documentation, dmnorm should return a vector of
densities, and I'm only getting one value returned (which is what I would
expect). I've been interpreting this as the joint density of all values
On 18/01/2010, at 9:02 AM, Joshua Wiley wrote:
Hello Alan,
Following up on Gabor's suggestion, if you have different packages
loaded, one of those packages could have a function 'diff' that would
not show up with a call to ls() so you could also try
search()
which will show you what packages are loaded.
HTH,
Joshua
On Sun, Jan 17, 2010 a
It's likely that in one of your sessions you do have a diff and in the
other you don't and this has nothing to do with XP vs. Windows 7. In
the session with no diff, the only diff around is the function, diff,
and you can't subscript a function.
Try this to see what variables are in your workspace:
ls()
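A minimal sketch of the fix (the vectors are made up, not the poster's file): initialize diff as a vector first, so the name no longer resolves to the function stats::diff, which cannot be subscript-assigned.

```r
x   <- c(5, 3, 8, 1)
x10 <- c(4, 4, 4, 4)

diff <- character(length(x))   # initialize first; this masks the function diff()
for (i in seq_along(x)) {
  diff[i] <- if (x[i] > x10[i]) "b" else "s"
}
diff                           # "b" "s" "b" "s"
```

Afterwards, rm(diff) restores access to stats::diff.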
Hello,
I am a newbie.
I can run the following code stored in "test.txt" without error using my
XP machine:
x <- scan("C:\\Rwork\\A.txt")
x10 = filter(x, rep(1/10,10), sides=1)
x
x10
for(i in 10:length(x)){
if (x[i] > x10[i]) diff[i]="b" else diff[i]="s"
}
However, if I run it on another PC
--- On Tue, 9/8/09, Chunhao Tu wrote:
> From: Chunhao Tu
> Subject: [R] strange results in summary and IQR functions
> To: r-help@r-project.org
> Received: Tuesday, September 8, 2009, 11:09 AM
Erik Iverson
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Chunhao Tu
Sent: Tuesday, September 08, 2009 10:09 AM
To: r-help@r-project.org
Subject: [R] strange results in summary and IQR functions
Dear R users,
Something is strange in summary and IQR. Suppose I have a data set and I
would like to find Q1, Q2, Q3 and the IQR.
x<-c(2,4,11,12,13,15,31,31,37,47)
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   2.00   11.25   14.00   20.30   31.00   47.00
> IQR(x)
[1] 19.75
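For what it is worth (an addition of mine, not from the thread): summary(), quantile() and IQR() all default to quantile type 7, while many textbooks define Q1/Q3 differently; the type= argument reconciles the two.

```r
x <- c(2, 4, 11, 12, 13, 15, 31, 31, 37, 47)

quantile(x, c(0.25, 0.75))             # type 7 (default): 11.25 31.00
quantile(x, c(0.25, 0.75), type = 2)   # a common textbook rule: 11 31
IQR(x)                                 # 31.00 - 11.25 = 19.75
IQR(x, type = 2)                       # 31 - 11 = 20
```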
[EMAIL PROTECTED] wrote:
I'm running lmer repeatedly on artificial data with two fixed factors (called
'gender' and 'stress') and one random factor ('speaker'). Gender is a
between-speaker variable, stress is a within-speaker variable, if that matters.
Each dataset has 100 rows from each of 20 speakers, 2000 rows in all.
Dear Gustaf,
I can think of two reasons why the two tests can disagree.
First, the t-test from the summary() output is based on the covariance
matrix of the coefficients, while the F-test in the anova() output is
based on fitting alternative models. The two are not in general the
same.
Second,
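A sketch of the first point with simulated data (x1, x2 and y are mine, not Gustaf's model): summary() tests each coefficient given all the others, while anova() tests terms sequentially, so with correlated predictors they can tell different stories; only for the last term must F equal t squared.

```r
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.3)   # strongly correlated with x1
y  <- x1 + rnorm(100)

fit <- lm(y ~ x1 + x2)
summary(fit)$coefficients[, "t value"]   # marginal t-tests
anova(fit)[, "F value"]                  # sequential F-tests: x1 first, then x2 after x1

## the two agree only for the last term: F = t^2
anova(fit)["x2", "F value"]
summary(fit)$coefficients["x2", "t value"]^2
```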
Hi,
I have been struggling with this problem for some time now. Neither the
internet nor books have been able to help me.
## I have a factorial design with counts (fruits) as the response variable.
> str(stubb)
'data.frame': 334 obs. of 5 variables:
$ id : int 6 23 24 25 26 27 28 29 31 34 ...
$ infl.treat : Facto
On 16/10/2007, at 8:30 AM, pintinho wrote:
>
> Hi,
>
> I am getting a strange result while converting a string vector into
> numeric
> vector:
>
>> Datas[1]
> [1] 37315
>
>> as.numeric(Datas[1])
> [1] 2
>
> Can anyone help me??
It would seem that ``Datas'' is a ***factor*** and NOT a ``string vector''.
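The standard repair (see R FAQ 7.10), sketched with a made-up stand-in since the real Datas is not shown in full:

```r
Datas <- factor(c("37315", "37316", "37315"))   # hypothetical stand-in

as.numeric(Datas)                  # 1 2 1: the internal level codes, not the values
as.numeric(as.character(Datas))    # 37315 37316 37315: the intended numbers
as.numeric(levels(Datas))[Datas]   # same result; faster for long factors
```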
Hi,
I am getting a strange result while converting a string vector into a numeric
vector:
> Datas[1]
[1] 37315
> as.numeric(Datas[1])
[1] 2
Can anyone help me?