In some cases fligner.test() can produce a very small p-value even when both
groups have constant variance.
Here is an illustration:
fligner.test(c(1,1,2,2), c("a","a","b","b"))
# p-value = NA
But:
fligner.test(c(1,1,1,2,2,2), c("a","a","a","b","b","b"))
# p-value < 2.2e-16
Hello,
In certain cases fligner.test() returns a NaN statistic and an NA p-value.
The issue occurs when, after centering each group by its median, all absolute
residuals become constant, which then leads to identical ranks.
Below are a few examples:
# 2 groups, 2 values each
# issue is caused by constant residual values
# Fligner-Killeen:med chi-squared = 0, df = 1, p-value = 1
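The degenerate case can be traced by hand. The sketch below mirrors my reading of the score construction inside fligner.test() (not its verbatim source):

```r
x <- c(1, 1, 2, 2)
g <- factor(c("a", "a", "b", "b"))

# absolute residuals after centering each group by its median
res <- abs(x - ave(x, g, FUN = median))   # all 0 here

# identical residuals give fully tied ranks, hence identical normal scores
a <- qnorm((1 + rank(res) / (length(x) + 1)) / 2)

var(a)       # 0: the denominator of the chi-squared statistic vanishes
0 / var(a)   # NaN
```

With all scores identical, the statistic becomes 0/0, i.e. NaN.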
However, I am aware that other tests implemented in stats:: sometimes throw
errors in similar situations.
Maybe someone more familiar with the behaviour and design philosophy of
stats:: can weigh in here?
Warm regards,
Karolis K
Thank you for the update.
I understand that leaving NaN/NA in these cases can make sense.
But it feels to me that this situation could produce a warning, to inform
the user of what happened?
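A check at the wrapper level would be enough; here is a hypothetical sketch (the function name and message are my own, not anything proposed for stats::):

```r
# Hypothetical wrapper around stats::fligner.test() that warns when the
# statistic degenerates to NaN (all absolute deviations tied)
fligner_test_checked <- function(x, g, ...) {
  result <- stats::fligner.test(x, g, ...)
  if (is.nan(result$statistic)) {
    warning("all absolute deviations from the group medians are tied; ",
            "the Fligner-Killeen statistic is undefined (NaN)")
  }
  result
}
```

Calling it on the degenerate example above still returns the NaN/NA result, but with an explicit warning attached.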
Kind regards,
Karolis K.
> On Jan 24, 2021, at 6:52 PM, Kurt Hornik wrote:
system.time(anyNA(x))
#   user  system elapsed
#  0.001   0.000   0.001
Warm regards,
w",
1:10), paste("col", 1:2)), class="mm")
m@row1
However, it seems that it currently does not support autocompletion.
Wouldn't it make sense to add a method like .EtaNames() which would provide tab
autocompletions for x@ in the same way they currently work for x$?
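For comparison, `$` completion is already driven by the utils::.DollarNames() S3 generic; a slot-name hook could plausibly follow the same pattern. A sketch (the S4 stand-in for the "mm" class and its slot names are my own assumption):

```r
library(methods)

# An S4 stand-in for the "mm" class from the example (slot names assumed)
setClass("mm", slots = c(row1 = "numeric", row2 = "numeric"))

# The existing hook for x$<TAB>: utils calls the .DollarNames() generic
# and completes with whatever names the method returns.  An analogous
# generic for x@<TAB> could return slotNames() in the same way.
.DollarNames.mm <- function(x, pattern = "") {
  grep(pattern, methods::slotNames(class(x)), value = TRUE)
}
```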