It would be easier to diagnose the problem if you included an example
illustrating exactly what you did. I'll guess:
> a <- list(3,4,5)
> mean(a)
[1] NA
Warning message:
In mean.default(a) : argument is not numeric or logical: returning NA
> mean(as.numeric(a))
[1] 4
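as.numeric() works here because every element of the list is a single number; a slightly more general sketch of the same fix uses unlist():

```r
# A list whose elements are all scalar numbers can be flattened
# with unlist() before taking the mean
a <- list(3, 4, 5)
mean(unlist(a))   # 4
```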
But that's just a guess.
I have no clue why you think it does! It is a curiosity.
albyn
On Thu, May 23, 2013 at 04:38:18PM +, Nordlund, Dan (DSHS/RDA) wrote:
After a bit of playing around, I discovered that
sample() does something similar in other situations:
set.seed(105021)
sample(1:5,1,prob=c(1,1,1,1,1))
[1] 3
set.seed(105021)
sample(1:5,1)
[1] 2
set.seed(105021)
sample(1:5,5,prob=c(1,1,1,1,1))
[1] 3 4 2 1 5
set.seed(105021)
sample(1:5,5)
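The pattern above can be checked directly: with the same seed, supplying a uniform prob vector takes a different code path through the random-number stream than omitting it (a sketch; the seed is arbitrary):

```r
set.seed(1)
a <- sample(1:5, 5)                     # default algorithm
set.seed(1)
b <- sample(1:5, 5, prob = rep(1, 5))   # prob supplied: different code path
identical(a, b)                          # typically FALSE, as in the transcript
```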
I once had a discussion with an economist who told me
in almost these exact words:
"I don't care what the data say, the theory is so clear".
albyn
On 2013-04-26 9:30, William Dunlap wrote:
The prior for the incompetence/malice question is usually best set
pretty heavily in favour of incompetence.
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Albyn Jones
Reed College
jo...@reed.edu
I stayed out of this one thinking it was probably a homework exercise.
After others have responded, I'll go ahead with my gloss on
Bill's function...
The specific problem is really of the form
exp(a) - exp(a+eps) = exp(a)*(1-exp(eps))
So even though we can't compute exp(1347), we can work on the log scale:
log(exp(a) - exp(a+eps)) = a + log(1 - exp(eps)).
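A sketch of that factoring trick in R, using log1p for accuracy near 1 (the particular numbers are illustrative):

```r
# log( exp(a) - exp(b) ) computed stably for b < a, without forming exp(a)
logdiffexp <- function(a, b) a + log1p(-exp(b - a))

logdiffexp(1347, 1346)   # finite, even though exp(1347) overflows
exp(1347) - exp(1346)    # Inf - Inf = NaN in double precision
```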
Dear Cat
My apologies for presuming...
Here's a "primitive" solution: compute a t-statistic or CI.
t = (beta-hat - 1)/SE(beta-hat), compare to qt(.975, res.df)
Or better, compute the 95% confidence interval:
beta-hat + c(-1,1)*qt(.975, res.df)*SE(beta-hat)
albyn
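As a sketch with simulated data (the variable names and numbers are made up for illustration), testing H0: slope = 1 via the interval:

```r
set.seed(7)
x <- rnorm(50)
y <- 1.1 * x + rnorm(50)   # true slope near 1
fit <- lm(y ~ x)

est <- coef(summary(fit))["x", "Estimate"]
se  <- coef(summary(fit))["x", "Std. Error"]
ci  <- est + c(-1, 1) * qt(0.975, fit$df.residual) * se
ci                          # does the interval cover 1?
```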
On 2012-11-24 18:05, Catri
More links on reproducible research:
Opinion: Open and Free: Software and Scientific Reproducibility.
Seismological Research Letters 83(5), September/October 2012.
Roger D. Peng, Reproducible Research in Computational Science.
Science, 2 December 2011: 1226-1227.
albyn
On 2012-10-30
Have you looked at Aquamacs (Emacs for the Mac)?
It's at aquamacs.org.
albyn
On 2012-09-26 17:48, Steven Wolf wrote:
Hi everyone,
I've recently moved from using a windows machine to a Mac (some might
call it an upgrade, others not…I'll let you be the judge). Once I
started using Notepad ++ on
fects models.
> Depends: graphics, stats, R (>= 2.13)
> Imports: lattice
> Suggests: Hmisc, MASS
> LazyLoad: yes
> LazyData: yes
> License: GPL (>= 2)
> BugReports: http://bugs.r-project.org
> Packaged: 2012-05-23 07:28:59 UTC; ripley
> Repository: CRAN
> Da
y that I can construct this so that my
> array is exactly as long as the number of spots I need to reach my threshold
> value?
>
> Thanks,
>
> -Steve
>
> -Original Message-
> From: Albyn Jones [mailto:jo...@reed.edu]
> Sent: Tuesday, April 10, 2012 11:46 AM
Robin Lock at St Lawrence has done this for hockey, see
http://it.stlawu.edu/~chodr/faq.html
As I recall, he has a Poisson regression model with parameters for
offense and defense, and perhaps home 'field' advantage.
I confess I am skeptical that this is the right approach for football
- te
FWIW, the integral of a mixture density is the same mixture of the
CDFs, so you can use the pbeta functions:
pcustom <- function(x) (pbeta(x,2,6) + pbeta(x,6,2))/2
albyn
Quoting Gerhard :
On Tuesday, 3 January 2012, 19:51:36, Prof. Dr. Matthias Kohl wrote:
D <- AbscontDistribution(d = f
right. replace dbetas with pbetas.
albyn
Quoting Duncan Murdoch :
On 03/01/2012 1:33 PM, Albyn Jones wrote:
What do quantiles mean here? If you have a mixture density, say
myf <- function(x,p0) p0*dbeta(x,2,6) + (1-p0)*dbeta(x,6,2)
then I know what quantiles mean. To find the Pth quantile use uniroot
to solve for the x such that myf(x,p0) - P =0.
albyn
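Following the correction above (pbeta, not dbeta), a sketch of the Pth quantile of the mixture via uniroot on the mixture CDF:

```r
# Mixture CDF: the same mixture of the component CDFs
Fmix <- function(x, p0) p0 * pbeta(x, 2, 6) + (1 - p0) * pbeta(x, 6, 2)

# Pth quantile: solve Fmix(x, p0) - P = 0 on (0, 1)
qmix <- function(P, p0) uniroot(function(x) Fmix(x, p0) - P, c(0, 1))$root

qmix(0.5, 0.5)   # median of the 50/50 mixture; 0.5 here by symmetry
```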
Quoting VictorDelgado :
Gerhard wrot
Taral
The general problem of finding subgraphs with a given structure ("motifs")
is hard (ie computationally expensive). There is some literature...
have you looked at the package igraph, function graph.motifs()?
albyn
Quoting Taral :
I have the adjacency matrix of a graph. I'm trying to fi
and show them on the county FIPS level?
> >
> >"breakdown" suggests a factor construct. If so, then :
> >
> >?interaction
> >
> >But the "show" part of the question remains very vague.
> >
> > Can
in the expected direction, I think you
> can just leave out the multiplication by 2 and get the right answer ...
>
> adding to my collection.
>
e test is a moot point!
>
> David Cross
> d.cr...@tcu.edu
> www.davidcross.us
> On Apr 18, 2011, at 5:14 PM, Albyn Jones wrote:
>
> > First, note that you are doing two separate power calculations,
> > one with n=2 and sd = 1.19, the other wit
se as well.
albyn
Quoting Albyn Jones :
Hi Kristian
The obvious approach is to treat it like any other MLE problem: evaluation
of the log-likelihood is done as often as necessary for the optimizer
you are using: eg a call to optim(psi,LL,...) where LL(psi) evaluates
the log likelihood at psi. There may be computational shortcuts
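As a sketch of that pattern on a toy normal likelihood (the data and parametrization are made up for illustration):

```r
set.seed(1)
x <- rnorm(200, mean = 5, sd = 2)

# psi = (mu, log sigma); LL returns the negative log-likelihood,
# which optim minimizes by default
LL <- function(psi) -sum(dnorm(x, mean = psi[1], sd = exp(psi[2]), log = TRUE))

fit <- optim(c(0, 0), LL)
fit$par[1]        # close to mean(x)
exp(fit$par[2])   # close to the MLE of sigma
```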
Presumably the null hypothesis is that at least one of the differences
is larger in absolute magnitude than the chosen epsilon. I expect that
your procedure would be conservative: if it rejects the null
hypothesis, then you are ok, but presumably what you really want would
be based on a joint
> --
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> greg.s...@imail.org
> 801.408.8111
>
>
> > -Original Message-
> > From: Albyn Jones [mailto:jo...@reed.edu]
> > Sent: Sunday,
testing the null hypothesis of no interaction is not the same as a
test of equivalence for the two differences. There is a literature on
tests of equivalence. First you must develop a definition of
equivalence, for example the difference is in the interval (-a,a).
Then, for example, you
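One common formalization is the two one-sided tests (TOST) procedure; a sketch, where `a` is the chosen equivalence margin and the data are made up:

```r
set.seed(2)
g1 <- rnorm(30, 10.0)
g2 <- rnorm(30, 10.1)
a  <- 0.5   # declare equivalence if the true difference lies in (-a, a)

# Two one-sided t tests: reject both to conclude equivalence
p_lower <- t.test(g1, g2, mu = -a, alternative = "greater")$p.value
p_upper <- t.test(g1, g2, mu =  a, alternative = "less")$p.value
max(p_lower, p_upper)   # TOST p-value: equivalence if below alpha
```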
On Thu, Dec 02, 2010 at 06:23:45PM -0500, Ravi Varadhan wrote:
> A simple solution is to locate the mode of the integrand, which should be
> quite easy to do, and then do a coordinate shift to that point and then
> integrate the mean-shifted integrand using `integrate'.
>
> Ravi.
Translation: t
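A sketch of the idea: a sharply peaked integrand far from the origin can be missed by integrate(), but shifting coordinates to its mode fixes it (the numbers are illustrative):

```r
f <- function(x) exp(-(x - 1000)^2 / 2)   # mode at x = 1000

integrate(f, -Inf, Inf)$value   # may come out near 0: the quadrature
                                # points can miss the narrow peak
g <- function(u) f(u + 1000)    # mean-shifted integrand, mode at 0
integrate(g, -Inf, Inf)$value   # ~ sqrt(2 * pi) = 2.5066...
```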
The transition matrix is a collection of conditional distributions.
It would seem natural to compute the entropy of the stationary
distribution.
albyn
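A sketch for a first-order chain (the transition matrix here is a made-up example): get the stationary distribution from the leading left eigenvector, then apply the usual entropy formula. The entropy *rate* of the chain instead weights each row's conditional entropy by the stationary probabilities:

```r
P <- matrix(c(0.9, 0.1,
              0.4, 0.6), nrow = 2, byrow = TRUE)   # rows sum to 1

# Stationary distribution: left eigenvector of P for eigenvalue 1
e <- eigen(t(P))
statd <- Re(e$vectors[, 1])
statd <- statd / sum(statd)

H <- function(p) -sum(p[p > 0] * log(p[p > 0]))

H(statd)                      # entropy of the stationary distribution
sum(statd * apply(P, 1, H))   # entropy rate of the chain
```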
Quoting "Wilson, Andrew" :
Does anyone have any "R" code for computing the entropy of a simple
first or second order Markov chain, given a
Take a look at Sage, which is an open source alternative. It already
integrates R (http://www.sagemath.org)
albyn
Quoting David Bickel :
What are some effective ways to leverage the strengths of R and
Mathematica for the analysis of a single data set?
More specifically, are there an
Paul
The FIFA database doesn't have times that goals are scored either.
The best I have found is at http://www.worldcup-history.com/, but you
have to check individual match reports for the times that goals are
scored.
albyn
On Mon, Jun 07, 2010 at 09:50:34PM +0100, Paul wrote:
> Hello,
>
> Sorry
Sums of correlated increments have the same correlation as the original
variables...
library(mvtnorm)
mu <- c(0, 0)                         # mu and S were not shown; illustrative values
S  <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
X <- matrix(0, nrow = 1000, ncol = 2)
for (i in 1:1000) {
  Y <- rmvnorm(1000, mean = mu, sigma = S)
  X[i, ] <- colSums(Y)
}
cor(X)   # (not cor(Y)); e.g., from the original run:
#           [,1]      [,2]
# [1,] 1.0000000 0.4909281
# [2,] 0.4909281 1.0000000
Note: this procedure assumes that all clusters have the same covariance matrix.
albyn
On Wed, Mar 03, 2010 at 01:23:37PM -0800, Phil Spector wrote:
> The manhattan distance and the Mahalanobis distances are quite different.
> One of the main differences is that a covariance matrix is necessary
Greg
Nice problem: I wasted my whole day on it :-)
I was explaining my plan for a solution to a colleague who is a
computer scientist; he pointed out that I was trying to re-invent the
wheel known as dynamic programming. Here is my code; apparently it is
called "bottom-up dynamic programming".
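The code itself did not survive in this archive, so here is a generic bottom-up sketch of the technique (minimum coins to reach a target; the problem instance is made up):

```r
# Bottom-up dynamic programming: fill a table from small subproblems upward
min_coins <- function(target, coins) {
  best <- c(0, rep(Inf, target))   # best[v + 1] = fewest coins summing to v
  for (v in 1:target) {
    for (cn in coins[coins <= v]) {
      best[v + 1] <- min(best[v + 1], best[v - cn + 1] + 1)
    }
  }
  best[target + 1]
}

min_coins(11, c(1, 4, 5))   # 3 (5 + 5 + 1)
```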
Is the Lorentz distribution another name for the Cauchy distribution
with density
f(x) = \frac{1}{\pi} \frac{1}{1+x^2}
possibly with location and scale parameters?
albyn
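The match with R's built-in Cauchy functions can be checked directly:

```r
x <- seq(-3, 3, by = 0.5)
all.equal(dcauchy(x), 1 / (pi * (1 + x^2)))   # TRUE
```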
On Mon, Dec 21, 2009 at 05:04:05PM +0100, Markus Häge wrote:
> Hi there,
>
> I was looking for a Lorentz-distribution in
k -> infinity gives the normal distribution. You probably don't care
much about the difference between k=1000 and k=10, so you might
try reparametrizing df on [1,infinity) to a parameter on [0,1]...
albyn
On Thu, Dec 10, 2009 at 02:14:26PM -0600, Barbara Gonzalez wrote:
> Given X1,...,Xn ~ t
oops, I see the answer in your question: 15...
Quoting Albyn Jones :
what is the dimension of your data?
you might try projecting the points into planes defined by 3 cluster centers.
plot, for each cluster, a density plot or histogram of distances to
the cluster center, and perhaps overlay the density curve for points
not in the cluster.
albyn
Quoting Iuri
If the matrices are not all the same size, then the order of
computation will make a difference. Simple example: A is 1xn, B is
nx1, C is 1xn.
A(BC) takes 2n^2 multiplies (BC is an n x n matrix), while (AB)C
requires only 2n (AB is a scalar).
albyn
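The difference shows up directly in timings (a sketch; n is chosen arbitrarily):

```r
n <- 2000
A <- matrix(rnorm(n), nrow = 1, ncol = n)
B <- t(A)
C <- A

system.time(A %*% (B %*% C))   # forms an n x n intermediate
system.time((A %*% B) %*% C)   # scalar intermediate: much faster
```

Both orders give the same answer (up to rounding); only the cost differs.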
Quoting Todd Schneider :
Hi all,
I am looking for a function like cumprod() that work
There is a dwt() in package:waveslim, reading the help file:
dwt(x, wf="la8", n.levels=4, boundary="periodic")
wf: Name of the wavelet filter to use in the decomposition. By
default this is set to '"la8"', the Daubechies orthonormal
compactly supported wavelet of length L=8
--
> ---
>
> Ravi Varadhan, Ph.D.
>
> Assistant Professor, The Center on Aging and Health
>
> Division of Geriatric Medicine and Gerontology
>
> Johns Hopkins University
>
> Ph: (410) 502-2619
>
> Fax: (410) 614-9625
>
> Email: rvarad...@jhmi.edu
I just tried the following shot in the dark:
generate an N by N stochastic matrix, M. I used
M <- matrix(runif(9), nrow = 3)
M <- M / rowSums(M)
e <- eigen(M)
e$values[2] <- 0.7   # pick your favorite lambda; you may need to fiddle
                     # with the others to guarantee this is second largest
Quoting David Winsemius :
In insurance situation there is typically a cap on the covered
losses and there is also typically an amount below which it would
not make sense to offer a policy. So a minimum and a maximum are
sensible assumptions about loss distributions in many real modeling
situations.
can I
assume A = min(amounts) and B = max(Amounts)
Regards
Maithili
--- On Wed, 7/10/09, Albyn Jones wrote:
From: Albyn Jones
Subject: Re: [R] Parameters of Beta distribution
To: jlu...@ria.buffalo.edu
Cc: "Maithili Shiva" ,
r-help@r-project.org, r-help-boun...@r-project.org
Are A and B known? That is, are there known upper and lower bounds
for this credit loss data? If not, you need to think about how to
estimate those bounds. Why do you believe the data have a beta distribution?
albyn
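If bounds A and B are in fact known (an assumption), a method-of-moments sketch: rescale the data to (0,1) and match the beta mean and variance (the function name and test data are made up):

```r
fit_beta_mom <- function(x, A, B) {
  z <- (x - A) / (B - A)   # rescale to (0, 1)
  m <- mean(z)
  v <- var(z)
  shape1 <- m * (m * (1 - m) / v - 1)
  shape2 <- (1 - m) * (m * (1 - m) / v - 1)
  c(shape1 = shape1, shape2 = shape2)
}

set.seed(3)
x <- rbeta(5000, 2, 5)    # toy data already on (0, 1)
fit_beta_mom(x, 0, 1)     # roughly (2, 5)
```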
On Wed, Oct 07, 2009 at 09:03:31AM -0400, jlu...@ria.buffalo.edu wrote:
> Res
Scaling changes the metric, i.e. which things are close to each other.
There is no reason to expect the picture to look the same when you
change the metric.
On the other hand, your two pictures don't look so different to me.
It appears that the scaled plot is similar to the unscaled plot, with
the
It appears that your difficulty lies in miscounting the number of intervals.
cut(NP, breaks = c(0, 1, 2, 3, 4, max(NP)))
 [1] (0,1] (0,1] (1,2] (0,1] (0,1] (1,2] (1,2] (0,1] (3,4] (0,1] (4,6] (2,3] (2,3] (0,1]
[16] (4,6] (2,3] (4,6] (0,1] (4,6] (0,1] (1,2] (1,2] (1,2] (3,4] (3,4] (0,1] (1,2] (0,1] ...
Your data will have all sorts of patterns (diurnal, seasonal) in
addition to long term trend. I'd start by smoothing out the cyclic
patterns with loess or gam, then use a secant approximation to the
slope on the smoothed series.
albyn
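A sketch of that pipeline on made-up hourly data (smooth away a daily cycle with loess, then take secant slopes of the fitted values):

```r
set.seed(4)
tt <- 1:500
y  <- 0.01 * tt + sin(2 * pi * tt / 24) + rnorm(500, sd = 0.3)  # trend + cycle + noise

sm   <- loess(y ~ tt, span = 0.4)   # smooth out the cyclic pattern
yhat <- predict(sm)

lag   <- 24                         # secant over one full cycle
slope <- (yhat[(lag + 1):500] - yhat[1:(500 - lag)]) / lag
mean(slope)                         # near the true trend, 0.01
```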
On Fri, Jul 24, 2009 at 06:13:00PM +0530, Yogesh Tiwari wro
I don't think you want assign() here.
> x1 = rnorm(20)
> min(x1)
[1] -0.9723398
> min(eval(paste("x", 1, sep = "")))           # not the solution
[1] "x1"
> min(eval(as.name(paste("x", 1, sep = ""))))  # a solution
[1] -0.9723398
try:
for (i in 1:27) {
  xener[i] <- min(eval(as.name(paste("sa", i, sep = ""))))  # get(paste0("sa", i)) is equivalent
}
It sounds like you might want to draw the convex hull for each group
of points. There is a package called "chplot" which appears to
do this, though I haven't used it...
albyn
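Base R can also do this directly with chull(); a sketch with made-up points:

```r
set.seed(5)
pts <- matrix(rnorm(60), ncol = 2)

h <- chull(pts)            # indices of the convex hull, in order
plot(pts)
polygon(pts[h, ], border = "red")
```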
On Thu, Jul 16, 2009 at 06:23:54PM +0100, Ahmed, Sadia wrote:
> Hi,
>
> I'll try to keep my question brief and to the p
As you seem to be aware, the matrix is poorly conditioned:
> kappa(PLLH,exact=TRUE)
[1] 115868900869
It might be worth your while to think about reparametrizing.
albyn
On Wed, Jun 17, 2009 at 11:37:48AM -0400, avraham.ad...@guycarp.com wrote:
>
> Hello.
>
> I am trying to invert a matrix, and
On Wed, Apr 22, 2009 at 08:26:51PM -0700, Ben Bolker wrote:
>
> ??? octave is a Matlab clone, not a Mathematica clone (I would be
> interested in an open source Mathematica clone ...) ???
>
You might take a look at Sage. It is not a mathematica clone, but
open source mathematical software
That's an interesting problem.
My first thought was to choose the closest positive definite matrix to
the given matrix, say in the least squares sense. However, the
2x2 diagonal matrix with diagonal (1,0) makes it clear that there
isn't a closest pd symmetric matrix.
Perhaps multiple imputatio
try Cholesky() in package Matrix.
albyn
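If installing Matrix is not an option, the LDL' factors can also be recovered from base R's chol(), as a sketch (the 2x2 matrix is a made-up positive definite example):

```r
A <- matrix(c(4, 2, 2, 3), 2, 2)   # small positive definite example

R <- chol(A)                       # upper triangular, A = t(R) %*% R
d <- diag(R)
L <- t(R) %*% diag(1 / d)          # unit lower triangular
D <- diag(d^2)                     # positive diagonal

all.equal(A, L %*% D %*% t(L))     # TRUE: A = L D L'
```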
On Tue, Mar 10, 2009 at 02:33:01PM -0700, Manli Yan wrote:
> Hi everyone:
> I try to use r to do the Cholesky Decomposition,which is A=LDL',so far I
> only found how to decomposite A in to LL' by using chol(A),the function
> Cholesky(A) doesnt work,any
Look at the data structure produced by summary()
names(summary(lm.D9))
 [1] "call"          "terms"         "residuals"     "coefficients"
 [5] "aliased"       "sigma"         "df"            "r.squared"
 [9] "adj.r.squared" "fstatistic"    "cov.unscaled"
Now look at the data structure for the
I also received the spam, and have not registered for the conference.
I decided to do the noble experiment, and used their web interface to
unsubscribe to the newsletter to which they claim I had subscribed,
and for good measure added my name to their "do not contact" list.
albyn
On Tue, Feb 24,
The computation 2*sum(dbinom(c(10:25),25,0.061)) does not correspond
to any reasonable definition of p-value. For a symmetric
distribution, it is fine to use 2 times the tail area of one tail.
For an asymmetric distribution, this is silly.
The standard definition given in elementary texts is usual
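One standard alternative (the one binom.test uses) sums the probabilities of all outcomes no more likely than the observed one; a sketch with the numbers from the thread:

```r
x <- 10; n <- 25; p <- 0.061

d <- dbinom(0:n, n, p)
pval <- sum(d[d <= dbinom(x, n, p)])   # "minimum likelihood" two-sided p-value

pval
binom.test(x, n, p)$p.value            # same definition, for comparison
```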
dip {diptest} is Hartigan's dip test.
albyn
On Tue, Feb 03, 2009 at 05:42:34PM -0500, Andrew Yee wrote:
> I'm not sure where to begin with this, but I was wondering if someone could
> refer me to an R package that would test to see if a distribution fits a
> bimodal distribution better than a uni
It is easy to make a qqplot for the gamma; suppose that the sample parameters
are 1.101 and 2.49, the data in x:
plot(qgamma(ppoints(x),1.101,2.49),sort(x))
see also lattice:qqmath
albyn
Quoting Dan31415 :
Ah yes, that does produce a nice plot. Can i just ask what exactly it is
sho
Yes, computing WB %*% t(WB) may be the problem, by either method.
If the goal is to compute the inverse of WB %*% t(WB), you should
consider computing the singular value or QR decomposition for the
matrix WB.
If WB = Q%*%R, where Q is orthogonal, then WB %*% t(WB) =R %*%t(R),
and the inverse of
You are plotting the histogram in the frequency scale. A quick look
at the doc page for hist() would reveal the freq option:
hist(x,freq=FALSE)
then you can add the densities with lines()
albyn
Quoting Xin Shi :
Dear:
I am trying to plot the histogram graph for my observed data.
The empirical distribution gives probability 1/n to each of n observations.
Rather than sampling the unit interval, just resample the dataset.
If x is your dataset, and you want an independent sample of size k,
sample(x,size=k,replace=TRUE)
albyn
On Tue, Jan 06, 2009 at 02:39:17PM -0800,
One can't tell for sure without seeing the function, but I'd guess
that you have a numerical issue. Here is an example to reflect upon:
f=function(x) (exp(x)-exp(50))*(exp(x)+exp(50))
uniroot(f,c(0,100))
$root
[1] 49.7
$f.root
[1] -1.640646e+39
$iter
[1] 4
$estim.prec
[1] 6.103516e-
On Fri, Dec 19, 2008 at 05:44:47AM -0800, kdebusk wrote:
> I have data sets from three different sites. None of the data sets are
> normally distributed and can not be log-transformed to a normal
> distribution. I would like to compare the data sets to see if any of
> the sites are similar to each
rle(x) gives the run length encoding of x.
rle(x>0) or rle(sign(x)) will do this for positive and negative values of x.
albyn
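For example, on a small made-up vector:

```r
x <- c(1.2, -0.5, -0.3, 2.0, 0.1, -4.0)

rle(x > 0)
# Run Length Encoding
#   lengths: int [1:4] 1 2 2 1
#   values : logi [1:4] TRUE FALSE TRUE FALSE
```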
On Mon, Dec 08, 2008 at 03:24:50PM +, [EMAIL PROTECTED] wrote:
> Dear R Users,
>
> Is there a package or some functionality in R which returns statistics on
> "runs