On Thu, Mar 20, 2014 at 1:28 PM, Ted Byers <r.ted.by...@gmail.com> wrote:
>
> Hervé Pagès mentions the risk of irreproducibility across three minor
> revisions of version 1.0 of Matrix.  My gut reaction would be that if the
> results are not reproducible across such minor revisions of one library,
> they are probably just so much BS.
>

Perhaps this is just terminology, but what you refer to is what I would
generally call 'replication'. Of course, being able to replicate results
with other data or other software is important for validating claims. But
being able to reproduce how the original results were obtained is an
important part of this process as well.

If someone publishes results that I think are questionable and I cannot
replicate them, I want to know exactly how those outcomes were obtained in
the first place, so that I can 'debug' the problem. It is quite important
to be able to trace back whether incorrect results were due to a bug,
incompetence, or fraud.

Take the Reinhart and Rogoff case. The results obviously were not
replicable, but without more information it was just the word of a grad
student against two Harvard professors. Only after reproducing the original
analysis was it possible to point out the errors and prove that the
original results were incorrect.
