Well, clearly any conclusions about the significance of outlier input
samples on codec performance must be tempered by the likelihood of these
outlier samples occurring in real life, and simple averaging may not be a
useful metric. I'm assuming that this will be taken into account in the
Qualinet statistical analysis mentioned by Christian.
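The caveat about simple averaging can be sketched numerically. A minimal illustration (all scores below are hypothetical, not from any actual Opus test material):

```python
import statistics

# Hypothetical per-sample quality scores (MOS-like, 1-5) for a codec.
# Most material scores well; a few deliberately chosen "hard" samples do not.
typical_scores = [4.2, 4.1, 4.3, 4.0, 4.2, 4.1, 4.3, 4.2]
outlier_scores = [2.1, 1.8, 2.4]  # hand-picked worst cases

outlier_mean = statistics.mean(outlier_scores)                  # 2.1
overall_mean = statistics.mean(typical_scores + outlier_scores)  # ~3.61

# An average dominated by hand-picked outliers badly misrepresents
# the quality a typical user would experience.
print(round(outlier_mean, 2), round(overall_mean, 2))
```

The gap between the two means is the whole point: unless the test corpus is weighted by how often such samples occur in real use, the average says little about typical quality.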

....Paul

>-----Original Message-----
>From: Jean-Marc Valin [mailto:[email protected]]
>Sent: Tuesday, April 23, 2013 5:47 PM
>To: Paul Coverdale
>Cc: [email protected]; [email protected]
>Subject: Re: [codec] Audio tests: Further steps
>
>-----BEGIN PGP SIGNED MESSAGE-----
>Hash: SHA1
>
>On 04/23/2013 05:34 PM, Paul Coverdale wrote:
>> I don't know why you're pouring scorn on this exercise, Ron. It seems
>> to me that it is a bona-fide attempt to understand the strengths and
>> weaknesses of the Opus codec in a controlled, unbiased manner, which
>> is what a characterisation test should do. It should have been done as
>> the IETF codec WG activity, but better late than never.
>
>It's indeed a way to see what the strengths and weaknesses of a codec
>are. I think what Ron mostly meant is that any *average* you compute on
>such a test would not be representative of which codec is better than
>the other. Essentially, carefully picking outliers is the worst form of
>sampling you can have. It's useful for developers (knowing what to focus
>on assuming you don't already know), but not for making general quality
>conclusions.
>
>       Jean-Marc


_______________________________________________
codec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/codec
