On Wed, Aug 23, 2017 at 10:11:50AM +0200, Tomas Härdin wrote:
> On 2017-08-22 03:23, Tyler Jones wrote:
> > +
> > +/**
> > + * Calculate the variance of a block of samples
> > + *
> > + * @param in Array of input samples
> > + * @param length Number of input samples being analyzed
> > + * @param fdsp   Float DSP context used for the dot product
> > + * @return The variance for the current block
> > + */
> > +static float variance(const float *in, int length, AVFloatDSPContext *fdsp)
> > +{
> > + int i;
> > + float mean = 0.0f, square_sum = 0.0f;
> > +
> > + for (i = 0; i < length; i++) {
> > + mean += in[i];
> > + }
> > +
> > + square_sum = fdsp->scalarproduct_float(in, in, length);
> > +
> > + mean /= length;
> > + return (square_sum - length * mean * mean) / (length - 1);
> > +}
>
> Isn't this method much more numerically unstable compared to the naïve
> method? Might not matter too much when the source data is 16-bit, but
> throwing it out there anyway

This does have the possibility of being more unstable than the naive
version. However, I have not been able to find a sample file where it is
even close to influential. The epsilon constant added during comparison
between variances has a much greater impact. A quick run of the same
samples through Python was able to verify this.

> DSP methods for computing mean and variance could be a good project for
> someone wanting to learn
>
> /Tomas

I am unsure of how many codecs use direct calculation of statistical
values. Perhaps someone with more experience than myself could comment
on the usefulness of such methods.

I appreciate your comments,

Tyler Jones
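P.S. For reference, a quick sketch of the kind of Python comparison
described above. The data here is synthetic (random values scaled to the
16-bit range), not the actual sample files, so it only illustrates the
check, not the measured result:

```python
import random

def variance_two_pass(xs):
    # Naive/stable method: subtract the mean first, then square.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def variance_sum_of_squares(xs):
    # Formula from the patch: one dot product plus a running sum,
    # variance = (sum(x^2) - n * mean^2) / (n - 1).
    n = len(xs)
    mean = sum(xs) / n
    square_sum = sum(x * x for x in xs)
    return (square_sum - n * mean * mean) / (n - 1)

random.seed(0)
# Samples normalized to [-1, 1), as 16-bit audio decoded to float would be.
samples = [random.randint(-32768, 32767) / 32768.0 for _ in range(1024)]

v1 = variance_two_pass(samples)
v2 = variance_sum_of_squares(samples)
print(abs(v1 - v2))  # difference is far below any reasonable epsilon
```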
_______________________________________________
ffmpeg-devel mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
