Hi everybody,

First of all, thanks for pygsl.
I need to perform linear interpolation over a large array (on the order of 
millions of points). I have tried scipy, but its linear interpolation seems to 
have no "accelerator", that is, no way to remember the index found by the 
previous search so that, when the next point to interpolate is close to the 
previous one, a complete search can be avoided at each interpolation.
But I know that GSL has such an accelerator, which is why I have looked at 
pygsl.
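(For reference, GSL's accelerator boils down to caching the index of the last interval and checking it before falling back to a binary search. A minimal pure-Python sketch of the idea; the class and names are mine, not GSL's:)

```python
import bisect

class InterpAccel(object):
    """Caches the last interval index, GSL-accelerator style (sketch)."""
    def __init__(self):
        self.index = 0
        self.hits = 0     # cache hits, for illustration only
        self.misses = 0   # full binary searches

    def find(self, xs, x):
        i = self.index
        # Fast path: x still lies in the cached interval [xs[i], xs[i+1]).
        if i + 1 < len(xs) and xs[i] <= x < xs[i + 1]:
            self.hits += 1
            return i
        # Slow path: full binary search, then remember the result.
        self.misses += 1
        self.index = max(bisect.bisect_right(xs, x) - 1, 0)
        return self.index

def linear_eval(xs, ys, x, acc):
    """Linear interpolation at x, using the accelerator for the lookup."""
    i = acc.find(xs, x)
    dx = xs[i + 1] - xs[i]
    return ys[i] + (x - xs[i]) / dx * (ys[i + 1] - ys[i])
```

Consecutive evaluations of nearby points then hit the cached index and skip the search entirely, which is exactly the behaviour the accelerator is supposed to buy.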
Here is a small program:

######################
import pygsl.interpolation
from time import time
import numpy

t_initial = time()

# Float arrays: GSL interpolates C doubles.
a = numpy.arange(0.0, 10000000.0)
b = numpy.arange(10000000.0, 20000000.0)

c = pygsl.interpolation.linear(len(a))
c.init(a, b)

for i in range(0, 50):
   print "interpolation=", c.eval(50000.0)
t_final = time()

print "passed_time=", t_final-t_initial
#####################

The execution of this program takes about 15 seconds on my machine. I do not 
understand why it takes so long. I have checked (by instrumenting the C code) 
that the accelerator is indeed used by GSL.
So where is the time spent?
I do not think the time can be spent in the only code executed inside GSL:

  x_lo = x_array[index];
  x_hi = x_array[index + 1];
  y_lo = y_array[index];
  y_hi = y_array[index + 1];
  dx = x_hi - x_lo;
  if (dx > 0.0)
    {
      *y = y_lo + (x - x_lo) / dx * (y_hi - y_lo);
      return GSL_SUCCESS;
    }

because the index is known after the first interpolation: from then on, the 
accelerator is used.

So where is the time spent? I have tried to profile the program:

$ /usr/lib/python2.5/profile.py     test_pygsl.py

The profiler output is a bit difficult for me to analyse, but I obtain the 
following line:

50   13.109    0.262   13.109    0.262 gslwrap.py:1833(gsl_interp_eval)

That is 13.109 s total for 50 calls, i.e. about 0.26 s per call.
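As an aside, if the real goal is throughput over many query points rather than the accelerator itself, numpy.interp evaluates a whole array of points in a single C-level call, which sidesteps any per-call wrapper overhead (it redoes the search for each point, but in C):

```python
import numpy

# Same data as in the pygsl test above, as float arrays.
a = numpy.arange(0.0, 10000000.0)
b = numpy.arange(10000000.0, 20000000.0)

# One vectorised call interpolates all 50 query points at once.
xq = numpy.full(50, 50000.0)
yq = numpy.interp(xq, a, b)
```

Here each query falls exactly on a grid point, so yq is 50 copies of 10050000.0.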

#### COMPARISON with native C/GSL

I have tried the same test written directly in C against GSL
(to be compiled with:
 gcc -g -Wall test.c -o test -lgsl -lgslcblas -lm)

#######################
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_interp.h>

int main (void)
{
   int i;
   int size = 10000000;
   double xi = 50000, yi;
   double * x;
   double * y;

   x = malloc( sizeof * x * size );
   y = malloc( sizeof * y * size );
   if ( ! x || ! y )
   {
      exit(1);
   }

   for ( i = 0; i < size; i++ )
   {
      x[i] = i;
      y[i] = i + size;
   }

   gsl_interp_accel *acc
      = gsl_interp_accel_alloc ();
   gsl_interp * linear
      = gsl_interp_alloc ( gsl_interp_linear, size );

   gsl_interp_init( linear, x, y, size );

   for ( i = 0; i < 50; i++ )
   {
      yi = gsl_interp_eval (linear, x, y, xi, acc);
      printf ("interpolation=%.1f\n", yi);
   }
   gsl_interp_free( linear );
   gsl_interp_accel_free( acc );
   return 0;
}
##################

The execution is instantaneous. So where does the time go in the Python 
version?
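One hypothesis worth testing (a guess about the wrapper, not something verified against the pygsl source) is that each eval call pays a cost proportional to the array length, for example a per-call conversion or copy of the two 10-million-element arrays. The copying cost alone is easy to measure:

```python
import numpy
from time import time

a = numpy.arange(0.0, 10000000.0)
b = numpy.arange(10000000.0, 20000000.0)

# Simulate a wrapper that copies both input arrays on every eval call.
t0 = time()
for i in range(50):
    acopy = numpy.array(a)
    bcopy = numpy.array(b)
elapsed = time() - t0
```

If elapsed is on the same scale as the 13 seconds reported by the profiler, per-call array handling in the wrapper would be a plausible culprit.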

Thanks a lot

Julien

_______________________________________________
pygsl-discuss mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/pygsl-discuss
