Perhaps you can do something along the following lines to get around
this limitation:
# Parameterize the original function by delta and size: index i is
# mapped to the coordinate (i - center) * delta, so the grid is
# centered on the middle index.
def parameterized_function(delta, size, function):
    center = (size - 1) / 2.0
    return lambda i: function((i - center) * delta)
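A minimal usage sketch (the grid parameters here are made up for illustration): sampling math.sin on a 5-point grid with spacing 0.5, so the middle index maps to x = 0.

```python
import math

def parameterized_function(delta, size, function):
    center = (size - 1) / 2.0
    return lambda i: function((i - center) * delta)

# Indices 0..4 map to x = -1.0, -0.5, 0.0, 0.5, 1.0.
f = parameterized_function(0.5, 5, math.sin)
samples = [f(i) for i in range(5)]
```

Calling f(i) evaluates the underlying function at the shifted, scaled coordinate, so the same wrapped function works for any index range.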
The problem, as I understand it, is this:
you have a large array and you want to define objects that (1) behave
like arrays; (2) are derived from the large array (can be computed
from it); (3) should not take much space if only small portions of the
large array are ever referenced.
A simple solut
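One way to sketch requirements (1)-(3) — purely an illustration, not the poster's actual code; the class name and caching scheme are invented here — is a lazy view that computes elements on demand and stores only the ones actually referenced:

```python
class LazyView:
    """Hypothetical sketch: acts like an array derived element-wise
    from a large base array, but computes and caches only the
    entries that are actually accessed."""

    def __init__(self, base, func):
        self.base = base    # the large source array (any indexable)
        self.func = func    # element-wise derivation rule
        self._cache = {}    # holds only the referenced entries

    def __getitem__(self, i):
        if i not in self._cache:
            self._cache[i] = self.func(self.base[i])
        return self._cache[i]

    def __len__(self):
        return len(self.base)

big = list(range(1_000_000))
view = LazyView(big, lambda x: x * x)
value = view[10]   # only this one entry is computed and stored
```

If only a small portion of the large array is ever referenced, the cache stays correspondingly small.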
My understanding of the root of the problem is that you end up
doing many evaluations of sinc. If this is so, one suggestion is to go
with pre-computed filters. For example, if you are resampling from 9
points to 10, essentially you're trying to go from a function that is
defined on points 0,
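A minimal sketch of the pre-computed-filter idea, assuming sinc (Whittaker-Shannon) interpolation is what's intended: build the 10x9 weight matrix once, then every resampling is a single matrix-vector product instead of fresh sinc evaluations.

```python
import numpy as np

# Resample a signal defined on 9 points to 10 points spanning the
# same interval, using a pre-computed sinc filter matrix.
n_in, n_out = 9, 10
x_in = np.arange(n_in)                    # source positions 0..8
x_out = np.linspace(0, n_in - 1, n_out)   # target positions, same span

# weights[j, k] = sinc(x_out[j] - x_in[k]); np.sinc is sin(pi x)/(pi x),
# so at integer offsets the matrix reproduces source samples exactly.
weights = np.sinc(x_out[:, None] - x_in[None, :])

signal = np.sin(2 * np.pi * x_in / n_in)
resampled = weights @ signal              # one dot product per output point
```

The expensive sinc evaluations happen once, when building weights; after that, any number of signals on the same grid can be resampled cheaply.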
Hello everyone!
I wonder if support for generic N-dimensional arrays has been added to
NumPy since January 2003 (the last time this question was asked in
this newsgroup). If not, is there some interest in trying to add this
data structure to NumPy? It may not seem very useful for scientific
computing