Martin Ünsal <...@gmail.com> writes:
>
> I was wondering if anyone has thought about accelerating NumPy with a
> GPU. For example nVidia's CUDA SDK provides a feasible way to offload
> vector math onto the very fast SIMD processors available on the GPU.
> Currently GPUs primarily support single precision.
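To make the single-precision point concrete: the vector math that would be offloaded is NumPy's ordinary elementwise arithmetic, and a CUDA backend would effectively be limited to float32 arrays. A trivial CPU-only sketch (the array names are made up):

    import numpy as np

    # Elementwise vector math of the kind a CUDA backend could take over.
    # Everything stays float32, since current GPUs are single precision only.
    x = np.linspace(0.0, 1.0, 1000000).astype(np.float32)
    y = np.linspace(1.0, 2.0, 1000000).astype(np.float32)

    z = x * y + 0.5    # a multiply and an add per element, done on the CPU here
    print(z.dtype)     # float32 -- the precision a GPU version would have to live with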
[...]'s lab computers. On
my home desktop, which has Ubuntu Feisty installed, using the Feisty
repository's python-numpy package and gfortran, the same Fortran code
compiles fine with f2py. Any ideas what the problem is?
Thanks,
Andrew Corrigan
Good point! I think I will. Thanks a lot.
Anne Archibald <...@gmail.com> writes:
> Vectorizing apply is what you're looking for, by the sound of it:
> In [13]: a = array([lambda x: x**2, lambda x: x**3])
>
> In [14]: b = arange(5)
>
> In [15]: va = vectorize(lambda f, x: f(x))
>
> In [16]: va(a[:,newaxis],b[newaxis,:])
> Out[16]:
> array([[ 0,  1,  4,  9, 16],
>        [ 0,  1,  8, 27, 64]])
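For anyone who wants to paste this outside IPython, here is the same trick as a self-contained script; nothing beyond plain NumPy is assumed, and the names match Anne's session:

    import numpy as np

    # An object array of callables and an ordinary array of arguments.
    a = np.array([lambda x: x**2, lambda x: x**3])
    b = np.arange(5)

    # vectorize() broadcasts over both inputs and calls f(x) for every
    # (function, argument) pair produced by the outer broadcast below.
    va = np.vectorize(lambda f, x: f(x))

    print(va(a[:, np.newaxis], b[np.newaxis, :]))
    # [[ 0  1  4  9 16]
    #  [ 0  1  8 27 64]]

Note that vectorize() keeps the loop at the Python level, so this is a notational convenience rather than a speedup.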
Robert Kern <...@gmail.com> writes:
>
> Shane Holloway wrote:
> > To the vector-processing masters of numpy!
> >
> > I'm wanting to optimize calling a list (or array) of callable
> > objects. Consider the following:
> >
> > vCallables = numpy.array([<instances of callable classes, builtin functions>])
> > vParam1
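Shane's message is truncated above, but the shape of the problem -- one array of callables plus parallel arrays of parameters, each callable applied to its own parameters -- can be sketched as below. The concrete values, and the choice of frompyfunc, are my own guesses at what was intended, not anything from Robert's reply:

    import numpy as np

    # Hypothetical stand-ins for Shane's vCallables / vParam1 / ...
    vCallables = np.array([min, max, pow])   # any callables: builtins, classes, lambdas
    vParam1 = np.array([3, 7, 2])
    vParam2 = np.array([5, 4, 10])

    # frompyfunc wraps "call f on x and y" as a ufunc, so it broadcasts
    # elementwise across all three arrays and returns an object array.
    call_each = np.frompyfunc(lambda f, x, y: f(x, y), 3, 1)

    print(call_each(vCallables, vParam1, vParam2))
    # [3 7 1024]

As with vectorize() above, each element still goes through a Python-level call, so this tidies the bookkeeping but does not remove the per-call overhead.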
I'm confused about the following:
>>> print mgrid[2.45:2.6:0.05, 0:5:1]
[[[ 2.45  2.45  2.45  2.45  2.45]
  [ 2.5   2.5   2.5   2.5   2.5 ]]

 [[ 0.    1.    2.    3.    4.  ]
  [ 0.    1.    2.    3.    4.  ]]]
>>> print mgrid[2.45:2.6:0.05]
[ 2.45 2.5 2.55]
In the first case the first dimension only runs up to 2.5 (two points), but in the second case I get three points, including 2.55. Why are the two results different?
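The discrepancy is almost certainly floating-point round-off in the step count: (2.6 - 2.45)/0.05 is not exactly 3, and the one-slice and tuple-of-slices code paths in mgrid round that fractional count to an integer differently, so one grid gets three points and the other only two. A quick way to see the round-off, plus the usual workaround of giving mgrid a point count (a complex "step") instead of a float step:

    from numpy import mgrid

    # The computed number of steps is just under 3, not exactly 3.0:
    print((2.6 - 2.45) / 0.05)

    # Asking for an explicit number of points sidesteps the rounding;
    # with a complex "step" the endpoint is included:
    print(mgrid[2.45:2.6:4j])
    # [ 2.45  2.5   2.55  2.6 ]

numpy.linspace(2.45, 2.6, 4) gives the same points and is another safe alternative whenever the step is not exactly representable in binary.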