sigh; yet another email dropped by the list.
David Warde-Farley wrote:
> On 21-Oct-09, at 9:14 AM, Pauli Virtanen wrote:
>
>> Since these are ufuncs, I suppose the SSE implementations could just be
>> put in a separate module, which is always compiled. Before importing the
>> module, we could check at runtime whether the CPU supports SSE.
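A minimal sketch of that idea in Python (the module name _umath_sse and the /proc/cpuinfo check are hypothetical, for illustration only):

    # Sketch only: pick the SSE build of the ufuncs when the CPU supports it.
    def _cpu_has_sse2():
        # Assumption: Linux; a real implementation would use cpuid from C.
        try:
            with open("/proc/cpuinfo") as f:
                return "sse2" in f.read()
        except IOError:
            return False

    if _cpu_has_sse2():
        from numpy.core import _umath_sse as _umath  # hypothetical SSE module
    else:
        from numpy.core import umath as _umath       # plain C fallback

    sin, cos = _umath.sin, _umath.cos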
Is anyone with this problem *not* running Ubuntu?
Me - RHEL 5.2 opteron:
Python 2.6.1 (r261:67515, Jan 5 2009, 10:19:01)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Fedora 9 PS3/PPC:
Python 2.5.1 (r251:54863, Jul 17 2008, 13:25:23)
[GCC 4.3.1 20080708 (Red Hat 4.3.1-4)] on linux2
A
Charles R Harris wrote:
> Depends on the CPU, FPU and the compiler flags. The computations could very
> well be done using double precision internally with conversions on
> load/store.
Sure, but if this is the case, why is the performance blowing up on
larger input values for float32 but not float64?
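One way to see whether argument magnitude is the trigger (the array contents below are arbitrary, chosen only to contrast small and large inputs):

    import timeit
    import numpy as np

    # Same dtype and length, different magnitudes: some libm sinf
    # implementations take a slow argument-reduction path for large inputs.
    small = np.linspace(0, 2 * np.pi, 100000).astype(np.float32)
    large = small + np.float32(10000.0)

    for name, a in (("small", small), ("large", large)):
        print(name, timeit.timeit(lambda: np.sin(a), number=100))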
David Cournapeau wrote:
> On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote:
>
>> Do you know where this conversion is, in the code? The impression I got
>> from my quick look at the code was that a wrapper sinf was defined that
>> just calls sin. I guess the ty
Charles R Harris wrote:
> On Mon, Aug 3, 2009 at 11:51 AM, Andrew Friedley wrote:
>
>> Charles R Harris wrote:
>>> What compiler versions are folks using? In the slow cases, what is the
>>> timing for converting to double, computing the sin, then casting back to
>>> single?
Bruce Southey wrote:
> Hi,
> Can you try these from the command line:
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000,
> (2*3.14159) / 1000, dtype=np.float32)" "np.sin(a)"
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000,
> (2*3.14159) / 1000, dtype=np.float64)" "np.sin(a)"
Charles R Harris wrote:
> What compiler versions are folks using? In the slow cases, what is the
> timing for converting to double, computing the sin, then casting back to
> single?
I did this; is this the right way to do it?
t = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)")
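For reference, a self-contained version of that measurement (the setup array is an assumption, borrowed from Bruce's commands above):

    import timeit

    # Time sin on float32 data via a round-trip through float64.
    setup = ("import numpy; "
             "a = numpy.arange(0.0, 1000, (2*3.14159)/1000, dtype=numpy.float32)")
    t = timeit.Timer(
        "numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)", setup)
    print(min(t.repeat(3, number=100)))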
David Cournapeau wrote:
>> David Cournapeau wrote:
>>> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley
>>> wrote:
>>>> While working on GSoC stuff I came across this weird performance behavior
>>>> for sine and cosine -- using float32 is way slower than float64.
Emmanuelle Gouillart wrote:
> Hi Andrew,
>
> %timeit is an IPython magic command that uses the timeit module,
> see
> http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
> for more information about how to use it. So you were right to suppose
> that it
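For example, inside an IPython session:

    In [1]: import numpy as np
    In [2]: a = np.arange(0.0, 1000, (2*3.14159)/1000, dtype=np.float32)
    In [3]: %timeit np.sin(a)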
Thanks for the quick responses.
David Cournapeau wrote:
> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
>> While working on GSoC stuff I came across this weird performance behavior
>> for sine and cosine -- using float32 is way slower than float64. On a 2 GHz
>> Opteron:
Gael Varoquaux wrote:
> On Sun, Jul 05, 2009 at 02:47:18PM -0400, Andrew Friedley wrote:
>> Stéfan van der Walt wrote:
>>> 2009/7/5 Andrew Friedley :
>>>> I found the check that does the type 'upcasting' in
>>>> umath_ufunc_object.inc around line 3072 (NumPy 1.3.0).
While working on GSoC stuff I came across this weird performance
behavior for sine and cosine -- using float32 is way slower than
float64. On a 2 GHz Opteron:
sin float32 1.12447786331
sin float64 0.133481025696
cos float32 1.14155912399
cos float64 0.131420135498
The times are in seconds, and
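A small script in the same spirit reproduces the comparison (the array contents and repeat count are assumptions; the originals are cut off above):

    import timeit
    import numpy as np

    # Compare sin/cos throughput for float32 vs. float64 inputs.
    for dtype in (np.float32, np.float64):
        a = np.arange(0.0, 1000, (2 * 3.14159) / 1000).astype(dtype)
        for func in (np.sin, np.cos):
            t = timeit.timeit(lambda: func(a), number=1000)
            print(func.__name__, dtype.__name__, t)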
Stéfan van der Walt wrote:
> 2009/7/5 Andrew Friedley :
>> I found the check that does the type 'upcasting' in
>> umath_ufunc_object.inc around line 3072 (NumPy 1.3.0). Turns out all I
>> need to do is make sure my add and multiply ufuncs are actually named
>> 'add' and 'multiply'; and arrays will be upcast appropriately.
Maybe this is worth documenting somewhere, maybe in the UFunc C API? Or
is it documented already, and I just missed it?
Andrew
Andrew Friedley wrote:
> Hi,
>
> I'm trying to understand how integer types are upcast for add/multiply
> operations for my GSoC project (Implementing Ufuncs using CorePy).
Hi,
I'm trying to understand how integer types are upcast for add/multiply
operations for my GSoC project (Implementing Ufuncs using CorePy).
The documentation says that for reduction with add/multiply operations,
integer types are 'upcast' to the int_ type (int64 on my system). What
exactly
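The documented behavior is easy to check from Python:

    import numpy as np

    a = np.ones(10, dtype=np.int8)
    # add/multiply reductions upcast small integer types to the default
    # integer type int_ (int64 on many 64-bit systems)...
    print(np.add.reduce(a).dtype)       # int64 on a 64-bit Linux box
    print(np.multiply.reduce(a).dtype)  # likewise
    # ...while the element-wise operations keep the input type.
    print((a + a).dtype)                # int8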
David Cournapeau wrote:
> Francesc Alted wrote:
>> No, that seems good enough. But maybe you can present results in
>> cycles/item. This is a relatively common unit and has the advantage
>> that it does not depend on the frequency of your cores.
Sure, cycles is fine, but I'll argue th
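The conversion itself is simple (the clock frequency and example numbers below are placeholders):

    # cycles/item = seconds * clock_rate / number_of_items
    clock_hz = 2.0e9      # 2 GHz core, as an example
    n_items = 100000      # array length
    seconds = 0.0123      # a measured runtime (placeholder)
    print(seconds * clock_hz / n_items, "cycles/item")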
David Cournapeau wrote:
> Francesc Alted wrote:
>> Well, it is Andrew who should demonstrate that his measurement is
>> correct, but in principle, 4 cycles/item *should* be feasible when
>> using 8 cores in parallel.
>
> But the 100x speed increase is for one core only unless I misread the
>
For some reason the list seems to occasionally drop my messages...
Francesc Alted wrote:
> On Friday 22 May 2009 at 13:52:46, Andrew Friedley wrote:
>> I'm the student doing the project. I have a blog here, which contains
>> some initial performance numbers for a couple test ufuncs I did:
Francesc Alted wrote:
> On Friday 22 May 2009 at 11:42:56, Gregor Thalhammer wrote:
>> dmitrey wrote:
>> 3) Improving performance by using multiple cores is much more difficult.
>> A significant speedup is possible only for sufficiently large arrays
>> (>1e5 elements). Where a speed gain is possible, the MKL
(sending again)
Hi,
I'm the student doing the project. I have a blog here, which contains
some initial performance numbers for a couple test ufuncs I did:
http://numcorepy.blogspot.com
It's really too early yet to give definitive results though; GSoC
officially starts in two days :) What I'