> 2009/8/5 Andrew Friedley :
>
>> Is anyone with this problem *not* running ubuntu?
>
> Me - RHEL 5.2 opteron:
>
> Python 2.6.1 (r261:67515, Jan 5 2009, 10:19:01)
> [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
>
> Fedora 9 PS3/PPC:
>
> Python 2.5.1 (r251:54863, Jul 17 2008, 13:25:23)
> [GCC 4.3.1 20080708 (Red Hat 4.3.1-4)] on linux2
Is anyone with this problem *not* running ubuntu?
Me - RHEL 5.2 opteron:
Python 2.6.1 (r261:67515, Jan 5 2009, 10:19:01)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Fedora 9 PS3/PPC:
Python 2.5.1 (r251:54863, Jul 17 2008, 13:25:23)
[GCC 4.3.1 20080708 (Red Hat 4.3.1-4)] on linux2
A
On Tue, Aug 4, 2009 at 9:42 PM, Charles R Harris wrote:
>
>
> On Tue, Aug 4, 2009 at 7:18 PM, Jochen wrote:
>>
>> Hi all,
>> I see something similar on my system.
>> OK I've just done a test. System is Ubuntu 9.04 AMD64
>> there seems to be a regression for float32 with high values:
>>
>> In [47]: a=np.random.rand(1).astype(np.float32)
Charles R Harris writes:
>
>
> Is anyone with this problem *not* running ubuntu?
>
> Chuck
>
All I can say is that it (surprisingly?) doesn't appear to affect my windoze
(XP) box.
Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]
In [2]: a=np.random.rand(1).astype(np.float32)
On Tue, Aug 4, 2009 at 7:18 PM, Jochen wrote:
> Hi all,
> I see something similar on my system.
> OK I've just done a test. System is Ubuntu 9.04 AMD64
> there seems to be a regression for float32 with high values:
>
> In [47]: a=np.random.rand(1).astype(np.float32)
>
>> In [48]: b=np.random.rand(1).astype(np.float64)
Hi all,
I see something similar on my system.
OK I've just done a test. System is Ubuntu 9.04 AMD64
there seems to be a regression for float32 with high values:
In [47]: a=np.random.rand(1).astype(np.float32)
In [48]: b=np.random.rand(1).astype(np.float64)
In [49]: c=1000*np.random.rand
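Jochen's commands are cut off above; a sketch of the test he describes (array sizes are assumptions), showing that the reported slowdown involves float32 inputs with large magnitudes rather than values in [0, 1):

```python
import numpy as np

# Values in [0, 1) versus values scaled up to ~1000; on affected
# systems only np.sin over the large-magnitude float32 array is slow.
small = np.random.rand(1000).astype(np.float32)
large = (1000 * np.random.rand(1000)).astype(np.float32)

# Both calls return ordinary float32 results; only the timing differs.
ys = np.sin(small)
yl = np.sin(large)
```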
On Tuesday 04 August 2009 19:19:22 Andrew Friedley wrote:
> OK, have some interesting results. First is my array creation was not
> doing what I thought it was. This (what I've been doing) creates an
> array of 159161 elements:
>
> numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32)
Charles R Harris wrote:
> Depends on the CPU, FPU and the compiler flags. The computations could very
> well be done using double precision internally with conversions on
> load/store.
Sure, but if this is the case, why is the performance blowing up on
larger input values for float32 but not float64?
On Tue, Aug 4, 2009 at 11:19 AM, Andrew Friedley wrote:
> David Cournapeau wrote:
> > On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote:
> >
> >> Do you know where this conversion is, in the code? The impression I got
> >> from my quick look at the code was that a wrapper sinf was defined
On Tue, Aug 4, 2009 at 12:19, Andrew Friedley wrote:
> OK, have some interesting results. First is my array creation was not
> doing what I thought it was. This (what I've been doing) creates an
> array of 159161 elements:
>
> numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32)
>
>
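The surprise Andrew describes is worth spelling out: with stop=1000 and a step of (2 * 3.14159) / 1000 ≈ 0.00628, arange yields on the order of 159,000 elements, not 1000. A stdlib-only sketch of the count (the exact figure he reports, 159161, depends on floating-point rounding inside arange):

```python
import math

start, stop = 0.0, 1000
step = (2 * 3.14159) / 1000  # ~0.00628, intended as 2*pi / 1000

# len(numpy.arange(start, stop, step)) is ceil((stop - start) / step)
# up to floating-point rounding -- roughly 159,000 elements here.
n = math.ceil((stop - start) / step)
print(n)
```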
David Cournapeau wrote:
> On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote:
>
>> Do you know where this conversion is, in the code? The impression I got
>> from my quick look at the code was that a wrapper sinf was defined that
>> just calls sin. I guess the typecast to float in there will do the conversion
On Wed, Aug 5, 2009 at 12:14 AM, Andrew Friedley wrote:
> Do you know where this conversion is, in the code? The impression I got
> from my quick look at the code was that a wrapper sinf was defined that
> just calls sin. I guess the typecast to float in there will do the
> conversion
Exact. Gi
Charles R Harris wrote:
> On Mon, Aug 3, 2009 at 11:51 AM, Andrew Friedley wrote:
>
>> Charles R Harris wrote:
>>> What compiler versions are folks using? In the slow cases, what is the
>>> timing for converting to double, computing the sin, then casting back to
>>> single?
>> I did this, is this the right way to do that?
Bruce Southey wrote:
> Hi,
> Can you try these from the command line:
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000,
> (2*3.14159) / 1000, dtype=np.float32)"
> python -m timeit -n 100 -s "import numpy as np; a = np.arange(0.0, 1000,
> (2*3.14159) / 1000, dtype=np.float64)"
On 08/03/2009 12:51 PM, Andrew Friedley wrote:
Charles R Harris wrote:
What compiler versions are folks using? In the slow cases, what is the
timing for converting to double, computing the sin, then casting back to
single?
I did this, is this the right way to do that?
t = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)")
On Mon, Aug 3, 2009 at 11:51 AM, Andrew Friedley wrote:
> Charles R Harris wrote:
> > What compiler versions are folks using? In the slow cases, what is the
> > timing for converting to double, computing the sin, then casting back to
> > single?
>
> I did this, is this the right way to do that?
>
Charles R Harris wrote:
> What compiler versions are folks using? In the slow cases, what is the
> timing for converting to double, computing the sin, then casting back to
> single?
I did this, is this the right way to do that?
t = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)")
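Spelled out, the expression Andrew is timing is the upcast-compute-downcast workaround, where a is assumed to be the float32 array used elsewhere in the thread:

```python
import numpy as np

# The float32 array used throughout the thread.
a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)

# The expression being timed: upcast to float64, take the sine there,
# then cast the result back down to float32.
via_double = np.sin(a.astype(np.float64)).astype(np.float32)

# The direct float32 path it is being compared against.
direct = np.sin(a)
```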
David Cournapeau wrote:
>> David Cournapeau wrote:
>>> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
While working on GSoC stuff I came across this weird performance behavior
for sine and cosine -- using float32 is way slower than float64. On a 2ghz
opteron:
On Mon, Aug 3, 2009 at 10:23 AM, Chris Colbert wrote:
> I get similar results as the OP:
>
>
> In [1]: import numpy as np
>
> In [2]: a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)
>
> In [3]: b = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)
>
> In [4]: %timeit -n 10 np.sin(a)
On Mon, Aug 03, 2009 at 08:17:21AM -0700, Keith Goodman wrote:
> On Mon, Aug 3, 2009 at 7:21 AM, Emmanuelle Gouillart wrote:
> >> import numpy as np
> >> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
> >> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
I get similar results as the OP:
In [1]: import numpy as np
In [2]: a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)
In [3]: b = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)
In [4]: %timeit -n 10 np.sin(a)
10 loops, best of 3: 63.8 ms per loop
In [5]: %timeit -n 10 np.sin(b)
On Mon, Aug 3, 2009 at 7:21 AM, Emmanuelle Gouillart wrote:
>> import numpy as np
>> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
>> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
>> %timeit -n 10 np.sin(a)
>> > 10 loops, best of 3: 8.67 ms per loop
On Mon, Aug 3, 2009 at 11:08 PM, Andrew Friedley wrote:
> Thanks for the quick responses.
>
> David Cournapeau wrote:
>> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
>>> While working on GSoC stuff I came across this weird performance behavior
>>> for sine and cosine -- using float32 is
On Mon, Aug 3, 2009 at 10:21 AM, Emmanuelle Gouillart wrote:
>> import numpy as np
>> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
>> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
>> %timeit -n 10 np.sin(a)
>> > 10 loops, best of 3: 8.67 ms per loop
> import numpy as np
> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
> %timeit -n 10 np.sin(a)
> > 10 loops, best of 3: 8.67 ms per loop
> %timeit -n 10 np.sin(b)
> > 10 loops, best of 3:
Emmanuelle Gouillart wrote:
> Hi Andrew,
>
> %timeit is an IPython magic command that uses the timeit module,
> see
> http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
> for more information about how to use it. So you were right to suppose
> that it is not a "normal Python".
Thanks for the quick responses.
David Cournapeau wrote:
> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
>> While working on GSoC stuff I came across this weird performance behavior
>> for sine and cosine -- using float32 is way slower than float64. On a 2ghz
>> opteron:
>>
>> sin float32 1.12447786331
Hi Andrew,
%timeit is an IPython magic command that uses the timeit module,
see
http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
for more information about how to use it. So you were right to suppose
that it is not a "normal Python".
Ho
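For anyone following along outside IPython, the stdlib timeit module gives the same measurement; a minimal sketch, with math.sin standing in for np.sin so the snippet needs no third-party packages:

```python
import timeit

# Plain-Python equivalent of "%timeit -n 10 <stmt>": run the statement
# 10 times per trial, repeat 3 trials, and report the best trial.
t = timeit.Timer("math.sin(0.5)", setup="import math")
best = min(t.repeat(repeat=3, number=10))
print("10 loops, best of 3: %g s per loop" % (best / 10))
```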
On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
> While working on GSoC stuff I came across this weird performance behavior
> for sine and cosine -- using float32 is way slower than float64. On a 2ghz
> opteron:
>
> sin float32 1.12447786331
> sin float64 0.133481025696
> cos float32 1.14155912399
While working on GSoC stuff I came across this weird performance
behavior for sine and cosine -- using float32 is way slower than
float64. On a 2ghz opteron:
sin float32 1.12447786331
sin float64 0.133481025696
cos float32 1.14155912399
cos float64 0.131420135498
The times are in seconds, and
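The OP's exact benchmark script is not shown; a hedged reconstruction that produces numbers in the same "sin dtype seconds" format is:

```python
import timeit

# Build the same samples as float32 and as float64, then time np.sin
# over each; on affected systems the float32 timing is roughly 10x worse.
setup = ("import numpy as np; "
         "a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.%s)")

times = {}
for dtype in ("float32", "float64"):
    times[dtype] = timeit.timeit("np.sin(a)", setup=setup % dtype, number=10)
    print("sin %s %s" % (dtype, times[dtype]))
```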