On Mon, Aug 3, 2009 at 8:42 PM, David Cournapeau <
da...@ar.media.kyoto-u.ac.jp> wrote:
> Hi All,
>
>I (David Cournapeau) and the people at Berkeley (Jarrod Millman,
> Fernando Perez, Matthew Brett) have been in discussion so that I could
> do some funded work on NumPy/SciPy. Although they are
Matthew Brett wrote:
> Hi,
>
> We are using numpy.distutils, and have run into this odd behavior on Windows:
>
> I have XP, MinGW, the latest NumPy SVN, and python.org Python 2.6. I am
> running all the commands below from within the NumPy source root directory
> (the one that has 'numpy' as a subdirectory).
>
> If I run
Hi All,
I (David Cournapeau) and the people at Berkeley (Jarrod Millman,
Fernando Perez, Matthew Brett) have been in discussion so that I could
do some funded work on NumPy/SciPy. Although they are obviously
interested in improvements that help their own projects, they are
willing to make sure
Hi,
We are using numpy.distutils, and have run into this odd behavior on Windows:
I have XP, MinGW, the latest NumPy SVN, and python.org Python 2.6. I am running
all the commands below from within the NumPy source root directory (the one
that has 'numpy' as a subdirectory).
If I run
python setup.py build
I get the f
Raymond de Vries wrote:
> Oops, I guess I didn't express myself clearly enough: I have used the plain
> Python C API (in my case a list of lists for my 2-dimensional arrays)
> for my typemaps. Sorry for being unclear. Actually, that's because NumPy is
> not my cup of tea...
Well, for almost any purpose
On Mon, Aug 03, 2009 at 02:26:17PM -0700, David Goldsmith wrote:
> Please remind me: BoF = ?
http://conference.scipy.org/bofs
G.
Please remind me: BoF = ?
DG
--- On Mon, 8/3/09, Chris Kees wrote:
> From: Chris Kees
> Subject: [Numpy-discussion] PDE BoF at SciPy2009
> To: "Discussion of Numerical Python"
> Date: Monday, August 3, 2009, 12:57 PM
> Is there any interest in a BoF
> session on implementing numerical
> metho
Hi Chris,
>> Thanks for the explanation. After looking at the documentation, I
>> decided to do my own plain Python C API implementation.
>>
>
> That is unlikely to be the best option these days -- it's simply too
> easy to make a type-checking or reference-counting error.
>
> If
Is there any interest in a BoF session on implementing numerical
methods for partial differential equations using modules like numpy,
cython, mpi4py, etc.?
Regards,
Chris
Raymond de Vries wrote:
> Thanks for the explanation. After looking at the documentation, I
> decided to do my own plain Python C API implementation.
That is unlikely to be the best option these days -- it's simply too
easy to make a type-checking or reference-counting error.
If SWIG
On 08/03/2009 12:51 PM, Andrew Friedley wrote:
> Charles R Harris wrote:
>> What compiler versions are folks using? In the slow cases, what is the
>> timing for converting to double, computing the sin, then casting back to
>> single?
> I did this, is this the right way to do that?
> t = timeit.Tim
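For what it's worth, a couple of generic checks (not part of the original thread) help answer the compiler/build question:

import numpy
import platform

numpy.show_config()                 # the BLAS/LAPACK setup NumPy was built against
print(numpy.__version__)            # NumPy version under test
print(platform.python_compiler())   # compiler used to build the running Python, e.g. 'GCC 4.3.2'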
Charles R Harris wrote:
> What compiler versions are folks using? In the slow cases, what is the
> timing for converting to double, computing the sin, then casting back to
> single?
I did this, is this the right way to do that?
t = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.flo
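For reference, a complete version of that measurement might look like the sketch below; the array matches the one used elsewhere in the thread, but the repeat counts are just illustrative:

import timeit

setup = ("import numpy; "
         "a = numpy.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=numpy.float32)")

# sin computed directly on the float32 array
t32 = timeit.Timer("numpy.sin(a)", setup)

# up-cast to float64, compute sin, cast the result back to float32
t64 = timeit.Timer("numpy.sin(a.astype(numpy.float64)).astype(numpy.float32)", setup)

print("float32 direct : %.6f s" % min(t32.repeat(3, 1000)))
print("via float64    : %.6f s" % min(t64.repeat(3, 1000)))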
David Cournapeau wrote:
>> David Cournapeau wrote:
>>> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley
>>> wrote:
While working on GSoC stuff I came across this weird performance behavior
for sine and cosine -- using float32 is way slower than float64. On a 2 GHz
Opteron:
On Mon, Aug 3, 2009 at 10:23 AM, Chris Colbert wrote:
> I get similar results to the OP:
>
>
> In [1]: import numpy as np
>
> In [2]: a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)
>
> In [3]: b = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)
>
> In [4]: %timeit -n 10
On Mon, Aug 03, 2009 at 08:17:21AM -0700, Keith Goodman wrote:
> On Mon, Aug 3, 2009 at 7:21 AM, Emmanuelle
> Gouillart wrote:
> >> import numpy as np
> >> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
> >> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.fl
I get similar results to the OP:
In [1]: import numpy as np
In [2]: a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)
In [3]: b = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)
In [4]: %timeit -n 10 np.sin(a)
10 loops, best of 3: 63.8 ms per loop
In [5]: %timeit -n 10
On Mon, Aug 3, 2009 at 7:21 AM, Emmanuelle
Gouillart wrote:
>> import numpy as np
>> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
>> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
>> %timeit -n 10 np.sin(a)
>> > 10 loops, best of 3: 8.67 ms
On Mon, Aug 3, 2009 at 11:08 PM, Andrew Friedley wrote:
> Thanks for the quick responses.
>
> David Cournapeau wrote:
>> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
>>> While working on GSoC stuff I came across this weird performance behavior
>>> for sine and cosine -- using float32 is
> import numpy as np
> a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
> b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
> %timeit -n 10 np.sin(a)
> > 10 loops, best of 3: 8.67 ms per loop
> %timeit -n 10 np.sin(b)
> > 10 loops, best of 3:
Emmanuelle Gouillart wrote:
> Hi Andrew,
>
> %timeit is an IPython magic command that uses the timeit module,
> see
> http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
> for more information about how to use it. So you were right to suppose
> that it
Thanks for the quick responses.
David Cournapeau wrote:
> On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
>> While working on GSoC stuff I came across this weird performance behavior
>> for sine and cosine -- using float32 is way slower than float64. On a 2 GHz
>> Opteron:
>>
>> sin float3
Hi Andrew,
%timeit is an IPython magic command that uses the timeit module,
see
http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
for more information about how to use it. So you were right to suppose
that it is not a "normal Python".
Ho
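As a rough illustration of the relationship (reusing the array from earlier in the thread), the IPython magic and the standard-library timeit module measure the same thing:

# inside IPython: run np.sin(a) 10 times per loop, best of 3 loops
%timeit -n 10 -r 3 np.sin(a)

# roughly the same measurement with the plain timeit module
import timeit
best = min(timeit.repeat(
    "np.sin(a)",
    setup="import numpy as np; "
          "a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)",
    repeat=3, number=10))
print("best of 3: %.2f ms per loop" % (best / 10 * 1000))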
On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley wrote:
> While working on GSoC stuff I came across this weird performance behavior
> for sine and cosine -- using float32 is way slower than float64. On a 2 GHz
> Opteron:
>
> sin float32 1.12447786331
> sin float64 0.133481025696
> cos float32 1.141
Gael Varoquaux wrote:
> On Sun, Jul 05, 2009 at 02:47:18PM -0400, Andrew Friedley wrote:
>> Stéfan van der Walt wrote:
>>> 2009/7/5 Andrew Friedley :
I found the check that does the type 'upcasting' in
umath_ufunc_object.inc around line 3072 (NumPy 1.3.0). Turns out all I
need to do
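As a side note (not from the original message), the inner loops a ufunc provides, and the dtype it hands back, can be inspected from Python, which is handy when chasing this kind of upcasting question:

import numpy as np

print(np.sin.types)                                # e.g. ['f->f', 'd->d', ...]: the loops sin can dispatch to
print(np.sin(np.ones(3, dtype=np.float32)).dtype)  # float32 in, float32 out in the result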
While working on GSoC stuff I came across this weird performance
behavior for sine and cosine -- using float32 is way slower than
float64. On a 2 GHz Opteron:
sin float32 1.12447786331
sin float64 0.133481025696
cos float32 1.14155912399
cos float64 0.131420135498
The times are in seconds, and
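A minimal sketch of the kind of loop that produces numbers in that format (the repeat count here is an assumption; the original benchmark script is not shown in the preview):

import time
import numpy as np

for dtype in (np.float32, np.float64):
    a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=dtype)
    start = time.time()
    for _ in range(1000):   # repeat enough times to get a measurable wall-clock figure
        np.sin(a)
    print("sin %s %s" % (dtype.__name__, time.time() - start))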
On Mon, Aug 3, 2009 at 6:32 PM, Steven Coutts wrote:
> David Cournapeau wrote:
>
>>
>> Do you mean that if you build with debug information, everything else
>> being equal, you cannot reproduce the crashes ?
>>
>> cheers,
>>
>> David
>
> That does appear to be the case; SciPy 1.7.0 is now also runn
David Cournapeau wrote:
>
> Do you mean that if you build with debug information, everything else
> being equal, you cannot reproduce the crashes ?
>
> cheers,
>
> David
That does appear to be the case; SciPy 1.7.0 is now also running fine.
Regards
Steven Coutts wrote:
>
>
> Sorry, ignore this; I cleaned out NumPy properly, re-installed 1.3.0, and the
> tests are all running now.
>
Do you mean that if you build with debug information, everything else
being equal, you cannot reproduce the crashes ?
cheers,
David
Steven Coutts couttsnet.com> writes:
>
> OK, I have rebuilt numpy-1.3.0 with debugging, and it segfaults as soon as I
> import numpy in Python 2.5
>
> Backtrace -:
> http://pastebin.com/d27fbd2a5
>
> Regards
>
Sorry, ignore this; I cleaned out NumPy properly, re-installed 1.3.0, and the
tests
David Cournapeau gmail.com> writes:
>
> I forgot: since you can reproduce the bug, another thing that would be helpful
> is to build a debug version of numpy (python setup.py build_ext -g) and
> reproduce the bug under gdb to get a traceback.
>
> David
Ok I have rebuilt numpy-1.3.0 wit
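For anyone following along, the procedure being suggested looks roughly like this (the in-place flag and the python2.5 binary name are assumptions based on the setup described above):

python setup.py build_ext -g -i
gdb --args python2.5 -c "import numpy"
(gdb) run
(gdb) bt

Here 'run' reproduces the segfault under the debugger and 'bt' prints the backtrace that can be posted to the list.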