On 16 May 2013 19:48, "Jonathan Helmus" wrote:
>
> On 05/16/2013 01:42 PM, Neal Becker wrote:
> > Is there a way to get a traceback instead of just printing the
> > line that triggered the error?
> >
On 17 May 2013 05:19, "Christopher Jordan-Squire" wrote:
>
> I'd been under the impression that the easiest way to get SSE support
> was to have numpy use an optimized blas/lapack. Is that not the case?
Apples and oranges. That's the easiest (only) way to get SSE support for
operations that go through BLAS/LAPACK, …
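To make the distinction concrete: an optimized BLAS/LAPACK only speeds up the
operations that are dispatched to it (dot and the linalg routines), not NumPy's
own element-wise loops. A small sketch for checking what a given build links
against and contrasting the two kinds of operations (array sizes are arbitrary):

```python
import numpy as np

# Print which BLAS/LAPACK libraries this NumPy build was linked against.
np.show_config()

a = np.random.rand(1000, 1000)

# Dispatched to BLAS: benefits from whatever SSE/AVX kernels that library provides.
c = np.dot(a, a)

# Executed by NumPy's own inner loops: an optimized BLAS does not help here,
# which is what the vectorization work discussed in this thread targets.
d = a + a
```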
I'd been under the impression that the easiest way to get SSE support
was to have numpy use an optimized blas/lapack. Is that not the case?
On Thu, May 16, 2013 at 10:42 AM, Julian Taylor wrote:
> Hi,
> I have been experimenting a bit with how applicable SSE vectorization is
> to NumPy.
> In principle the core of NumPy mostly deals with memory-bound operations, …
On Thu, May 16, 2013 at 6:09 PM, Phillip Feldman <phillip.m.feld...@gmail.com> wrote:
> It seems odd that `nanmin` and `nanmax` are in NumPy, while `nanmean` is
> in SciPy.stats. I'd like to propose that a `nanmean` function be added to
> NumPy.
>
> Have no fear. There are already plans for its …
It seems odd that `nanmin` and `nanmax` are in NumPy, while `nanmean` is in
SciPy.stats. I'd like to propose that a `nanmean` function be added to
NumPy.
Phillip
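For context, the proposed behaviour can already be approximated with existing
NumPy functions; a minimal sketch (my own, not the eventual implementation, and
ignoring the all-NaN and empty-slice corner cases):

```python
import numpy as np

def nanmean(a, axis=None):
    """Mean of an array, ignoring NaNs (simple sketch; no special
    handling for slices that are entirely NaN)."""
    a = np.asarray(a, dtype=float)
    mask = np.isnan(a)
    total = np.where(mask, 0.0, a).sum(axis=axis)  # treat NaNs as 0 in the sum
    count = (~mask).sum(axis=axis)                 # count only the non-NaN entries
    return total / count

x = np.array([1.0, np.nan, 3.0])
print(nanmean(x))   # 2.0
```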
On 05/16/2013 01:42 PM, Neal Becker wrote:
> Is there a way to get a traceback instead of just printing the
> line that triggered the error?
>
Is there a way to get a traceback instead of just printing the
line that triggered the error?
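Assuming the message being printed is a NumPy floating-point warning (the
archived snippet does not show the original error), two common ways to turn it
into an exception that carries a full traceback:

```python
import warnings
import numpy as np

# Option 1: make NumPy raise FloatingPointError instead of printing a warning.
np.seterr(all='raise')

# Option 2: escalate any warning into an exception, so it comes with a traceback.
warnings.simplefilter('error')

a = np.array([1.0, 0.0])
b = a / a   # 0.0/0.0 now raises FloatingPointError with a traceback
```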
Hi,
I have been experimenting a bit with how applicable SSE vectorization is
to NumPy.
In principle the core of NumPy mostly deals with memory-bound
operations, but it turns out that on modern machines with large caches you
can still get decent speed-ups.
The experiments are available on this fork:
htt…
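A rough way to see the cache-resident versus memory-bound effect described
above (sizes and the measurement scheme are illustrative assumptions, not
numbers from the fork):

```python
import timeit
import numpy as np

def bench(n, repeat=200):
    a = np.ones(n)
    b = np.ones(n)
    t = timeit.timeit(lambda: a + b, number=repeat)
    # Normalize to time per element so different sizes are comparable.
    return t / (repeat * n)

small = bench(10_000)       # working set fits comfortably in cache
large = bench(10_000_000)   # working set streams from main memory
print("ns/element, cache-resident: %.3f" % (small * 1e9))
print("ns/element, memory-bound:   %.3f" % (large * 1e9))
```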
I looked quickly through the code yesterday and didn't find the reason (I
don't know the code well, which is probably why).
But last night I thought of one possible cause. I found this code twice in
the file core/src/umath/ufunc_object.c:
if (nin == 2 && nout == 1 && dtypes[1]->type_num == NPY_OBJECT) {
Hi everyone,
(this was posted as part of another topic, but since it was unrelated,
I'm reposting as a separate thread)
I've also been having issues with __array_priority__ - the following
code behaves differently for __mul__ and __rmul__:
"""
import numpy as np
class TestClass(object):
d
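The snippet above is cut off in the archive. A minimal reconstruction of the
kind of class that exhibits the __mul__/__rmul__ asymmetry might look like this
(the class body is an assumption, not the original code):

```python
import numpy as np

class TestClass(object):
    # A priority higher than ndarray's default (0.0) asks NumPy to defer
    # to this class's reflected operators instead of broadcasting over it.
    __array_priority__ = 10.0

    def __mul__(self, other):
        return "TestClass.__mul__"

    def __rmul__(self, other):
        return "TestClass.__rmul__"

a = np.arange(3)
t = TestClass()
print(t * a)  # Python calls TestClass.__mul__ directly.
print(a * t)  # Should defer to TestClass.__rmul__ via __array_priority__;
              # the thread reports this direction behaving inconsistently.
```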
On Thu, May 16, 2013 at 1:32 PM, Martin Raspaud wrote:
> On 16/05/13 10:26, Robert Kern wrote:
>
>>> Can anyone give a reasonable explanation?
>>
>> memory_profiler only looks at the amount of memory that the OS has
>> allocated to the Python process. It cannot measure the amount of
>> memory actually given to living objects. …
On 16/05/13 10:26, Robert Kern wrote:
>> Can anyone give a reasonable explanation?
>
> memory_profiler only looks at the amount of memory that the OS has
> allocated to the Python process. It cannot measure the amount of
> memory actually given to living objects. Python does not always return
> memory to the OS when objects are freed, …
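A small illustration of that distinction, assuming psutil is available for
reading the process RSS (the names and sizes here are mine, not from the
thread):

```python
import os
import numpy as np
import psutil  # assumption: psutil is installed

proc = psutil.Process(os.getpid())

def rss_mib():
    # Resident set size as reported by the OS, in MiB.
    return proc.memory_info().rss / 2**20

before = rss_mib()
a = np.ones((2000, 2000))                        # ~30 MiB of float64 data
print("array buffer:  ", a.nbytes / 2**20, "MiB")
print("RSS growth:    ", rss_mib() - before, "MiB")

del a
# The allocator may or may not hand the freed pages back to the OS,
# so RSS does not have to shrink by the same amount.
print("RSS after del: ", rss_mib() - before, "MiB")
```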
On Thu, May 16, 2013 at 8:35 AM, Martin Raspaud wrote:
> Hi all,
>
> In the context of memory profiling an application (with the memory_profiler
> module) we came across a strange behaviour in numpy; see for yourselves:
>
> Line #    Mem usage    Increment   Line Contents
> ================================================
Hi all,
In the context of memory profiling an application (with the memory_profiler
module) we came across a strange behaviour in numpy; see for yourselves:
Line #    Mem usage    Increment   Line Contents
================================================
    29                              @profile
    30
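For reference, a report of that shape comes from memory_profiler's line-by-line
mode; a minimal, hypothetical reproduction (the profiled function is an
assumption, not the original code):

```python
# Run with:  python -m memory_profiler demo.py
import numpy as np
from memory_profiler import profile

@profile
def allocate_and_drop():
    a = np.ones((1000, 1000))   # ~8 MB allocation
    b = a.copy()                # another ~8 MB
    del b                       # the per-line report shows whether usage drops here
    return a.sum()

if __name__ == "__main__":
    allocate_and_drop()
```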