> If you have some spare cycles, maybe you can open a pull request to add
> np.isclose to the "See Also" section?
That would be great.
Remember that equality for floats is bit-for-bit equality (barring NaN
and inf...).
But you hardly ever actually want to do that with floats.
But probably np.all
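To illustrate the bit-for-bit vs. tolerance distinction with a small sketch (a standard example, not from the thread):

```python
import numpy as np

a = np.array([0.1 + 0.2])
b = np.array([0.3])

# Bit-for-bit comparison: 0.1 + 0.2 is not exactly 0.3 in
# float64, so array_equal reports False.
print(np.array_equal(a, b))   # False

# Tolerance-based comparison: allclose/isclose treat values
# within rtol/atol of each other as equal, which is usually
# what you actually want with floats.
print(np.allclose(a, b))      # True
```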
Thanks a lot everyone!
I am time and again amazed by how optimized numpy is! Hats off to you guys!
R
On Thu, Dec 17, 2015 at 11:02 PM, Jaime Fernández del Río <jaime.f...@gmail.com> wrote:
> On Thu, Dec 17, 2015 at 7:37 PM, CJ Carey wrote:
>
>> I believe this line is the reason:
>>
>> https
On Thu, Dec 17, 2015 at 7:37 PM, CJ Carey wrote:
> I believe this line is the reason:
>
> https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110
>
The magic actually happens in count_nonzero_bytes_384, a few lines before
that.
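A quick sanity check from Python (this is just a sketch showing the results agree; only the boolean input can take the byte-counting fast path in that C source):

```python
import numpy as np

a = np.random.randint(0, 2, (100, 5))
a_bool = a.astype(bool)

# All three give the same count; the boolean array is the one
# that dispatches to the specialized byte-counting loop.
assert np.count_nonzero(a) == np.count_nonzero(a_bool) == int(a.sum())
```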
Thanks everyone for helping me glimpse the secret world of FORTRAN
compilers. I am running a Linux machine, so I will look into MKL and
openBLAS. It was easy for me to get a Intel parallel studio XE license
as a student, so I have options.
Would it make sense at all to bring that optimization to np.sum()? I
know that I have np.sum() all over the place instead of count_nonzero,
partly because it is a MatLab-ism and partly because it is easier to write.
I had no clue that there was a performance difference.
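For counting True entries in a boolean mask, count_nonzero is a drop-in replacement for the np.sum() habit; a minimal sketch (array name and size are made up for illustration):

```python
import numpy as np

mask = np.random.rand(1_000_000) > 0.5

# Same answer either way; count_nonzero can use the specialized
# byte-counting loop for boolean input, while np.sum goes through
# the general reduction machinery.
n_sum = int(np.sum(mask))
n_cnz = np.count_nonzero(mask)
assert n_sum == n_cnz
```

In IPython, `%timeit` on both calls shows whether the difference matters for your array sizes.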
Cheers!
Ben Root
I believe this line is the reason:
https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110
On Thu, Dec 17, 2015 at 11:52 AM, Raghav R V wrote:
> I was just playing with `count_nonzero` and found it to be significantly
> faster
Hi,
I just ran both on the same hardware and got a slightly faster computation
with numpy:
Matlab R2012a: 16.78 s (best of 3)
numpy (python 3.4, numpy 1.10.1, anaconda accelerate (MKL)): 14.8 s (best
of 3)
The difference could be because my Matlab version is a few years old, so its
MKL would be older.
I was just playing with `count_nonzero` and found it to be significantly
faster for boolean arrays compared to integer arrays
>>> a = np.random.randint(0, 2, (100, 5))
>>> a_bool = a.astype(bool)
>>> %timeit np.sum(a)
10 loops, best of 3: 5.64 µs per loop
>>> %timeit np.
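A self-contained version of that comparison using the stdlib timeit module (a sketch; the actual numbers will vary by machine and NumPy version, so none are hardcoded here):

```python
import timeit
import numpy as np

a = np.random.randint(0, 2, (100, 5))
a_bool = a.astype(bool)

for label, stmt in [
    ("np.sum(a)", lambda: np.sum(a)),
    ("np.count_nonzero(a)", lambda: np.count_nonzero(a)),
    ("np.count_nonzero(a_bool)", lambda: np.count_nonzero(a_bool)),
]:
    # Best of 3 repeats, averaged over the inner loop count.
    t = min(timeit.repeat(stmt, number=10_000, repeat=3)) / 10_000
    print(f"{label:<26} {t * 1e6:.3f} µs per call")
```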
On Thu, Dec 17, 2015 at 5:52 AM, Sturla Molden wrote:
> On 17/12/15 12:06, Francesc Alted wrote:
>
>> Pretty good. I did not know that OpenBLAS was so close in performance
>> to MKL.
>
> MKL, OpenBLAS and Accelerate are very close in performance, except for
> level-1 BLAS where Accelerate and
On Thu, 2015-12-17 at 13:43, Nico Schlömer wrote:
> Hi everyone,
>
>
> I noticed a funny behavior in numpy's array_equal. The two arrays
> ```
> a1 = numpy.array(
> [3.14159265358979320],
> dtype=numpy.float64
> )
> a2 = numpy.array(
> [3.14159265358979329],
> dtype=numpy
On 17 December 2015 at 14:43, Nico Schlömer wrote:
> I'm not sure where I'm going wrong here. Any hints?
You are dancing around the boundary between close floating point numbers,
and when you are dealing with ULPs, number of decimal places is a bad
measure. Working with plain numbers, instead o
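The ULP point can be made concrete (a sketch using `math.ulp` and `np.spacing`, which both report the gap to the next representable float64):

```python
import math
import numpy as np

x = 3.141592653589793  # float64 value of pi to 16 digits

# For values in [2, 4) the spacing between adjacent float64
# values is 2**-51, roughly 4.44e-16.
assert math.ulp(x) == np.spacing(x) == 2.0**-51

# So the next representable number already differs around the
# 16th significant decimal digit; counting decimal places past
# that point tells you nothing.
print(np.nextafter(x, np.inf))
```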
Hi everyone,
I noticed a funny behavior in numpy's array_equal. The two arrays
```
a1 = numpy.array(
[3.14159265358979320],
dtype=numpy.float64
)
a2 = numpy.array(
[3.14159265358979329],
dtype=numpy.float64
)
```
(differing in the 18th overall digit) are reported as equal.
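That report is in fact correct at the bit level: the two decimal literals are closer together than half a ULP, so they round to the exact same float64. A sketch completing the example above:

```python
import numpy as np

a1 = np.array([3.14159265358979320], dtype=np.float64)
a2 = np.array([3.14159265358979329], dtype=np.float64)

# Reinterpreting the bytes as integers shows the stored 64-bit
# patterns are identical, so array_equal has to say True.
assert a1.view(np.int64)[0] == a2.view(np.int64)[0]
assert np.array_equal(a1, a2)
```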
On 16/12/15 20:47, Derek Homeier wrote:
Getting around 30 s wall time here on a not so recent 4-core iMac, so that
would seem to fit
(iirc Accelerate should actually largely be using the same machine code as MKL).
Yes, the same kernels, but not the same threadpool. Accelerate uses the
GCD, M
On 17/12/15 12:06, Francesc Alted wrote:
Pretty good. I did not know that OpenBLAS was so close in performance
to MKL.
MKL, OpenBLAS and Accelerate are very close in performance, except for
level-1 BLAS where Accelerate and MKL are better than OpenBLAS.
MKL requires the number of threads t
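To see which of these backends a given NumPy build actually links against, there is a built-in introspection call (a sketch; the exact output format varies between NumPy versions):

```python
import numpy as np

# Prints the BLAS/LAPACK libraries this build was compiled
# against (MKL, OpenBLAS, Accelerate, ...).
np.show_config()
```

Thread counts for these backends are typically controlled through environment variables such as `MKL_NUM_THREADS` or `OPENBLAS_NUM_THREADS`, set before the interpreter starts.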
2015-12-17 12:00 GMT+01:00 Daπid :
> On 16 December 2015 at 18:59, Francesc Alted wrote:
>
>> Probably MATLAB is shipping with Intel MKL enabled, which probably is the
>> fastest LAPACK implementation out there. NumPy supports linking with MKL,
>> and actually Anaconda does that by default, so s
On 16 December 2015 at 18:59, Francesc Alted wrote:
> Probably MATLAB is shipping with Intel MKL enabled, which probably is the
> fastest LAPACK implementation out there. NumPy supports linking with MKL,
> and actually Anaconda does that by default, so switching to Anaconda would
> be a good opt