On 29.04.2014 at 02:01, Nathaniel Smith wrote:
> On Tue, Apr 29, 2014 at 12:52 AM, Sturla Molden
> wrote:
>> On 29/04/14 01:30, Nathaniel Smith wrote:
>>
>>> I finally read this paper:
>>>
>>>http://www.cs.utexas.edu/users/flame/pubs/blis2_toms_rev2.pdf
>>>
>>> and I have to say that I'm no longer so convinced that OpenBLAS is the
>>> right starting point.
2014-05-12 19:25 GMT+02:00 Matthew Brett :
> Hi,
>
> On Mon, May 12, 2014 at 6:01 AM, Matthieu Brucher
> wrote:
> > There is the issue of installing the shared library at the proper
> > location as well IIRC?
>
> As Carl implies, the standard numpy installers do static linking to
> the BLAS lib,
Hi,
On Mon, May 12, 2014 at 6:01 AM, Matthieu Brucher
wrote:
> There is the issue of installing the shared library at the proper
> location as well IIRC?
As Carl implies, the standard numpy installers do static linking to
the BLAS lib, so we haven't (as far as I know) got a proper location
for the…
There is the issue of installing the shared library at the proper
location as well IIRC?
2014-05-12 13:54 GMT+01:00 Carl Kleffner :
> Neither the numpy ATLAS build nor the MKL build on Windows makes use of
> shared libs. The latter due to licence restrictions.
>
> Carl
>
>
> 2014-05-12 14:23 GMT+02:00 Matthieu Brucher :
Neither the numpy ATLAS build nor the MKL build on Windows makes use of
shared libs. The latter due to licence restrictions.
Carl
2014-05-12 14:23 GMT+02:00 Matthieu Brucher :
> Yes, they seem to be focused on HPC clusters with sometimes old rules
> (as no shared library).
> Also, they don't use a portable Makefile generator, not even autoconf,
> this may also play a role in Windows support.
Yes, they seem to be focused on HPC clusters with sometimes old rules
(as no shared library).
Also, they don't use a portable Makefile generator, not even autoconf,
this may also play a role in Windows support.
2014-05-12 12:52 GMT+01:00 Olivier Grisel :
> BLIS looks interesting. Besides threading
BLIS looks interesting. Besides threading and runtime configuration,
adding support for building it as a shared library would also be
required to be usable by python packages that have several extension
modules that link against a BLAS implementation.
https://code.google.com/p/blis/wiki/FAQ#Can_I_
Hi,
On Mon, Apr 28, 2014 at 5:50 PM, Nathaniel Smith wrote:
> On Tue, Apr 29, 2014 at 1:05 AM, Matthew Brett
> wrote:
>> Hi,
>>
>> On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
>>> On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn
>>> wrote:
On 11 Apr 2014 at 19:05, Sturla Molden wrote:
On Tue, Apr 29, 2014 at 1:05 AM, Matthew Brett wrote:
> Hi,
>
> On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
>> On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn
>> wrote:
>>>
>>> On 11 Apr 2014 at 19:05, Sturla Molden wrote:
>>>
Sturla Molden wrote:
> Making a totally new BLAS might seem like a crazy idea, but it might be the
> best solution in the long run.
Hi,
On Mon, Apr 28, 2014 at 5:10 PM, Julian Taylor
wrote:
> On 29.04.2014 02:05, Matthew Brett wrote:
>> Hi,
>>
>> On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
>>> On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn
>>> wrote:
On 11 Apr 2014 at 19:05, Sturla Molden wrote:
>>>
On Tue, Apr 29, 2014 at 1:10 AM, Julian Taylor
wrote:
> On 29.04.2014 02:05, Matthew Brett wrote:
>> Hi,
>>
>> On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
>>> It would be really interesting if someone were to try hacking simple
>>> runtime CPU detection into BLIS and see how far you can get.
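The runtime kernel selection being proposed can be sketched as a dispatch table consulted once at startup. Everything below is illustrative: the feature-detection stub and kernel names are hypothetical stand-ins, since real BLIS kernels are ISA-specific assembly chosen from CPUID results, not Python callables.

```python
# Sketch of runtime kernel dispatch, the idea behind adding runtime CPU
# detection to BLIS. A real build would query CPUID at load time and
# pick ISA-specific assembly kernels; here a stub stands in for both.

def detect_features():
    """Stub: pretend the CPU reports these SIMD features."""
    return {"sse2", "avx"}

# Candidate kernels ordered from most to least capable ISA.
# The lambdas are placeholders for real dgemm micro-kernels.
KERNELS = [
    ("avx2", lambda a, b: sum(x * y for x, y in zip(a, b))),
    ("avx",  lambda a, b: sum(x * y for x, y in zip(a, b))),
    ("sse2", lambda a, b: sum(x * y for x, y in zip(a, b))),
]

def select_kernel(features):
    """Pick the first (most capable) kernel the CPU can run."""
    for isa, kernel in KERNELS:
        if isa in features:
            return isa, kernel
    raise RuntimeError("no usable kernel for this CPU")

isa, dot = select_kernel(detect_features())
print(isa)                   # "avx" with the stubbed feature set
print(dot([1, 2], [3, 4]))   # 11
```

The point of the pattern is that one binary ships all kernels and pays only a single indirect call per dispatch, which is what makes a distributable Windows build feasible without per-CPU installers.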
On 29.04.2014 02:05, Matthew Brett wrote:
> Hi,
>
> On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
>> On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn
>> wrote:
>>>
>>> On 11 Apr 2014 at 19:05, Sturla Molden wrote:
>>>
Sturla Molden wrote:
> Making a totally new BLAS might seem like a crazy idea, but it might be the
> best solution in the long run.
Hi,
On Mon, Apr 28, 2014 at 4:30 PM, Nathaniel Smith wrote:
> On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn
> wrote:
>>
>> On 11 Apr 2014 at 19:05, Sturla Molden wrote:
>>
>>> Sturla Molden wrote:
>>>
Making a totally new BLAS might seem like a crazy idea, but it might be the
best solution in the long run.
On Tue, Apr 29, 2014 at 12:52 AM, Sturla Molden wrote:
> On 29/04/14 01:30, Nathaniel Smith wrote:
>
>> I finally read this paper:
>>
>> http://www.cs.utexas.edu/users/flame/pubs/blis2_toms_rev2.pdf
>>
>> and I have to say that I'm no longer so convinced that OpenBLAS is the
>> right starting point.
On 29/04/14 01:30, Nathaniel Smith wrote:
> I finally read this paper:
>
> http://www.cs.utexas.edu/users/flame/pubs/blis2_toms_rev2.pdf
>
> and I have to say that I'm no longer so convinced that OpenBLAS is the
> right starting point.
I think OpenBLAS in the long run is doomed as an OSS project.
On Mon, Apr 28, 2014 at 11:25 AM, Michael Lehn wrote:
>
> On 11 Apr 2014 at 19:05, Sturla Molden wrote:
>
>> Sturla Molden wrote:
>>
>>> Making a totally new BLAS might seem like a crazy idea, but it might be the
>>> best solution in the long run.
>>
>> To see if this can be done, I'll try to re-implement cblas_dgemm and then
>> benchmark against MKL, Accelerate and OpenBLAS.
On 11 Apr 2014 at 19:05, Sturla Molden wrote:
> Sturla Molden wrote:
>
>> Making a totally new BLAS might seem like a crazy idea, but it might be the
>> best solution in the long run.
>
> To see if this can be done, I'll try to re-implement cblas_dgemm and then
> benchmark against MKL, Accelerate and OpenBLAS.
Agree that OpenBLAS is the most favorable route instead of starting from
scratch.
Btw, why is sparse BLAS not included? I've always been under the
impression that scipy sparse supports BLAS - no?
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
Okay, I started taking notes here:
https://github.com/numpy/numpy/wiki/BLAS-desiderata
Please add as appropriate...
-n
On Sat, Apr 12, 2014 at 12:19 AM, Nathaniel Smith wrote:
> On Fri, Apr 11, 2014 at 7:29 PM, Julian Taylor
> wrote:
>> x86 cpus are backward compatible with almost all instructions they ever
>> introduced,
On 12/04/14 01:07, Sturla Molden wrote:
>> ATM the only other way to work with
>> a data set that's larger than memory-divided-by-numcpus is to
>> explicitly set up shared memory, and this is *really* hard for
>> anything more complicated than a single flat array.
>
>
> Not difficult. You just go
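The shared-memory setup Sturla says is "not difficult" looks roughly like this in today's Python. Note the hedge: `multiprocessing.shared_memory` only entered the stdlib in Python 3.8, well after this thread, so this is a retrospective sketch rather than what was available in 2014 (then one would have used `mmap` or a third-party wrapper).

```python
# Sketch of sharing one flat buffer between processes without copying,
# so a data set larger than memory-divided-by-numcpus can be worked on
# by several workers at once. Requires Python >= 3.8.
from multiprocessing import shared_memory

# Parent: create a named shared block and fill it.
shm = shared_memory.SharedMemory(create=True, size=8)
shm.buf[:4] = b"data"

# A worker process would attach by name; here we attach a second handle
# in the same process just to show the mechanism.
other = shared_memory.SharedMemory(name=shm.name)
view = bytes(other.buf[:4])
print(view)  # b'data'

other.close()
shm.close()
shm.unlink()  # free the segment once every process has detached
```

The thread's caveat still stands: this is easy for a single flat array, and genuinely hard for anything with richer structure, since only raw bytes are shared.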
On Fri, Apr 11, 2014 at 7:29 PM, Julian Taylor
wrote:
> x86 cpus are backward compatible with almost all instructions they ever
> introduced, so one machine with the latest instruction set supported is
> sufficient to test almost everything.
> For that the runtime kernel selection must be tuneable
On Sat, Apr 12, 2014 at 12:07 AM, Sturla Molden wrote:
> On 12/04/14 00:39, Nathaniel Smith wrote:
>
>> The spawn mode is fine and all, but (a) the presence of something in
>> 3.4 helps only a minority of users, (b) "spawn" is not a full
>> replacement for fork;
>
> It basically does the same as on Windows.
On 12/04/14 00:39, Nathaniel Smith wrote:
> The spawn mode is fine and all, but (a) the presence of something in
> 3.4 helps only a minority of users, (b) "spawn" is not a full
> replacement for fork;
It basically does the same as on Windows. If you want portability to
Windows, you must abide by
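The "spawn" start method under discussion can be selected explicitly; a minimal sketch using only the stdlib `multiprocessing` API:

```python
# Opting into the "spawn" start method that the thread contrasts with
# fork. Under spawn the child starts a fresh interpreter, so no
# inherited thread or lock state (e.g. inside a threaded BLAS) can
# deadlock it - at the cost of re-importing the parent module, which
# is why the __main__ guard is mandatory, exactly as on Windows.
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # instead of the platform default
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```

This illustrates point (b) above: spawn is available everywhere, but code written for fork (closures, module-level state mutated after import) does not run under it unchanged.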
On Fri, Apr 11, 2014 at 11:26 PM, Sturla Molden wrote:
> On 11/04/14 20:47, Nathaniel Smith wrote:
>
>> Also, while Windows is maybe in the worst shape, all platforms would
>> seriously benefit from the existence of a reliable speed-competitive
>> binary-distribution-compatible BLAS that doesn't break fork().
On 11/04/14 20:47, Nathaniel Smith wrote:
> Also, while Windows is maybe in the worst shape, all platforms would
> seriously benefit from the existence of a reliable speed-competitive
> binary-distribution-compatible BLAS that doesn't break fork().
Windows is worst off, yes.
I don't think fork b
On 12/04/14 00:01, Matthew Brett wrote:
> No - sure - but it would be frustrating if you found yourself
> optimizing with a compiler that is useless for subsequent open-source
> builds.
No, I think MSVC or gcc 4.8/4.9 will work too. It's just that I happen
to have icc and clang on this computer
On Fri, Apr 11, 2014 at 2:58 PM, Sturla Molden wrote:
> On 11/04/14 23:11, Matthew Brett wrote:
>
>> Are you sure that you can redistribute object code statically linked
>> against icc runtimes?
>
> I am not a lawyer...
No - sure - but it would be frustrating if you found yourself
optimizing with a compiler that is useless for subsequent open-source
builds.
On 11/04/14 23:11, Matthew Brett wrote:
> Are you sure that you can redistribute object code statically linked
> against icc runtimes?
I am not a lawyer...
Hi,
On Fri, Apr 11, 2014 at 10:05 AM, Sturla Molden wrote:
> Sturla Molden wrote:
>
>> Making a totally new BLAS might seem like a crazy idea, but it might be the
>> best solution in the long run.
>
> To see if this can be done, I'll try to re-implement cblas_dgemm and then
> benchmark against MKL, Accelerate and OpenBLAS.
On Fri, Apr 11, 2014 at 7:53 PM, Julian Taylor
wrote:
> On 11.04.2014 18:03, Nathaniel Smith wrote:
>> On Fri, Apr 11, 2014 at 12:21 PM, Carl Kleffner wrote:
>>> a discussion about OpenBLAS on the octave maintainer list:
>>> http://article.gmane.org/gmane.comp.gnu.octave.maintainers/38746
>>
>> I'm getting the impression that OpenBLAS is being both a tantalizing
>> opportunity and a practical thorn-in-the-side.
On 11.04.2014 18:03, Nathaniel Smith wrote:
> On Fri, Apr 11, 2014 at 12:21 PM, Carl Kleffner wrote:
>> a discussion about OpenBLAS on the octave maintainer list:
>> http://article.gmane.org/gmane.comp.gnu.octave.maintainers/38746
>
> I'm getting the impression that OpenBLAS is being both a tantalizing
> opportunity and a practical thorn-in-the-side.
On Fri, Apr 11, 2014 at 6:05 PM, Sturla Molden wrote:
> Sturla Molden wrote:
>
>> Making a totally new BLAS might seem like a crazy idea, but it might be the
>> best solution in the long run.
>
> To see if this can be done, I'll try to re-implement cblas_dgemm and then
> benchmark against MKL, Accelerate and OpenBLAS.
On 11.04.2014 19:05, Sturla Molden wrote:
> Sturla Molden wrote:
>
>> Making a totally new BLAS might seem like a crazy idea, but it might be the
>> best solution in the long run.
>
> To see if this can be done, I'll try to re-implement cblas_dgemm and then
> benchmark against MKL, Accelerate and OpenBLAS.
Hi,
On Fri, Apr 11, 2014 at 9:03 AM, Nathaniel Smith wrote:
> On Fri, Apr 11, 2014 at 12:21 PM, Carl Kleffner wrote:
>> a discussion about OpenBLAS on the octave maintainer list:
>> http://article.gmane.org/gmane.comp.gnu.octave.maintainers/38746
>
> I'm getting the impression that OpenBLAS is being both a tantalizing
> opportunity and a practical thorn-in-the-side.
On 11.04.2014 18:03, Nathaniel Smith wrote:
> On Fri, Apr 11, 2014 at 12:21 PM, Carl Kleffner wrote:
>> a discussion about OpenBLAS on the octave maintainer list:
>> http://article.gmane.org/gmane.comp.gnu.octave.maintainers/38746
>
> I'm getting the impression that OpenBLAS is being both a tantalizing
> opportunity and a practical thorn-in-the-side.
Sturla Molden wrote:
> Making a totally new BLAS might seem like a crazy idea, but it might be the
> best solution in the long run.
To see if this can be done, I'll try to re-implement cblas_dgemm and then
benchmark against MKL, Accelerate and OpenBLAS. If I can get the
performance better than
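A benchmark like the one Sturla proposes needs little more than a harness that converts wall time to GFLOP/s (dgemm performs 2*m*n*k floating-point operations). The `naive_dgemm` below is a hypothetical stand-in so the sketch runs self-contained; the real candidates (MKL, Accelerate, OpenBLAS, the new implementation) would be called through ctypes or a small C shim.

```python
# Timing harness for dgemm-style benchmarks. naive_dgemm is only a
# placeholder callable with the right shape; swap in the library under
# test for a real measurement.
import time

def naive_dgemm(A, B, m, n, k):
    """Textbook triple loop over row-major nested lists; placeholder."""
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for p in range(k):
            a = A[i][p]
            for j in range(n):
                C[i][j] += a * B[p][j]
    return C

def gflops(fn, A, B, m, n, k, reps=3):
    """Best-of-reps throughput in GFLOP/s for one dgemm-like callable."""
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(A, B, m, n, k)
        best = min(best, time.perf_counter() - t0)
    return 2.0 * m * n * k / best / 1e9

m = n = k = 64
A = [[1.0] * k for _ in range(m)]
B = [[1.0] * n for _ in range(k)]
print(f"naive: {gflops(naive_dgemm, A, B, m, n, k):.3f} GFLOP/s")
```

Best-of-N timing is used deliberately: for a compute kernel the minimum over repetitions is the least noisy estimate, since interference can only slow a run down.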
Nathaniel Smith wrote:
> I unfortunately don't have the skills to actually lead such an effort
> (I've never written a line of asm in my life...), but surely our
> collective communities have people who do?
The assembly part in OpenBLAS/GotoBLAS is the major problem. Not just that
it's AT&T syntax…
On Fri, Apr 11, 2014 at 12:21 PM, Carl Kleffner wrote:
> a discussion about OpenBLAS on the octave maintainer list:
> http://article.gmane.org/gmane.comp.gnu.octave.maintainers/38746
I'm getting the impression that OpenBLAS is being both a tantalizing
opportunity and a practical thorn-in-the-side