On Fri, Jan 16, 2015 at 8:15 AM, Charles R Harris wrote:
>
> On Fri, Jan 16, 2015 at 7:11 AM, Jaime Fernández del Río
> <jaime.f...@gmail.com> wrote:
>
>> On Fri, Jan 16, 2015 at 3:33 AM, Lars Buitinck wrote:
>>
>>> 2015-01-16 11:55 GMT+01:00 :
>>> > Message: 2
>>> > Date: Thu, 15 Jan 201
I agree; an np.setnumthreads to manage a numpy-global thread pool makes
sense to me.
Of course there are a great many cases where just spawning as many threads
as cores is a sensible default, but if this kind of behavior could not be
overridden, I could see that greatly reducing performance for some o
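A thread-count knob of that shape could be a module-level setting that code consults before spawning workers. The sketch below is purely illustrative: `setnumthreads`/`getnumthreads` are hypothetical names, not an existing numpy API.

```python
import os

# Hypothetical global knob; numpy has no such API, names are illustrative.
_num_threads = os.cpu_count() or 1  # sensible default: one thread per core


def setnumthreads(n):
    """Override how many worker threads the (hypothetical)
    numpy-global thread pool may spawn."""
    global _num_threads
    if n < 1:
        raise ValueError("need at least one thread")
    _num_threads = n


def getnumthreads():
    """Return the current thread-count setting."""
    return _num_threads
```

Callers who want the per-core default do nothing; callers on shared machines call `setnumthreads(1)` before heavy work.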
On 01/16/2015 03:14 PM, Lars Buitinck wrote:
> 2015-01-16 13:29 GMT+01:00 :
>> Date: Fri, 16 Jan 2015 12:43:43 +0100
>> From: Julian Taylor
>> Subject: Re: [Numpy-discussion] Sorting refactor
>> To: Discussion of Numerical Python
>> Message-ID: <54b8f96f.7090...@googlemail.com>
>> Content-Type:
2015-01-16 15:14 GMT+01:00 :
> Date: Fri, 16 Jan 2015 06:11:29 -0800
> From: Jaime Fernández del Río
> Subject: Re: [Numpy-discussion] Sorting refactor
> To: Discussion of Numerical Python
>
> Most of my proposed original changes do not affect the core sorting
> functionality, just the infrastru
On Fri, Jan 16, 2015 at 3:33 AM, Lars Buitinck wrote:
> 2015-01-16 11:55 GMT+01:00 :
> > Message: 2
> > Date: Thu, 15 Jan 2015 21:24:00 -0800
> > From: Jaime Fernández del Río
> > Subject: [Numpy-discussion] Sorting refactor
> > To: Discussion of Numerical Python
> > Message-ID:
> > <
On Fri, Jan 16, 2015 at 4:19 AM, Matthew Brett
wrote:
> Hi,
>
> On Fri, Jan 16, 2015 at 5:24 AM, Jaime Fernández del Río
> wrote:
> > Hi all,
> >
> > I have been taking a deep look at the sorting functionality in numpy, and I
> > think it could use a face lift in the form of a big code refacto
On 16 January 2015 at 13:15, Eelco Hoogendoorn
wrote:
> Perhaps an algorithm can be made faster, but often these multicore
> algorithms are also less efficient, and a less data-dependent way of putting
> my cores to good use would have been preferable. Potentially, other code
> could slow down due
Hi,
On Fri, Jan 16, 2015 at 5:24 AM, Jaime Fernández del Río
wrote:
> Hi all,
>
> I have been taking a deep look at the sorting functionality in numpy, and I
> think it could use a face lift in the form of a big code refactor, to get
> rid of some of the ugliness in the code and make it easier to
I don't know if there is a general consensus or guideline on these matters,
but I am personally not entirely charmed by the use of behind-the-scenes
parallelism, unless explicitly requested.
Perhaps an algorithm can be made faster, but often these multicore
algorithms are also less efficient, and
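For concreteness, here is one way an explicitly requested multicore sort could look. This is a sketch, not anything in numpy; it relies on the fact that `np.sort` releases the GIL on large arrays, so plain threads can overlap, and it illustrates the overhead being discussed: the final merge is sequential and Python-level, so the single-threaded `np.sort` often wins anyway.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def threaded_sort(a, nthreads=4):
    """Sort `a` by sorting independent chunks in parallel threads,
    then k-way merging the sorted chunks (hypothetical helper)."""
    chunks = np.array_split(a, nthreads)
    with ThreadPoolExecutor(max_workers=nthreads) as ex:
        # np.sort returns a sorted copy of each chunk; the GIL is
        # released during the C-level sort, so the threads overlap.
        sorted_chunks = list(ex.map(np.sort, chunks))
    # Sequential Python-level merge: this is where the extra cost lives.
    return np.fromiter(heapq.merge(*sorted_chunks), dtype=a.dtype, count=a.size)
```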
2015-01-16 11:55 GMT+01:00 :
> Message: 2
> Date: Thu, 15 Jan 2015 21:24:00 -0800
> From: Jaime Fernández del Río
> Subject: [Numpy-discussion] Sorting refactor
> To: Discussion of Numerical Python
> Message-ID:
>
> Content-Type: text/plain; charset="utf-8"
>
These changes will make it
Daniel Smith <...@icloud.com> writes:
>
> Hello everyone, I originally brought an optimized einsum routine
forward a few weeks back that attempts to contract numpy arrays together
in an optimal way. This can greatly reduce the scaling and overall cost
of the einsum expression for the cost of a few
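Daniel Smith's work along these lines later landed in numpy itself as `np.einsum_path` and the `optimize` keyword of `np.einsum`; on a modern numpy the idea can be demonstrated like this (array shapes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((10, 30))
b = rng.random((30, 5))
c = rng.random((5, 60))

# einsum_path reports the chosen pairwise contraction order and its
# estimated savings over naive left-to-right evaluation.
path, report = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')

# optimize=True evaluates the expression along an optimized path,
# trading a small intermediate array for a large reduction in FLOPs.
result = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
```

The optimized path contracts the cheap `(10,30)x(30,5)` pair first, keeping every intermediate small.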
Thanks for taking the time to think about this; good work.
Personally, I don't think a factor-5 memory overhead is much to sweat over.
The most complex einsum I have ever needed in a production environment had
5 or 6 terms, and for what this anecdote is worth, speed was a far
bigger concern to me tha