On Sat, Jul 26, 2014 at 5:19 PM, Sturla Molden wrote:
> Robert Kern wrote:
>
> >> It would presumably require a global threading.RLock for protecting the
> >> global state.
> >
> > We would use thread-local storage like we currently do with the
> > np.errstate() context manager. Each thread will have its own "global"
> > state.
I'm attempting to build NumPy from source in order to do some development.
I've cloned the GitHub repo and installed the prerequisites for Ubuntu:
http://www.scipy.org/scipylib/building/linux.html#debian-ubuntu
However, when I do
>>> python setup.py build
I get
Running from numpy source directory.
Tr
Robert Kern wrote:
>> It would presumably require a global threading.RLock for protecting the
>> global state.
>
> We would use thread-local storage like we currently do with the
> np.errstate() context manager. Each thread will have its own "global"
> state.
That sounds like a better plan, yes
On Sat, Jul 26, 2014 at 8:04 PM, Sturla Molden wrote:
> Benjamin Root wrote:
>
>> My other concern would be with multi-threaded code (which is where a global
>> state would be bad).
>
> It would presumably require a global threading.RLock for protecting the
> global state.
We would use thread-local storage like we currently do with the
np.errstate() context manager. Each thread will have its own "global"
state.
I completely agree with Eelco. I expect numpy.mean to do something
simple and straightforward. If the naive method is not well suited for
my data, I can deal with it and have my own ad hoc method.
On Sat, Jul 26, 2014 at 3:19 PM, Eelco Hoogendoorn wrote:
> Perhaps I in turn am missing something;
> statsmodels still has avoided anything that smells like a global state that
> changes calculation.
If global states are stored in a stack, as in OpenGL, it is not so bad. A
context manager could push a state in __enter__ and pop the state in
__exit__. This is actually how I write OpenGL
Perhaps I in turn am missing something; but I would suppose that any
algorithm that requires multiple passes over the data is off the table?
Perhaps I am being a little old-fashioned and performance-oriented here,
but to make the ultra-majority of use cases suffer a factor-two performance
penalty f
Benjamin Root wrote:
> My other concern would be with multi-threaded code (which is where a global
> state would be bad).
It would presumably require a global threading.RLock for protecting the
global state.
Sturla
NumPy-Discussion mailing list
On Sat, Jul 26, 2014 at 2:44 PM, Benjamin Root wrote:
> That is one way of doing it, and probably the cleanest way. Or else you
> have to pass in the context object everywhere anyway. But I am not so
> concerned about that (we do that for other things as well). Bigger concerns
> would be nested contexts.
That is one way of doing it, and probably the cleanest way. Or else you
have to pass in the context object everywhere anyway. But I am not so
concerned about that (we do that for other things as well). Bigger concerns
would be nested contexts. For example, what if one of the scikit functions
uses such a context manager internally?
On Sat, Jul 26, 2014 at 9:57 AM, Benjamin Root wrote:
> I could get behind the context manager approach. It would help keep
> backwards compatibility, while providing a very easy (and clean) way of
> consistently using the same reduction operation. Adding kwargs is just a
> road to hell.
>
Would
Sturla Molden wrote:
> Sebastian Berg wrote:
>
>> Yes, it is much more complicated and incompatible with naive ufuncs if
>> you want your memory access to be optimized. And optimizing that is very
>> much worth it speed wise...
>
> Why? Couldn't we just copy the data chunk-wise to a temporary buffer of
> say 2**13 numbers
Sebastian Berg wrote:
> Yes, it is much more complicated and incompatible with naive ufuncs if
> you want your memory access to be optimized. And optimizing that is very
> much worth it speed wise...
Why? Couldn't we just copy the data chunk-wise to a temporary buffer of say
2**13 numbers and th
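Something like this toy sketch of the chunk-wise idea (not NumPy's actual
implementation, which lives in C; `buffered_sum` and `BLOCK` are made-up
names):

```python
import numpy as np

# Walk the array in blocks of 2**13 elements, copying each block into a
# small contiguous temporary buffer before reducing it, so the inner loop
# always sees contiguous, cache-friendly memory.
BLOCK = 2 ** 13

def buffered_sum(a):
    a = np.asarray(a).ravel()  # flatten; may copy if non-contiguous
    total = 0.0
    buf = np.empty(BLOCK, dtype=np.float64)
    for start in range(0, a.size, BLOCK):
        chunk = a[start:start + BLOCK]
        buf[:chunk.size] = chunk          # contiguous temporary copy
        total += buf[:chunk.size].sum()   # reduce the buffer
    return total

x = np.arange(100000, dtype=np.float64)
assert np.isclose(buffered_sum(x), x.sum())
```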
A context manager makes sense.
I very much appreciate the time constraints and the effort put in thus far,
but if we cannot make something work uniformly, I wonder if we should
include it in the master at all. I don't have a problem with customizing
algorithms where fp accuracy demands it; I have
On Sa, 2014-07-26 at 15:38 +0200, Eelco Hoogendoorn wrote:
> I was wondering the same thing. Are there any known tradeoffs to this
> method of reduction?
>
Yes, it is much more complicated and incompatible with naive ufuncs if
you want your memory access to be optimized. And optimizing that is very
much worth it speed wise...
I could get behind the context manager approach. It would help keep
backwards compatibility, while providing a very easy (and clean) way of
consistently using the same reduction operation. Adding kwargs is just a
road to hell.
Cheers!
Ben Root
On Sat, Jul 26, 2014 at 9:53 AM, Julian Taylor wrote:
On 26.07.2014 15:38, Eelco Hoogendoorn wrote:
>
> Why is it not always used?
for 1d reduction the iterator blocks by 8192 elements even when no
buffering is required. There is a TODO in the source to fix that by
adding additional checks. Unfortunately nobody knows what these
additional tests would
I was wondering the same thing. Are there any known tradeoffs to this
method of reduction?
On Sat, Jul 26, 2014 at 12:39 PM, Sturla Molden wrote:
> Sebastian Berg wrote:
>
> > chose more stable algorithms for such statistical functions. The
> > pairwise summation that is in master now is very awesome, but it is not
> > secure enough in the sense that a new user will have difficulty
> > understanding when he can be sure it is used.
Sebastian Berg wrote:
> chose more stable algorithms for such statistical functions. The
> pairwise summation that is in master now is very awesome, but it is not
> secure enough in the sense that a new user will have difficulty
> understanding when he can be sure it is used.
Why is it not always used?
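For reference, the accuracy gap under discussion is easy to demonstrate with
a toy Python version of pairwise summation (not NumPy's C implementation):

```python
import numpy as np

# Naive left-to-right summation: rounding error grows roughly O(n).
def naive_sum(a):
    s = np.float32(0.0)
    for x in a:
        s += x
    return s

# Recursive pairwise summation: error grows roughly O(log n), because
# partial sums of similar magnitude are combined.
def pairwise_sum(a):
    if len(a) <= 8:           # small base case, summed naively
        return naive_sum(a)
    mid = len(a) // 2
    return pairwise_sum(a[:mid]) + pairwise_sum(a[mid:])

a = np.full(10**5, 0.1, dtype=np.float32)
exact = 10**5 * 0.1
# The pairwise result stays far closer to the exact value.
assert abs(pairwise_sum(a) - exact) < abs(naive_sum(a) - exact)
```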
On Fr, 2014-07-25 at 21:23 +0200, Eelco Hoogendoorn wrote:
> It need not be exactly representable as such; take the mean of [1, 1
> +eps] for instance. Granted, there are at most two numbers in the range
> of the original dtype which are closest to the true mean; but I'm not
> sure that computing the
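The [1, 1+eps] example can be made concrete:

```python
import numpy as np

# The exact mean of [1, 1 + eps] is 1 + eps/2, which is not representable
# in float64; the two nearest representable values are 1 and 1 + eps.
eps = np.finfo(np.float64).eps          # 2**-52
assert 1.0 + eps / 2 == 1.0             # 1 + eps/2 rounds back to 1.0

m = np.mean(np.array([1.0, 1.0 + eps]))
# The computed mean must land on one of the two neighbours (here 1.0,
# since the intermediate sum 2 + eps rounds to 2.0 under half-to-even).
assert m in (1.0, 1.0 + eps)
```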
Cool, sounds like great improvements. I can imagine that after some loop
unrolling one becomes memory bound pretty soon. Is the summation guaranteed to
traverse the data in its natural order? And do you happen to know what the
rules for choosing accumulator dtypes are?
On Sat, Jul 26, 2014 at 9:19 AM, Lars Buitinck wrote:
>> Date: Fri, 25 Jul 2014 15:06:40 +0200
>> From: Olivier Grisel
>> Subject: Re: [Numpy-discussion] change default integer from int32 to
>> int64 on win64?
>> To: Discussion of Numerical Python
>> Content-Type: text/plain; charset=UTF-8
> Date: Fri, 25 Jul 2014 15:06:40 +0200
> From: Olivier Grisel
> Subject: Re: [Numpy-discussion] change default integer from int32 to
> int64 on win64?
> To: Discussion of Numerical Python
> Content-Type: text/plain; charset=UTF-8
>
> The dtype returned by np.where looks right (int64):
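A quick check of that: index arrays from np.where use np.intp, the
pointer-sized integer, which is int64 on 64-bit platforms (including win64):

```python
import numpy as np

# np.where with a single condition returns a tuple of index arrays.
idx, = np.where(np.array([1, 0, 2, 0]) != 0)
assert idx.dtype == np.intp   # int64 on a 64-bit build
assert idx.tolist() == [0, 2]
```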