On Fri, Nov 4, 2011 at 7:43 PM, T J wrote:
> On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen wrote:
>> An acid test for proposed rules: given two arrays `a` and `b`,
>>
>> a = [1, 2, IGNORED(3), IGNORED(4)]
>> b = [10, IGNORED(20), 30, IGNORED(40)]
[...]
> (A1) Does unmask(a+b) ==
05.11.2011 00:14, T J wrote:
[clip]
> a = 1
> a += 2
> a += IGNORE
> b = 1 + 2 + IGNORE
>
> I think having a == b is essential. If they can be different, that will
> only lead to confusion. On this point alone, does anyone think it is
> acceptable to have a != b?
It seems
>> Also, how does something like this get handled?
>>
>> a = [1, 2, IGNORED(3), NaN]
>>
>> If I were to say, "What is the mean of 'a'?", then I think most of the time
>> people would want 1.5.
>
> I would want NaN! But that's because the only way I get NaN's is when
> I do dumb things like comp
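A minimal pure-Python sketch of the "1.5" reading, assuming ignored entries are tracked in a parallel boolean mask; the IGNORED wrapper and the mask representation are both illustrative, not an existing numpy API:

```python
import math

# Mean that skips both ignored entries and NaNs: the reading under which
# mean([1, 2, IGNORED(3), NaN]) == 1.5.  The parallel `ignored` mask is
# an assumption about how ignoring might be represented.
def mean_skipping(data, ignored):
    vals = [d for d, ig in zip(data, ignored)
            if not ig and not math.isnan(d)]
    return sum(vals) / len(vals)

mean_skipping([1.0, 2.0, 3.0, float("nan")],
              [False, False, True, False])   # -> 1.5
```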
04.11.2011 22:29, Nathaniel Smith wrote:
[clip]
> Continuing my theme of looking for consensus first... there are
> obviously a ton of ugly corners in here. But my impression is that at
> least for some simple cases, it's clear what users want:
>
> a = [1, IGNORED(2), 3]
> # array-with-ignor
04.11.2011 23:29, Pauli Virtanen wrote:
[clip]
> As the definition concerns only what happens on assignment, it does not
> have problems with commutativity.
This is of course then not really true in a wider sense, as an example
from "T J" shows:
a = 1
a += IGNORE(3)
# -> a := a + IGNORE(3)
#
Hi,
I noticed this:
(Intel Mac):
In [2]: np.int32(np.float32(2**31))
Out[2]: -2147483648
(PPC):
In [3]: np.int32(np.float32(2**31))
Out[3]: 2147483647
I assume what is happening is that the cast is handed off to the C
library, and that the behavior of the C library differs on these
platforms?
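The two observed outputs match the two common machine-level behaviors for an out-of-range float-to-int conversion: two's-complement wraparound (the Intel result) versus saturation at the type's limits (the PPC result). Strictly speaking, C leaves this conversion undefined, so these are observed behaviors rather than guarantees. A pure-Python sketch of both, for illustration only:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrap_int32(value):
    # Two's-complement wraparound: reduce modulo 2**32, then re-center
    # into the signed range.
    v = int(value) & 0xFFFFFFFF
    return v - 2**32 if v >= 2**31 else v

def saturate_int32(value):
    # Saturation: clamp to the representable int32 range.
    return max(INT32_MIN, min(INT32_MAX, int(value)))

wrap_int32(2**31)      # -> -2147483648  (the Intel result)
saturate_int32(2**31)  # -> 2147483647   (the PPC result)
```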
04.11.2011 23:04, Nathaniel Smith wrote:
[clip]
> Assuming that, I believe that what people want for IGNORED values is
> unop(SPECIAL_1) == SPECIAL_1
> which doesn't seem to be an option in your taxonomy.
Well, you can always add a new branch for rules on what to do with unary
ops.
[clip]
04.11.2011 22:57, T J wrote:
[clip]
> > (m) mark-ignored
> >
> > a := SPECIAL_1
> > # -> a == SPECIAL_a ; the payload of the RHS is neglected,
> > # the assigned value has the original LHS
> > # as the payload
[clip]
> Does this behave as expected
On Fri, Nov 4, 2011 at 3:04 PM, Nathaniel Smith wrote:
> On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen wrote:
>> If classified this way, behaviour of items in np.ma arrays is different
>> in different operations, but seems roughly PdX, where X stands for
>> returning a masked value with the fir
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen wrote:
> I have a feeling that if you don't start by mathematically defining the
> scalar operations first, and only after that generalize them to arrays,
> some conceptual problems may follow.
>
> On the other hand, I should note that numpy.ma does
04.11.2011 20:49, T J wrote:
[clip]
> To push this forward a bit, can I propose that IGNORE behave as: PnC
The *n* classes can be a bit confusing in Python:
### PnC
>>> x = np.array([1, 2, 3])
>>> y = np.array([4, 5, 6])
>>> ignore(y[1])
>>> z = x + y
>>> z
np.array([5, IGNORE(7), 9])
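A minimal sketch of these "PnC" semantics, with class name and API invented for illustration: the ignored flag propagates through binary ops, but the payload is still computed "underneath", reproducing the transcript above:

```python
class IgnoreArray:
    """Toy model: a list of values plus a set of ignored indices."""

    def __init__(self, data, ignored=None):
        self.data = list(data)
        self.ignored = set(ignored or ())

    def ignore(self, i):
        self.ignored.add(i)

    def __add__(self, other):
        # Payloads are computed underneath; ignored flags propagate.
        data = [a + b for a, b in zip(self.data, other.data)]
        return IgnoreArray(data, self.ignored | other.ignored)

    def __repr__(self):
        return "[" + ", ".join(
            f"IGNORE({v})" if i in self.ignored else str(v)
            for i, v in enumerate(self.data)) + "]"

x = IgnoreArray([1, 2, 3])
y = IgnoreArray([4, 5, 6])
y.ignore(1)
repr(x + y)   # -> '[5, IGNORE(7), 9]'
```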
On Fri, Nov 4, 2011 at 1:22 PM, T J wrote:
> I agree that it would be ideal if the default were to skip IGNORED values,
> but that behavior seems inconsistent with its propagation properties (such
> as when adding arrays with IGNORED values). To illustrate, when we did
> "x+2", we were stating th
On Fri, Nov 4, 2011 at 1:59 PM, Pauli Virtanen wrote:
>
> For shorthand, we can refer to the above choices with the nomenclature
>
> <rule> ::= <propagation> <destructivity> <payload>
> <propagation> ::= "P" | "N"
> <destructivity> ::= "d" | "n" | "s"
> <payload> ::= "S" | "E" | "C"
>
>
I really like this problem formulation and description. Can we all agree to
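The three-position codes used throughout the thread (PnC, PdS, PdX, ...) can be checked mechanically against that grammar; a small sketch, with the category names assumed from context:

```python
import re

# Position 1: propagation "P"/"N"; position 2: destructivity "d"/"n"/"s";
# position 3: payload class "S"/"E"/"C".
RULE_CODE = re.compile(r"^[PN][dns][SEC]$")

def is_valid_rule(code):
    return bool(RULE_CODE.match(code))

is_valid_rule("PnC")  # -> True
is_valid_rule("PdQ")  # -> False
```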
On Fri, Nov 4, 2011 at 1:03 PM, Gary Strangman wrote:
>
> To push this forward a bit, can I propose that IGNORE behave as: PnC
>>
>> >>> x = np.array([1, 2, 3])
>> >>> y = np.array([10, 20, 30])
>> >>> ignore(x[1])
>> >>> x
>> [1, IGNORED(2), 3]
>> >>> x + 2
>> [3, IGNORED(4), 5]
>> >>> x + y
>>
> NAN and NA apparently fall into the PdS class.
>
Here is where I think we need to be a bit more careful. It is true that we
want NAN and MISSING to propagate, but then we additionally want to ignore it
sometimes. This is precisely why we have functions like nansum. Although
people are wel
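The distinction can be shown without numpy at all: NaN propagates through a plain sum, while a nansum-style reduction drops it on request (this is a pure-Python analogue of np.nansum, not numpy code):

```python
import math

values = [1.0, float("nan"), 3.0]

total = sum(values)                                      # NaN propagates
nantotal = sum(v for v in values if not math.isnan(v))   # nansum-style skip

math.isnan(total)   # -> True
nantotal            # -> 4.0
```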
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen wrote:
>
> I have a feeling that if you don't start by mathematically defining the
> scalar operations first, and only after that generalize them to arrays,
> some conceptual problems may follow.
>
Yes. I was going to mention this point as well.
>
04.11.2011 19:59, Pauli Virtanen wrote:
[clip]
> This makes inline binary ops
> behave like Nn. Reductions are N. (Assignment: dC, reductions: N, binary
> ops: PX, unary ops: PC, inline binary ops: Nn).
Sorry, inline binary ops are also PdX, not Nn.
--
Pauli Virtanen
04.11.2011 17:31, Gary Strangman wrote:
[clip]
> The question does still remain what to do when performing operations like
> those above in IGNORE cases. Perform the operation "underneath"? Or not?
I have a feeling that if you don't start by mathematically defining the
scalar operations first
For np.gradient(), one can specify a sample distance for each axis to apply
to the gradient. But all this does is divide the gradient by the
sample distance. I could easily do that myself with the output from
gradient. Wouldn't it be more valuable to be able to specify the width of
the ce
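To make the point concrete, here is a pure-Python sketch mirroring what np.gradient computes for 1-D input (central differences inside, one-sided differences at the edges); the spacing argument only ever appears as a divisor:

```python
def gradient(y, dx=1.0):
    # Central differences at interior points, one-sided at the edges,
    # in the spirit of np.gradient for a 1-D sequence.
    g = []
    for i in range(len(y)):
        if i == 0:
            g.append((y[1] - y[0]) / dx)
        elif i == len(y) - 1:
            g.append((y[-1] - y[-2]) / dx)
        else:
            g.append((y[i + 1] - y[i - 1]) / (2 * dx))
    return g

gradient([1.0, 2.0, 4.0])          # -> [1.0, 1.5, 2.0]
gradient([1.0, 2.0, 4.0], dx=2.0)  # -> [0.5, 0.75, 1.0]
```

Note how the dx=2.0 result is exactly the dx=1.0 result divided by 2, which is the questioner's point: the sample distance is applied after the fact.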
Benjamin Root writes:
> On Fri, Nov 4, 2011 at 11:08 AM, Lluís wrote:
> Gary Strangman writes:
> [...]
>> destructive + non-propagating = the data point is truly missing, this is
>> the nature of that data point, such missingness should be replicated in
>> elementwise operations,
>> destructive + propagating = the data point is truly missing (satellite fell
>> into the ocean; dog ate my source datasheet, or whatever), this is the
>> nature of that data point, such missingness should be replicated in
>> elementwise operations, and the missingness SHOULD interfere
On Fri, Nov 4, 2011 at 11:08 AM, Lluís wrote:
> Gary Strangman writes:
> [...]
>
> > destructive + non-propagating = the data point is truly missing, this is
> > the nature of that data point, such missingness should be replicated in
> > elementwise operations, but such missingness should NOT
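One way to read "should NOT propagate" in code: elementwise operations keep the mask, but reductions simply skip the masked entries. The mask representation and function name here are illustrative only:

```python
# Non-propagating reduction: masked entries are skipped, so missingness
# does not swallow the aggregate result.
def masked_sum(data, mask):
    return sum(d for d, m in zip(data, mask) if not m)

masked_sum([1, 2, 3], [False, True, False])   # -> 4
```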
Gary Strangman writes:
[...]
> Given I'm still fuzzy on all the distinctions, perhaps someone could try to
> help me (and others?) to define all 4 logical possibilities ... some may be
> obvious dead-ends. I'll take a stab at them, but these should definitely get
> edited by others:
> dest
On Friday, November 4, 2011, Gary Strangman wrote:
>
>> > non-destructive+propagating -- it really depends on exactly what
>> > computations you want to perform, and how you expect them to work. The
>> > main difference is how reduction operations are treated. I kind of
>> > feel like the non-prop
Gary Strangman writes:
> For the non-destructive+propagating case, do I understand correctly that
> this would mean I (as a user) could temporarily decide to IGNORE certain
> portions of my data, perform a series of computation on that data, and the
> IGNORED flag (or however it is implemented)
> non-destructive+propagating -- it really depends on exactly what
> computations you want to perform, and how you expect them to work. The
> main difference is how reduction operations are treated. I kind of
> feel like the non-propagating version makes more sense overall, but I
> don't know if
On Nov 03, 2011, at 23:07 , Joe Kington wrote:
> I'm not sure if this is exactly a bug, per se, but it's a very confusing
> consequence of the current design of masked arrays…
I would just add an "I think" between the "but" and "it's" before I could agree.
> Consider the following example:
>
>