Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Thu, May 19, 2016 at 9:30 PM, Nathaniel Smith  wrote:

> So I guess what makes this tricky is that:
>
> - We want the behavior to be the same for multiple-element arrays,
> single-element arrays, zero-dimensional arrays, and scalars -- the
> shape of the data shouldn't affect the semantics of **
>
> - We also want the numpy scalar behavior to match the Python scalar
> behavior
>
> - For Python scalars, int ** (positive int) returns an int, but int **
> (negative int) returns a float.
>
> - For arrays, int ** (positive int) and int ** (negative int) _have_
> to return the same type, because in general output types are always a
> function of the input types and *can't* look at the specific values
> involved, and specifically because if you do array([2, 3]) ** array([2,
> -2]) you can't return an array where the first element is int and the
> second is float.
>
> Given these immutable and contradictory constraints, the least bad
> option IMHO would be that we make int ** (negative int) an error in
> all cases, and the error message can suggest that instead of writing
>
> np.array(2) ** -2
>
> they should instead write
>
> np.array(2) ** -2.0
>
> (And similarly for np.int64(2) ** -2 versus np.int64(2) ** -2.0.)
>
> Definitely annoying, but all the other options seem even more
> inconsistent and confusing, and likely to encourage the writing of
> subtly buggy code...
>
> (I especially have in mind numpy's habit of silently switching between
> scalars and zero-dimensional arrays -- so it's easy to write code that
> you think handles arbitrary array dimensions, and it even passes all
> your tests, but then it fails when someone passes in a different shape
> data and triggers some scalar/array inconsistency. E.g. if we make **
> -2 work for scalars but not arrays, then this code:
>
> def f(arr):
> return np.sum(arr, axis=0) ** -2
>
> works as expected for 1-d input, tests pass, everyone's happy... but
> as soon as you try to pass in higher dimensional integer input it will
> fail.)
>
>
Hmm, the Alexandrian solution. The main difficulty with this solution is
that it will likely break working code. We could try it, or take the safer
route of raising a (Visible)DeprecationWarning. The other option is to
simply treat the negative power case uniformly as floor division and raise
an error on division by zero, but the difference from Python's power
operator would be highly confusing. I think I would vote for the second
option with a DeprecationWarning.



Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Fri, May 20, 2016 at 12:35 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:

>
>
> [...]
>
>
I suspect that the different behavior of int64 on my system is due to
inheritance from Python 2.7 int:

In [1]: isinstance(int64(1), int)
Out[1]: True

That difference in behavior also carries over to Python 3.

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Nathaniel Smith
On Fri, May 20, 2016 at 11:35 AM, Charles R Harris
 wrote:
>
> [...]
>
> Hmm, the Alexandrian solution. The main difficulty with this solution that
> this will likely to break working code. We could try it, or take the safe
> route of raising a (Visible)DeprecationWarning.

Right, sorry, I was talking about the end goal -- there's a separate
question of how we get there. Pretty much any solution is going to
require some sort of deprecation cycle though I guess, and at least
the deprecate -> error transition is a lot easier than the working ->
working different transition.

> The other option is to
> simply treat the negative power case uniformly as floor division and raise
> an error on zero division, but the difference from Python power would be
> highly confusing. I think I would vote for the second option with a
> DeprecationWarning.

So "floor division" here would mean that k ** -n == 0 for all k and n
except for k == 1, right? In addition to the consistency issue, that
doesn't seem like a behavior that's very useful to anyone...
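Spelled out, the floor-division semantics under discussion would amount to
something like this helper (`int_floor_pow` is hypothetical, not a NumPy
function):

```python
def int_floor_pow(k, n):
    # Hypothetical semantics: negative integer powers floor to an
    # integer, the way Python's // operator does for division.
    if n >= 0:
        return k ** n
    if k == 0:
        raise ZeroDivisionError("0 cannot be raised to a negative power")
    return 1 // k ** (-n)

print(int_floor_pow(2, -2))   # 0  (1 // 4)
print(int_floor_pow(1, -5))   # 1
print(int_floor_pow(-1, -3))  # -1
```

One extra wrinkle: with Python's flooring `//`, a negative base such as
`int_floor_pow(-2, -1)` gives -1 rather than 0; a truncating division
would give 0.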

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Fri, May 20, 2016 at 1:15 PM, Nathaniel Smith  wrote:

> [...]
>
> So "floor division" here would mean that k ** -n == 0 for all k and n
> except for k == 1, right? In addition to the consistency issue, that
> doesn't seem like a behavior that's very useful to anyone...
>

And k == -1 as well. The virtue is consistency while deprecating. Or we could
just back out the current changes in master and throw in deprecation
warnings. That has the virtue of simplicity and of not introducing possible
code breaks.

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread josef.pktd
On Fri, May 20, 2016 at 3:23 PM, Charles R Harris  wrote:

>
> [...]
>
> And -1 as well. The virtue is consistancy while deprecating. Or we could
> just back out the current changes in master and throw in deprecation
> warnings. That has the virtue of simplicity and not introducing possible
> code breaks.
>


can numpy cast to float by default for power or **?

At least then we always get correct numbers.

Are there dominant use cases that require a default return dtype of int?
AFAICS, it's always possible to choose the return dtype in np.power.
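For reference, ufuncs (including np.power) accept a dtype argument, so
strict control of the result dtype is already possible:

```python
import numpy as np

# The dtype argument selects the computation/result type explicitly:
print(np.power(2, 3))                     # 8, stays integer
print(np.power(2, -2, dtype=np.float64))  # 0.25, computed in float64
```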

Josef




Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Alan Isaac

On 5/19/2016 11:30 PM, Nathaniel Smith wrote:

[...]




Fwiw, Haskell has three exponentiation operators
because of such ambiguities.  I don't use C, but
I think the contrasting decision there was to
always return a double, which has a clear attraction
since for any fixed-width integral type, most of the
possible input pairs overflow the type.

My core inclination would be to use (what I understand to be)
the C convention that integer exponentiation always produces
a double, but to support dtype-specific exponentiation with
a function.  But this is just a user's perspective.

Cheers,
Alan Isaac



Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Warren Weckesser
On Fri, May 20, 2016 at 4:22 PM, Alan Isaac  wrote:

> [...]
>
> Fwiw, Haskell has three exponentiation operators
> because of such ambiguities.  I don't use C, but
> I think the contrasting decision there was to
> always return a double, which has a clear attraction
> since for any fixed-width integral type, most of the
> possible input pairs overflow the type.
>
> My core inclination would be to use (what I understand to be)
> the C convention that integer exponentiation always produces
> a double, but to support dtype-specific exponentiation with
> a function.



C doesn't have an exponentiation operator.  The C math library has pow,
powf and powl, which (like any C functions) are explicitly typed.
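Python's own math.pow mirrors the C double version: the result is always a
float, whatever the input types:

```python
import math

# math.pow wraps C's pow() for doubles, so integer inputs still
# produce a float result:
print(math.pow(2, 10))   # 1024.0
print(math.pow(2, -2))   # 0.25
```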

Warren


> But this is just a user's perspective.
>
> Cheers,
> Alan Isaac


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread josef.pktd
On Fri, May 20, 2016 at 4:27 PM, Warren Weckesser <
warren.weckes...@gmail.com> wrote:

>
> [...]
>

another question

uints are non-negative, so the negative-power problem doesn't show up. So
that could still handle a use case for ints.

I'm leaning more strongly toward float, because raising an exception (or
worse, returning nonsense) in half of the parameter space sounds ... (maybe
kind of silly)

Josef


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Marten van Kerkwijk
Hi All,

As a final desired state, always returning float seems the best idea. It
seems quite similar to division in this regard, where integer division
works for some values, but not for all. This means not being quite
consistent with Python, but as Nathaniel pointed out, one cannot have
value-dependent dtypes for arrays (and scalars should indeed behave the
same way).

If so, then like for division, in-place power should raise a `TypeError`.

Obviously, one could have a specific function for integers (like `//` for
division) for cases where it is really needed.

Now, how to get there...  Maybe we can learn from division? At least, I
guess at some point `np.divide` became equivalent to `np.true_divide`?
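The division analogy can be checked directly (a sketch against current
NumPy behavior):

```python
import numpy as np

# True division of integer arrays already returns float
# unconditionally, whatever the values are:
a = np.array([1, 2, 4])
print((a / 2).dtype)   # float64

# ...and in-place true division of an int array raises, which is the
# behavior proposed here for in-place integer ** (negative int):
try:
    a /= 2
except TypeError:
    print("in-place /= on an int array: TypeError")

# np.divide did indeed become true division; np.true_divide is now
# just an alias for the same ufunc:
assert np.divide is np.true_divide
```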

All the best,

Marten


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Alan Isaac

Yes, I was referring to `pow`,
but I had in mind the C++ version,
which is overloaded:
http://www.cplusplus.com/reference/cmath/pow/

Cheers,
Alan


On 5/20/2016 4:27 PM, Warren Weckesser wrote:

C doesn't have an exponentiation operator.  The C math library has pow, powf 
and powl, which (like any C functions) are explicitly typed.




Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Nathaniel Smith
On May 20, 2016 12:44 PM,  wrote:
[...]
>
> can numpy cast to float by default for power or **?

Maybe? The question is whether there are any valid use cases for getting
ints back:

>>> np.array([1, 2, 3]) ** 2
array([1, 4, 9])

It's not 100% obvious to me but intuitively this seems like an operation
that we probably want to support? Especially since there's a reasonable
range of int64 values that can't be represented exactly as floats.
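A concrete instance of that range (assuming int64 and float64):

```python
import numpy as np

# Above 2**53, consecutive integers stop being representable in
# float64, so forcing ** results to float would silently lose
# precision for large int64 values:
big = 2 ** 53
assert float(big) == big              # 2**53 itself is exact
assert float(big + 1) == float(big)   # 2**53 + 1 rounds back down

print(np.int64(big) + np.int64(1))    # 9007199254740993 (exact)
print(np.float64(big) + 1.0)          # 9007199254740992.0
```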

-n


[Numpy-discussion] A numpy based Entity-Component-System

2016-05-20 Thread Elliot Hallmark
I have a Data Oriented programming library I'm writing that uses the
Entity-Component-System model.

https://github.com/Permafacture/data-oriented-pyglet

I have initially called it Numpy-ECS but I don't know if that name is
okay.  The numpy license says:

Neither the name of the NumPy Developers nor the names of any contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.

but doesn't say anything about using the name of NumPy. Is this okay?

For anyone interested in what this project is, I suggest looking at the
project README.  Briefly, a Component is the Data Oriented version of what
Object Oriented calls an attribute (a Numpy array of values that represents
an attribute that Entities might have).  An Entity is the Data Oriented
version of an object instance (something subscribed to some subset of
Components, with an ID that can be used to access each instance's values in
those components).  A System is like an object method (a function that
operates on the Components of all Entities that have that Component).

This is something that one would use in a game engine or simulation.  It
allows you to have instances, but the logic can be bulk-applied to all
involved instances through operations on slices of NumPy arrays.  In an
example shown in this video, I went from 50 FPS with an object-oriented
approach to 180 space steps per second on the physics and 500-600 FPS in
the rendering, using ufuncs operating on shared memory in multiprocessing.
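As a toy sketch of the idea (the names here — `World`, `add_entity`,
`move_system` — are illustrative only, not the Numpy-ECS API): components
live in contiguous NumPy arrays, entities are row indices, and a system is
one vectorized update over every live entity.

```python
import numpy as np

class World:
    def __init__(self, capacity):
        # Each component is a column block in a contiguous NumPy array.
        self.position = np.zeros((capacity, 2))
        self.velocity = np.zeros((capacity, 2))
        self.alive = np.zeros(capacity, dtype=bool)
        self._next_id = 0

    def add_entity(self, pos, vel):
        # An "entity" is just an index into the component arrays.
        eid = self._next_id
        self._next_id += 1
        self.position[eid] = pos
        self.velocity[eid] = vel
        self.alive[eid] = True
        return eid

    def move_system(self, dt):
        # A "system": one vectorized update over all live entities,
        # instead of a per-instance method call.
        live = self.alive
        self.position[live] += self.velocity[live] * dt

w = World(100)
a = w.add_entity([0.0, 0.0], [1.0, 0.0])
b = w.add_entity([5.0, 5.0], [0.0, -1.0])
w.move_system(dt=0.5)
print(w.position[a])  # entity a has moved to (0.5, 0.0)
```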

Thanks for reading,
  Elliot


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread josef.pktd
On Fri, May 20, 2016 at 6:54 PM, Nathaniel Smith  wrote:

> On May 20, 2016 12:44 PM,  wrote:
> [...]
> >
> > can numpy cast to float by default for power or **?
>
> Maybe? The question is whether there are any valid use cases for getting
> ints back:
>
> >>> np.array([1, 2, 3]) ** 2
> array([1, 4, 9])
>
> It's not 100% obvious to me but intuitively this seems like an operation
> that we probably want to support? Especially since there's a reasonable
> range of int64 values that can't be represented exactly as floats.
>

It would still be supported by the np.power function with the dtype keyword
for users who want to strictly control the dtype.

The question is mainly about the operator **, which doesn't have options,
and there I think it's more appropriate for users who want correct numbers
but don't necessarily want to (or know to) watch out for the dtype.


Related: Python 3.4 returns complex for (-1)**0.5 while numpy returns nan.
That's a similar case of upcasting if the result doesn't fit.
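The contrast, side by side:

```python
import numpy as np

# Python 3 upcasts to complex when a float power has no real result:
print((-1) ** 0.5)          # (6.123233995736766e-17+1j)

# NumPy stays within the float dtype and returns nan instead:
with np.errstate(invalid="ignore"):
    print(np.array(-1.0) ** 0.5)   # nan
```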

Long term, I think it would still be nice if numpy could do value-dependent
type promotion, e.g. when a value is encountered that doesn't fit, upcast
the result at the cost of possibly having to copy the existing results.
In the current setting the user has to decide in advance what dtype might
be required. (Of course I have no idea about the technical problems or
computational cost of this.)

(Julia's dispatch seems to make it easier to construct new types, e.g. we
could have a flexible dtype that is free to upcast to whatever is required
for the calculation. Just guessing.)


practicality:
going from int to float is a common use case, and we would expect to get
the correct numbers: 2**(-2) -> promote

complex is in most fields an unusual outcome for integer or float
calculations (e.g. the Box-Cox transformation for x > 0); suddenly having
complex numbers is weird, while getting nans is the standard float
response -> don't promote


I'm still largely in the Python 2.x habit of adding a decimal point to
numbers or a redundant `* 1.` in my code to avoid integer division or other
weirdness. So I never realized that ** in numpy doesn't always promote to
float, which I kind of thought it did. Maybe it's not yet time to drop all
the decimal points or `* 1.` from the code?


Josef



> -n


Re: [Numpy-discussion] A numpy based Entity-Component-System

2016-05-20 Thread Nathaniel Smith
On May 20, 2016 4:24 PM, "Elliot Hallmark"  wrote:
>
> [...]
>
> I have initially called it Numpy-ECS but I don't know if that name is
okay.  The numpy license says:
>
> Neither the name of the NumPy Developers nor the names of any
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
>
> but doesn't say anything about using the name of NumPy. Is this okay?

Legally speaking, I'm pretty sure no one is going to sue you. That clause
in the license is generally taken to be mostly meaningless, because lying
and claiming that so-and-so endorses my project when they don't is
generally going to get me into legal trouble anyway, so the license text is
redundant. The main thing that would legally control use of the name
"numpy" is if we had a trademark on it, and I'm pretty sure no one has
claimed or registered one of those.

But legal issues aside, it'd probably be better to give your software a
more unique name? "Numpy-ECS" is potentially confusing (are we going to get
bug reports filed on it because it says "numpy"?), and, well, it's kind of
boring and generic, don't you think? Like naming your child
"Human-legsarms"?

-n


Re: [Numpy-discussion] FFT and reconstruct

2016-05-20 Thread Vasco Gervasi
Maybe I found the problems:

1. t0=1.0, t1=3.0, y['1'] = cos(1.0*omega*t): I have to reconstruct the
signal using

>  yRec += a * cos(omega*i*(t-t0) + f)

not

>  yRec += a * cos(omega*i*t + f)


2.  t0=2, t1=3, y['Signal'] = 1.0*cos(1.0*omega*t) + ... +
5.0*cos(5.0*omega*t) + 1.0: the sample at the end point must not repeat
the starting point, so to generate the signal I have to use

> t = linspace(t0, t1, 1000, endpoint=False)

not

> t = linspace(t0, t1, 1000)


Thanks