[Numpy-discussion] Documentation Team meeting - Monday June 8th

2020-06-05 Thread Melissa Mendonça
Hi all!

A reminder that on Monday, June 8, we have another documentation team
meeting at 3PM UTC**. If you wish to join on Zoom, you need to use this
link

https://zoom.us/j/420005230

Here's the permanent hackmd document with the meeting notes:

https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg


Hope to see you around (especially if you want to introduce yourself or
discuss ideas for Google Season of Docs).

** You can click this link to get the correct time at your timezone:
https://www.timeanddate.com/worldclock/fixedtime.html?msg=NumPy+Documentation+Team+Meeting&iso=20200608T15&p1=1440&ah=1

- Melissa
___
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Call for expertise: Blocked iteration

2020-06-05 Thread Sebastian Berg
Hi all,

I am curious about exploring whether or not we could add simple blocked
iteration to NumPy.
It seems like a long-standing small deficiency in NumPy that we do not
support blocked iteration.  I do not know how much speed gain we would
actually see in real-world code, but I assume some bad-memory-order
copies could be drastically faster.
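The bad-memory-order case can be sketched by hand in Python (array and
block sizes here are arbitrary; this only illustrates the access pattern
a blocked iterator would use, not any proposed implementation):

```python
import numpy as np

# A transposed view walks memory with a large stride on the innermost
# axis -- the "bad memory order" case where blocked iteration should help.
a = np.arange(1024 * 1024, dtype=np.float64).reshape(1024, 1024)
at = a.T  # same data, strides swapped

# Straight elementwise copy order (what a plain iterator does):
c_naive = np.ascontiguousarray(at)

# Hand-blocked copy: process square tiles that fit in cache.
block = 128
c_blocked = np.empty_like(c_naive)
for i in range(0, at.shape[0], block):
    for j in range(0, at.shape[1], block):
        c_blocked[i:i + block, j:j + block] = at[i:i + block, j:j + block]

assert np.array_equal(c_naive, c_blocked)
```

The blocked version touches each cache line of the source only once per
tile rather than once per element, which is where the speedup would
come from in compiled code.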

Implementing blocked iteration for NumPy seems pretty complicated at
first sight, due to the complexity of the iterator and the fact that
almost no one knows the code well or touches it regularly.
But the actual complexity of adding a new iteration mode to it is
probably not prohibitively high. [1]

First, we need to (quickly) find the cases where blocked iteration
makes sense, and then, if it does, store whatever additional metadata
is necessary.
Second, we need to provide a newly implemented `iternext` function.

The first chunk seems like it can be done in its own function and
should be fairly straightforward to do after most of the iterator
setup is done, while the second part matches how the iterator is
already designed.
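The two steps above can be sketched in pure Python (the names here are
illustrative only, not a proposed API): the setup step decides the tile
shape and stores it as metadata, and iteration then just advances
through the precomputed tiles.

```python
import itertools
import numpy as np

def block_slices(shape, block):
    """Yield tuples of slices covering `shape` in tiles of size `block`.

    A stand-in for what a blocked `iternext` would do: the setup step
    fixes the tile shape, and each call advances to the next tile.
    """
    ranges = [range(0, n, block) for n in shape]
    for corner in itertools.product(*ranges):
        yield tuple(slice(c, c + block) for c in corner)

a = np.arange(36.0).reshape(6, 6)
total = 0.0
for sl in block_slices(a.shape, 3):
    total += a[sl].sum()   # operate on one cache-friendly tile at a time
assert total == a.sum()
```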

It would be helpful to have someone with some expertise/background in
this type of thing to discuss trade-offs and quickly see what the main
goals should be and whether/where significant performance increases
are likely.

There are some things that may end up being complicated, for example
supporting reductions/broadcasting. However, it may well be that there
is no reason to attack those more complex cases, because the largest
gains are expected elsewhere in any case.

I would be extremely happy if anyone with the necessary background is
interested in giving this challenge a try.  I can help with the
NumPy-API side and code review, and would be available for
chatting/helping with the NumPy side. I do not have the bandwidth to
actually dive into this for real, though.

Cheers,

Sebastian


[1] That is the complexity concerning the NumPy API. I do not know how
complex a blocked iterator itself is.





[Numpy-discussion] introducing autoreg and autoregnn

2020-06-05 Thread rondall jones
Hello! I have supported constrained solvers for linear matrix problems for
about 10 years in C++, but have now switched to Python. I am going to submit a
couple of new routines for linalg called autoreg(A,b) and autoregnn(A,b). They
work just like lstsq(A,b) normally, but when they detect that the problem is
dominated by noise they revert to an automatic regularization scheme that
returns a better-behaved result than one gets from lstsq. In addition,
autoregnn enforces a nonnegativity constraint on the solution.

I have put on my web site a slightly fuller-featured version of these same two
algorithms, using a class implementation to facilitate returning several
diagnostic and other artifacts. The web site contains tutorials on these
methods and a number of examples of their use. See
http://www.rejones7.net/autorej/ . I hope this community can take a look at
these routines and see whether they are appropriate for linalg or should be in
another location.

Ron Jones
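For readers unfamiliar with the idea, the automatic-regularization
fallback can be sketched generically. This is a plain Tikhonov (ridge)
scheme for illustration only, not the autoreg algorithm itself;
`regularized_lstsq` and the choice of `lam` are hypothetical:

```python
import numpy as np

def regularized_lstsq(A, b, lam):
    """Tikhonov-regularized least squares: min ||Ax - b||^2 + lam^2 ||x||^2.

    Solved by the standard augmented-system trick: stack lam*I under A
    and zeros under b, then solve with an ordinary lstsq call.
    """
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned system with a noisy right-hand side:
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8)   # nearly rank-deficient
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(20)

x_plain, *_ = np.linalg.lstsq(A, b, rcond=None)
x_reg = regularized_lstsq(A, b, lam=1e-2)
# Regularization trades a little bias for a smaller, better-behaved solution.
assert np.linalg.norm(x_reg) <= np.linalg.norm(x_plain)
```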




Re: [Numpy-discussion] Numpy FFT normalization options issue (addition of new option)

2020-06-05 Thread Ross Barnowski
On Thu, Jun 4, 2020 at 7:05 AM cvav  wrote:

> Issue #16126: https://github.com/numpy/numpy/issues/16126
> PR #16476: https://github.com/numpy/numpy/pull/16476
> numpy.fft docs (v1.18):
> https://numpy.org/doc/1.18/reference/routines.fft.html
>
> Hello all,
> I was advised to write on the numpy mailing list, after this week's
> community meeting led to some general discussions on the normalization
> schemes used in the FFT functions.
>
> My post has to do with issue #16126, which asks for the addition of a new
> option for the "norm" argument for the FFT functions; "norm" controls the
> way the forward (direct) and backward (inverse) transforms are normalized,
> and the two currently supported options are "norm=None" (default) and
> "norm=ortho". The "ortho" option uses the orthonormal Fourier basis
> functions, which translates to both the forward and backward transforms
> being scaled by 1/sqrt(n), where n is the number of Fourier modes (and data
> points). The default "None" option scales the forward transform by 1
> (unscaled) and the backward by 1/n.
>
> The new added option, called for now "norm=inverse", is the exact opposite
> of the default option; i.e. it scales the forward transform by 1/n and the
> backward by 1. In terms of using the FFT for spectral methods or
> approximation problems, these are the three scaling schemes one encounters;
> the transform itself is the same, with only a constant factor being the
> difference. But having all three scaling options built into the fft and ifft
> functions makes the code cleaner and makes it easier to stay consistent.
>
> I've submitted a PR for this change, and would be happy to get comments and
> feedback on the implementation and anything else that hasn't been
> considered. Thanks!
>
> Chris
>
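For concreteness, the three conventions described above can be checked
numerically with current NumPy; the proposed scaling is applied by hand
here, since "inverse" is not an existing keyword:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(8)
n = len(x)

X_default = np.fft.fft(x)                 # "None": forward unscaled
X_ortho = np.fft.fft(x, norm="ortho")     # both directions scaled by 1/sqrt(n)
assert np.allclose(X_ortho, X_default / np.sqrt(n))

# Proposed "inverse" option: forward scaled by 1/n, backward unscaled.
X_inv = X_default / n
x_back = np.fft.ifft(X_inv) * n           # undo ifft's built-in 1/n scaling
assert np.allclose(x_back, x)
```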

This seems reasonable; in fact, the addition of this normalization option
was discussed in #2142, but there doesn't seem to have been a compelling
use-case at that time.

One potential issue that stood out to me was the name of the new keyword
option. At face value, "inverse" seems like a poor choice given the
established use of the term in Fourier analysis. For example, one might
expect `norm="inverse"` to mean that scaling is applied to the ifft, when
this is actually the current default.

Ross Barnowski



Re: [Numpy-discussion] Numpy FFT normalization options issue (addition of new option)

2020-06-05 Thread cvav
Ross Barnowski wrote
> One potential issue that stood out to me was the name of the new keyword
> option. At face value, "inverse" seems like a poor choice given the
> established use of the term in Fourier analysis. For example, one might
> expect `norm="inverse"` to mean that scaling is applied to the ifft, when
> this is actually the current default.

Yes, that's true; the keyword argument name "inverse" is certainly
something I don't feel sure about. It'd be nice if everyone interested
could suggest names that make sense to them and explain the rationale
behind them, so that we pick something that's as self-explanatory as
possible.

My thinking was to indicate that it's the opposite scaling to the
default option "None", so something like "opposite" or "reversed" could
be another choice. Otherwise, we can find something that directly
describes the scaling rather than its relationship to the default
option.

Chris





Re: [Numpy-discussion] introducing autoreg and autoregnn

2020-06-05 Thread Ralf Gommers
On Fri, Jun 5, 2020 at 9:48 PM rondall jones  wrote:

> Hello! I have supported constrained solvers for linear matrix problems for
> about 10 years in C++, but have now switched to Python. I am going to
> submit a couple of new routines for linalg called autoreg(A,b) and
> autoregnn(A,b). They work just like lstsq(A,b) normally, but when they
> detect that the problem is dominated by noise they revert to an automatic
> regularization scheme that returns a better behaved result than one gets
> from lstsq. In addition, autoregnn enforces a nonnegativity constraint on
> the solution. I have put on my web site a slightly fuller featured version
> of these same two algorithms, using a Class implementation to facilitate
> retuning several diagnostic or other artifacts. The web site contains
> tutorials on these methods and a number of examples of their use. See
> http://www.rejones7.net/autorej/ . I hope this community can take a look
> at these routines and see whether they are appropriate for linalg or should
> be in another location.
>

Hi Ron, thanks for proposing this. It seems out of scope for NumPy;
scipy.linalg or scipy.optimize seem like the most obvious candidates.

If you propose inclusion into SciPy, it would be good to discuss whether
the algorithm is based on a publication showing usage via citation stats or
some other way. There are more details at
http://scipy.github.io/devdocs/dev/core-dev/index.html#deciding-on-new-features

Cheers,
Ralf


Re: [Numpy-discussion] introducing autoreg and autoregnn

2020-06-05 Thread Joseph Fox-Rabinovitz
Rondall,

Are you familiar with the lmfit project? I am not an expert, but it seems
like your algorithms may be useful there. I recommend checking with Matt
Newville via the mailing list.

Regards,

Joe


On Fri, Jun 5, 2020, 17:00 Ralf Gommers  wrote:

>
> [...]