Hi Martin,
I agree it is a long-standing issue, and I was reminded of it by your
comment. I have a draft PR at https://github.com/numpy/numpy/pull/25476
that does not change the old behaviour, but allows you to pass in a
start-stop array which behaves more sensibly (exact API TBD).
Please have a look.
Hi Sebastian,
> That looks nice, I don't have a clear feeling on the order of items, if
> we think of it in terms of `(start, stop)` there was also the idea
> voiced to simply add another name in which case you would allow start
> and stop to be separate arrays.
Yes, one could add another method.
Hi All,
Thanks for the comments on complex sign - it seems there is good support
for it.
On copysign, currently it is not supported for complex values at all. I
think given the responses so far, it looks like we should just keep it
like that; although my extension was fairly logical, I cannot sa
Hi All,
I have a PR [1] that adds `np.matvec` and `np.vecmat` gufuncs for
matrix-vector and vector-matrix calculations, to add to plain
matrix-matrix multiplication with `np.matmul` and the inner vector
product with `np.vecdot`. They call BLAS where possible for speed.
I'd like to hear whether th
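As a sketch of the intended semantics, the two operations can be written with `einsum` (illustrative shapes, not the PR's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3, 4))   # stack of two 3x4 matrices
v = rng.normal(size=(2, 4))      # stack of two 4-vectors
w = rng.normal(size=(2, 3))      # stack of two 3-vectors

# matvec: matrix times vector, one result vector per stack entry
matvec = np.einsum('...ij,...j->...i', A, v)          # shape (2, 3)

# vecmat: conjugated vector times matrix, i.e., x† A
vecmat = np.einsum('...i,...ij->...j', w.conj(), A)   # shape (2, 4)

assert matvec.shape == (2, 3) and vecmat.shape == (2, 4)
```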
> For dot product I can convince myself this is a math definition thing and
> accept the
> conjugation. But for "vecmat" why the complex conjugate of the vector? Are we
> assuming that
> 1D things are always columns. I am also a bit lost on the difference of dot,
> vdot and vecdot.
>
> Also if _
>> I also note that for complex numbers, `vecmat` is defined as `x†A`,
>> i.e., the complex conjugate of the vector is taken. This seems to be the
>> standard and is what we used for `vecdot` too (`x†x`). However, it is
>> *not* what `matmul` does for vector-matrix or indeed vector-vector
>> produc
> I can understand the desire to generalise the idea of matrix
> multiplication for when the arrays are not both 2-D but taking the
> complex conjugate makes absolutely no sense in the context of matrix
> multiplication.
>
> You note above that "vecmat is defined as x†A" but my interpretation
> of
> Why do these belong in NumPy? What is the broad field of application of these
> functions? And,
> does a more general concept underpin them?
Multiplication of a matrix with a vector is about as common as matrix
with matrix or vector with vector, and not currently easy to do for
stacks of vector
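To make the awkwardness concrete: with only `matmul`, applying a stack of matrices to a stack of vectors requires adding and stripping a dummy axis by hand (illustrative shapes):

```python
import numpy as np

A = np.ones((5, 3, 4))   # stack of five 3x4 matrices
x = np.ones((5, 4))      # stack of five 4-vectors

# with plain matmul one must insert and remove a length-1 axis:
y = (A @ x[..., None])[..., 0]
assert y.shape == (5, 3)
```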
> FWIW, +1 for matvec & vecmat to complement matmat (erm, matmul). Having a
> binop where one argument is a matrix and the other is a
> stack/batch of vectors is indeed awkward otherwise, and a dedicated function
> to clearly distinguish "two matrices" from "a matrix and a
> batch of vectors" sou
> Could you please offer some code or math notation to help communicate this?
> I am forced to guess at the need.
>
> The words "matrix" and "vector" are ambiguous.
> After all, matrices (of given shape) are a type of vector (i.e., can be added
> and scaled.)
> So if by "matrix" you mean "2d array
Hi Alan,
The problem with .dot is not that it is not possible, but more that it
is not obvious exactly what will happen given the overloading of
multiple use cases; indeed, this is why `np.matmul` was created. For
the stacks of vectors case in particular, it is surprising that the
vector dimensio
> I tend to agree with not using the complex conjugate for vecmat, but would
> prefer having
> separate functions for that that make it explicit in the name. I also note
> that mathematicians
> use sesquilinear forms, which have the vector conjugate on the other side, so
> there are
> different
> For my own work, I required the intersect1d function to work on multiple
> arrays while returning the indices (using `return_indices=True`).
> Consequently I changed the function in numpy and now I am seeking
> feedback from the community.
>
> This is the corresponding PR: https://github.com/n
Hi Oyibo,
> I'm proposing the introduction of a `pipe` method for NumPy arrays to enhance
> their usability and expressiveness.
I think it is an interesting idea, but agree with Robert that it is
unlikely to fly on its own. Part of the logic of even frowning on
methods like .mean() and .sum() i
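For context, the proposal presumably mirrors pandas' `DataFrame.pipe`; a minimal standalone sketch (the name `pipe` and its semantics here are my assumption of what such a method would do, not the actual proposal):

```python
import numpy as np

def pipe(arr, func, *args, **kwargs):
    """Apply func(arr, *args, **kwargs); lets calls read left-to-right."""
    return func(arr, *args, **kwargs)

a = np.arange(4.0)

# nested calls ...
nested = np.sqrt(np.clip(a, 1, None))
# ... would instead read as a chain:
chained = pipe(pipe(a, np.clip, 1, None), np.sqrt)

assert np.allclose(nested, chained)
```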
> What were your conclusions after experimenting with chained ufuncs?
>
> If the speed is comparable to numexpr, wouldn’t it be `nicer` to have
> non-string input format?
>
> It would feel a bit less like a black-box.
I haven't gotten further with it yet; it is just some toying around I've
been
> From my experience, calling methods is generally faster than
> functions. I figure it is due to having less overhead figuring out the
> input. Maybe it is not significant for large data, but it does make a
> difference even when working for medium sized arrays - say float size
> 5000.
>
> %timei
> One more thing to mention on this topic.
>
> From a certain size dot product becomes faster than sum (due to
> parallelisation I guess?).
>
> E.g.
> def dotsum(arr):
>     a = arr.reshape(1000, 100)
>     return a.dot(np.ones(100)).sum()
>
> a = np.ones(10)
>
> In [45]: %timeit np.add.reduce
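The quoted comparison can be reproduced self-containedly (the array size is an assumption, since the quoted snippet was truncated):

```python
import numpy as np

arr = np.ones(100_000)  # assumed size; reshape(1000, 100) needs 100000 elements

def dotsum(arr):
    a = arr.reshape(1000, 100)
    return a.dot(np.ones(100)).sum()

# both compute the same sum; dot may dispatch to threaded BLAS for large inputs
assert np.isclose(dotsum(arr), np.add.reduce(arr))
```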
> Also, can’t get __array_wrap__ to work. The arguments it receives after
> __iadd__ are all
> post-operation. Decided not to do it this way this time so not to hardcode
> such functionality
> into the class, but if there is a way to robustly achieve this it would be
> good to know.
It is non-t
Hi All,
I agree with Dan that the actual contributions to the documentation are
of little value: it is not easy to write good documentation, with
examples that show not just the mechanics but the purpose of the
function, i.e., go well beyond just showing some random inputs and
outputs. And poorl
I think a NEP is a good idea. It would also seem to make sense to
consider how the dtype itself can hold/calculate this type of
information, since that will be the only way a generic ``info()``
function can get information for a user-defined dtype. Indeed, taking
that further, might a method or p
Hi Olivier,
Thanks for bringing this up here! As stated by Chuck in the issue
leading to the PR,
https://github.com/numpy/numpy/issues/26401
there are real risks with using natural coefficients for polynomials:
we really want to move people away from situations in which large
cancellation error
Hi All,
Following Nathaniel's request, I have made a PR that changes the
original NEP to describe the current implementation.
* PR at https://github.com/charris/numpy/pull/9
* Rendered relevant page at
http://www.astro.utoronto.ca/~mhvk/numpy-doc/neps/ufunc-overrides.html
It may still be somewhat
Hi Nathan,
That is a good point: Yes, one can leave __array_prepare__ and
__array_wrap__ in place: only for ufuncs will they be ignored if
__array_ufunc__ is present; __array_wrap__ in particular will still be
used by quite a lot of other numpy functions (other use of
__array_prepare__ is usually
Discussion on-going at the above issue, but perhaps worth mentioning
more broadly the alternative of adding a slice argument (or start,
stop, step arguments) to ufunc.reduce, which would mean we can just
deprecate reduceat altogether, as most use of it would just be
add.reduce(array, slice=slice(i
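Concretely, most `reduceat` calls are segment sums that could equally be spelled with slices (a sketch of the equivalence being discussed; the proposed `slice` argument itself does not exist):

```python
import numpy as np

a = np.arange(10)
segments = [0, 4, 7]
out = np.add.reduceat(a, segments)   # sums a[0:4], a[4:7], a[7:]

# the same result, written with explicit slices:
bounds = segments + [len(a)]
expected = [a[i:j].sum() for i, j in zip(bounds, bounds[1:])]

assert out.tolist() == expected   # [6, 15, 24]
```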
Hi All,
I think Nathaniel had a good summary. My own 2¢ are mostly about the
burden of supporting python2. I have only recently attempted to make
changes in the C codebase of numpy and one of the reasons I found this
more than a little daunting is the complex web of include files. In
this respect,
> I suggest a new data type 'text[encoding]', 'T'.
I like the suggestion very much (it is even in between S and U!). The
utf-8 manifesto linked to above convinced me that the number that
should follow is the number of bytes, which is nicely consistent with
use in all numerical dtypes.
Anyway, m
Hi All,
Do indeed try __array_ufunc__! It should make many things work much
better and possibly faster than was possible with __array_prepare__
and __array_wrap__ (for astropy's Quantity, an ndarray subclass that I
maintain, it gets us a factor of almost 2 in speed for operations
where scaling for
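A minimal subclass illustrating the mechanism (a toy sketch, not Quantity itself):

```python
import numpy as np

class Tagged(np.ndarray):
    """Toy subclass: intercept every ufunc call via __array_ufunc__."""

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # unwrap Tagged inputs, defer to the plain ndarray machinery
        args = [np.asarray(x) if isinstance(x, Tagged) else x for x in inputs]
        result = getattr(ufunc, method)(*args, **kwargs)
        # re-wrap ndarray results so the subclass propagates
        if isinstance(result, np.ndarray):
            return result.view(Tagged)
        return result

t = np.arange(3.0).view(Tagged)
assert isinstance(np.add(t, 1), Tagged)
assert isinstance(t * t, Tagged)
```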
Hi Marc,
ufuncs are quite tricky to compile. Part of your problem is that, I
think, you started a bit too high up: `divmod` is also a binary
operation, so that part you do not need at all. It may be an idea to
start instead with a PR that implemented a new ufunc, e.g.,
https://github.com/numpy/num
Hi All,
First, it will be great to have more people developing! On avoiding
potential conflicts: I'm not overly worried, in part because of my
experience with astropy (for which NASA supports developers at STScI
and CXC). One possible solution for trying to avoid them would be to
adapt the typical
Hi Matthew,
> it seems to me that we could get 80% of the way to a reassuring blueprint
> with a relatively small amount of effort.
My sentence "adapt the typical academic rule for conflicts of
interests to PRs, that non-trivial ones cannot be merged by someone
who has a conflict of interest wit
Hi Chuck,
Like Sebastian, I wonder a little about what level you are talking
about. Presumably, it is the actual implementation of the ufunc? I.e.,
this is not about the upper logic that decides which `__array_ufunc__`
to call, etc.
If so, I agree with you that it would seem to make most sense to
My two ¢: keep things as they are. There is just too much code that
uses the C definition of bools, 0=False, 1=True. Coupled with casting
every outcome that is unequal to 0 as True, * as AND, + as OR, and -
as XOR makes sense (and -True would indeed be True, but I'm quite
happy to have that one rem
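The identifications mentioned can be checked directly (note that boolean subtraction was later deprecated and removed, so only `*` and `+` are shown; `^` still gives XOR):

```python
import numpy as np

a = np.array([True, True, False, False])
b = np.array([True, False, True, False])

assert ((a * b) == (a & b)).all()                 # * behaves as AND
assert ((a + b) == (a | b)).all()                 # + behaves as OR
assert (np.logical_xor(a, b) == (a ^ b)).all()    # ^ is XOR
```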
About visibility of deprecations: this is *very* tricky - if we make
it more visible, every user is going to see deprecation warnings all
the time, about things they can do nothing about, because they occur
inside other packages. I think in the end the only choice is to have
automated testing that
To add to Allan's message: point (2), the printing of 0-d arrays, is
the one that is the most important in the sense that it rectifies a
really strange situation, where the printing cannot be logically
controlled by the same mechanism that controls >=1-d arrays (see PR).
While point 3 can also be
I'm not sure there is *that* much against a class that basically just
passes through views of itself inside `__matmul__` and `__rmatmul__`
or calls new gufuncs, but I think the lower hurdle is to first get
those gufuncs implemented.
-- Marten
___
NumPy-Di
Hi All,
I doubt I'm really the last one thinking ndarray subclassing is a good
idea, but as that was stated, I feel I should at least pipe in. It
seems to me there is both a perceived problem -- with the two
subclasses that numpy provides -- `matrix` and `MaskedArray` -- both
being problematic in
Hi Ryan,
Indeed, the liberal use of `np.asarray` is one of the main reasons the
helper routines are relatively annoying. Of course, that is not an
argument for using duck-types over subclasses: those wouldn't even
survive `asanyarray` (which many numpy routines now have moved to).
All the best,
M
Hi Peter,
In the context of the discussion here, the fact that Quantity is a
subclass and not a duck-type array makes no difference for scipy code
- in either case, the code would eat the unit (if it would work at
all). My only argument was that sub-classing is not particularly worse
than trying t
Agreed with Eric Wieser here: having an empty array test as `False` is
less than useless, since a non-empty array either returns something
based on its contents or an error. This means that one cannot write
statements like `if array:`. Does this leave any use case? It seems to
me it just shows there i
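The behaviour under discussion, in a runnable sketch (the empty-array case has since been changed in newer NumPy versions, so it is not exercised here):

```python
import numpy as np

# multi-element arrays refuse to be truth-tested:
try:
    bool(np.array([1, 2]))
    raised = False
except ValueError:
    raised = True
assert raised

# single-element arrays work, based on their contents:
assert bool(np.array([1])) is True
assert bool(np.array([0])) is False
```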
Hi Nathaniel,
Thanks for the link. The plans sound great! You'll not be surprised
to hear I'm particularly interested in the units aspect (and, no, I
don't mind at all if we can stop subclassing ndarray...). Is the idea
that there will be a general way to allow a dtype to define how to
convert a
That sounds somewhat puzzling, as units cannot really propagate without
something specifying how they should change! (e.g., the outcome of
sin(a) is possible only for angular units and then depends on that
unit). But in any case, the mailing list is probably not the best place
to discuss this - rather
Hi Nathaniel,
That sounds like it could work very well indeed!
Somewhat related only, for the inner loops I've been thinking whether
it might be possible to automatically create composite ufuncs, where
the inner loops are executed in some prescribed order, so that for
instance one could define
``
Just to second Stephan's comment: do try it! I've moved astropy's
Quantity over to it, and am certainly counting on the basic interface
staying put... -- Marten
One way would be
```
px, py, pz, w, x, y, z = [arr[mask] for arr in (px, py, pz, w, x, y, z)]
```
-- Marten
All the best,
Marten
On Fri, Oct 27, 2017 at 3:24 PM, Peter Creasey
wrote:
>> Date: Thu, 26 Oct 2017 17:27:33 -0400
>> From: Marten van Kerkwijk
>>
>> That sounds somewhat puzzling as units cannot really propagate without
>
Hi Nathan,
Happy to hear that it works well for yt! In astropy's Quantity as
well, it greatly simplifies the code, and has made many operations
about two times faster (which is why I pushed so hard to get
__array_ufunc__ done...). But for now we're stuck with supporting
__array_prepare__ and __ar
From my experience with Quantity, routines that properly ducktype work
well, those that feel the need to accept list and blatantly do
`asarray` do not - even if in many cases they would have worked if
they used `asanyarray`... But there are lots of nice surprises, with,
e.g., `np.fft.fftfreq` jus
Hi Josef,
astropy's Quantity is well developed and would give similar results to
pint; all those results make sense if one wants to have consistent
units. A general library code will actually do the right thing as long
as it just uses normal mathematical operations with ufuncs - and as
long as it
Hi Josef,
Indeed, for some applications one would like to have different units
for different parts of an array. And that means that, at present, the
quantity implementations that we have are no good at storing, say, a
covariance matrix involving parameters with different units, where
thus each ele
My 2¢ here is that all code should feel free to assume certain types of
input, as long as it is documented properly, but there is no reason to
enforce that by, e.g., putting `asarray` everywhere. Then, for some
pieces ducktypes and subclasses will just work like magic, and uses
you might never have
On Thu, Nov 2, 2017 at 5:09 PM, Benjamin Root wrote:
> Duck typing is great and all for classes that implement some or all of the
> ndarray interface but remember what the main reason for asarray() and
> asanyarray(): to automatically promote lists and tuples and other
> "array-likes" to ndarr
I guess my argument boils down to it being better to state that a
function only accepts arrays and happily let it break on, e.g.,
matrix, than use `asarray` to make a matrix into an array even though
it really isn't.
I do like the dtype ideas, but think I'd agree they're likely to come
with their
Yes, I like the idea of, effectively, creating an ABC for ndarray -
with which one can register. -- Marten
Hi Ben,
You just summarized excellently why I'm on a quest to change `asarray`
to `asanyarray` within numpy (or at least add a `subok` keyword for
things like `broadcast_arrays`)! Obviously, this covers only ndarray
subclasses, not duck types, though I guess in principle one could use
the ABC regis
Hi Benjamin,
For the shapes and reshaping, I wrote a ShapedLikeNDArray mixin/ABC
for astropy, which may be a useful starting point as it also provides
a way to implement the methods ndarray uses to reshape and get
elements: see
https://github.com/astropy/astropy/blob/master/astropy/utils/misc.py
Hi Nathaniel,
You're right, I shouldn't be righteous. Though I do think the
advantage of `asanyarray` inside numpy remains that it is easy for
a user to add `asarray` to their input to a numpy function, and not
easy for a happily compatible subclass to avoid an `asarray` inside a
numpy function
In astropy we had a similar discussion about version numbers, and
decided to make 2.0 the LTS that still supports python 2.7 and 3.0 the
first that does not. If we're discussing jumping a major number, we
could do the same for numpy. (Admittedly, it made a bit more sense
with the numbering scheme
Hi Stephan,
A question of perhaps broader scope than what you were asking for, and
more out of curiosity than anything else, but can one mix type
annotations with others? E.g., in astropy, we have a decorator that
looks for units in the annotations (not dissimilar from dtype, I
guess). Could one m
Hi All,
I wondered if the move to python3-only starting with numpy 1.17 would
be a good reason to act on what we all seem to agree: that the matrix
class was a bad idea, with its overriding of multiplication and lack
of support for stacks of matrices. For 1.17, minimum python supposedly
is >=3.5,
Moving to a subpackage may indeed make more sense, though it might not
help as much with getting rid of the hacks inside other parts of numpy
to keep matrix working. In that respect it seems a bit different at
least from weave.
Then again, independently of whether we remove or release a separate
p
Hi Ralf,
Sorry not to have recalled the previous thread.
Your point about not doing things in the python 2->3 move makes sense;
convenience for me is no reason to give users an incentive not to move
to python3 because their matrix-dependent code breaks.
It does sound like, given the use of sparse, a s
On Thu, Nov 30, 2017 at 2:51 PM, Stefan van der Walt
wrote:
> On Thu, Nov 30, 2017, at 10:13, josef.p...@gmail.com wrote:
>
> recarrays are another half-hearted feature in numpy that is mostly
> obsolete with pandas and pandas_like DataFrames in other
> packages.
>
>
> I'm fully on board with fact
Unlike for matrix, it is not so much a problem as an unclear use case
- the main thing they bring to structured dtype arrays is access by
attribute, which is slower than just getting the field by its
key.
Anyway, I don't think anybody is suggesting to remove them - they're
not a problem in th
Hi Nathaniel,
Thanks for the concrete suggestion: see
https://github.com/numpy/numpy/pull/10142
I think this is useful independent of exactly how the eventual move to
a new package would work; next step might be to collect all matrix
tests in the `libmatrix` sub-module.
All the best,
Marten
Hi Chris,
I'm easily convinced - yes, your argument makes sense too.
Fortunately, at some level it doesn't affect what we do now. For 1.15
it should at least have a PendingDeprecationWarning. Since putting
that in place means that all tests involving matrices now fail by
default, it also becomes
Would be great to have structure, and especially a template - ideally,
the latter is enough for someone to create a NEP, i.e., has lots of
in-template documentation.
One thing I'd recommend thinking a little about is to what extent a
NEP is "frozen" after acceptance. In astropy we've seen situatio
The real magic happens when you ducktype, and ensure your function
works both for arrays and scalars on its own. This is more often
possible than you might think! If you really need, e.g., the shape,
you can do `getattr(input, 'shape', ())` and things may well work for
scalars, and also for objects
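A small sketch of the pattern (function name and output format are illustrative):

```python
import numpy as np

def describe(x):
    """Works for arrays and plain scalars alike, via a shape fallback."""
    shape = getattr(x, 'shape', ())
    return 'scalar' if shape == () else f'array{shape}'

assert describe(3.0) == 'scalar'
assert describe(np.float64(3.0)) == 'scalar'   # 0-d: shape is ()
assert describe(np.ones((2, 3))) == 'array(2, 3)'
```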
Definitely a bug! The underlying problem is:
```
In [23]: np.abs(np.int16(-32768))
Out[23]: -32768
```
This is not great, but perhaps consistent with the logic that abs
should return a value of the same dtype.
It could be solved inside `masked_values` by using `np.abs(value,
dtype=xnew.dtype)`
Do
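The overflow and the suggested wider-dtype fix in one runnable sketch (the `errstate` guard is precautionary, since warning behaviour varies across versions):

```python
import numpy as np

# int16 cannot represent +32768, so abs wraps around:
with np.errstate(over='ignore'):
    wrapped = np.abs(np.int16(-32768))
assert wrapped == -32768

# computing in a wider dtype, as suggested for masked_values, avoids it:
fixed = np.abs(np.int16(-32768), dtype=np.int32)
assert fixed == 32768
```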
Doing CI on a different architecture, especially big-endian, would
seem very useful. (Indeed, I'll look into it for my own project, of
reading radio baseband data -- we run it on a BGQ, so more constant
checking would be good to have).
But it may be that we're a bit too big for CI, and that it is
Hi Allan,
I think the consistency argument is perhaps the most important:
views are very powerful and in many ways one *counts* on them
happening, especially in working with large arrays. They really should
be used everywhere it is possible. In this respect, I think one has to
weigh breakage of
On Thu, Jan 25, 2018 at 1:16 PM, Stefan van der Walt
wrote:
> On Mon, 22 Jan 2018 10:11:08 -0500, Marten van Kerkwijk wrote:
>>
>> I think on the consistency argument is perhaps the most important:
>> views are very powerful and in many ways one *counts* on them
>&g
Hi Nathaniel,
Overall, hugely in favour! For detailed comments, it would be good to
have a link to a PR; could you put that up?
A larger comment: you state that you think `np.asanyarray` is a
mistake since `np.matrix` and `np.ma.MaskedArray` would pass through
and that those do not strictly mimi
On Thu, Mar 8, 2018 at 4:52 AM, Gregor Thalhammer
wrote:
>
> Hi,
>
> long time ago I wrote a wrapper to use optimised and parallelized math
> functions from Intels vector math library
> geggo/uvml: Provide vectorized math function (MKL) for numpy
>
> I found it useful to inject (some of) the fa
I think part of the problem is that ufuncs actually have two parts: a
generic interface, which turns all its arguments into ndarray (or
calls `__array_ufunc__`) and an ndarray-specific implementation of the
given function (partially, just the iterator, partially the inner
loop). The latter could lo
Hi Chuck,
Astropy tests indeed all pass again against master, without the
work-arounds for 1.14.1.
Thanks, of course also to Allan for the fix,
Marten
Hi Nathaniel,
astropy is an example of a project that does essentially all
discussion of its "Astropy Proposals for Enhancement" on github. I
actually like the numpy approach of sending anything to the mailing
list that deserves community input (which includes NEP by their very
nature). I don't th
We may be getting a bit distracted by the naming -- though I'll throw
out `asarraymimic` as another non-programmer-lingo option that doesn't
reuse `arraylike` and might describe what the duck array is attempting
to do more closely.
But more to the point: I think in essence, we're trying to create
I think we don't have to make it sound like there are *that* many types
of compatibility: really there is just array organisation
(indexing/reshaping) and array arithmetic. These correspond roughly to
ShapedLikeNDArray in astropy and NDArrayOperatorMixin in numpy (missing so
far is concatenation)
Hi Nathaniel,
I looked through the revised text at https://github.com/numpy/numpy/pull/10704
and think it covers things well; any improvements on the organisation
I can think of would seem to start with doing the merge anyway (e.g.,
I quite like Eric Wieser's suggested base ndarray class; the
addi
I think this indeed makes most sense for scipy. If possible, write it
as a `gufunc`, so duck arrays can override with `__array_ufunc__` if
necessary. -- Marten
On Wed, Mar 14, 2018 at 9:44 AM, Hameer Abbasi
wrote:
> If possible, write it as a `gufunc`, so duck arrays can override with
> `__array_ufunc__` if
>
> necessary. -- Marten
>
> Softmax is a very simple combination of elementary `ufunc`s with two inputs,
> the weight vector `w` and the data `x`. Wr
Apparently, where and how to discuss enhancement proposals was
recently a topic on the python mailing list as well -- see the
write-up at LWN:
https://lwn.net/SubscriberLink/749200/4343911ee71e35cf/
The conclusion seems to be that one should switch to mailman3...
-- Marten
Yes, a tuple of types would make more sense, given `isinstance` --
string abbreviations for those could be there for convenience.
-- Marten
On Sat, Mar 17, 2018 at 8:25 PM, Eric Wieser
wrote:
> I would have thought that a simple tuple of types would be more appropriate
> than using integer flags
For a lot more discussion, and a possible solution, see
https://github.com/numpy/numpy/pull/8528
Hi Matti,
This sounds great. For completeness, you omitted the vector-vector
case for matmul '(k),(k)->()' - but the suggested new signature for
`matmul` would cover that case as well, so not a problem.
All the best,
Marten
Hi All,
When introducing the ``axes`` argument for generalized ufuncs, the
plan was to eventually also add ``axis`` and ``keepdims`` for
reduction-like gufuncs. I have now attempted to do so in
https://github.com/numpy/numpy/pull/11018
It is not completely feature-compatible with reductions in th
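The existing ``axes`` argument works like this; presumably ``axis`` and ``keepdims`` would follow the same pattern for reduction-like gufuncs (example uses `matmul`, which accepts ``axes`` as a gufunc since numpy 1.16):

```python
import numpy as np

a = np.ones((3, 2, 4))   # stack of three 2x4 matrices
b = np.ones((3, 4, 5))   # stack of three 4x5 matrices

# default: core dimensions are the last two axes
c = np.matmul(a, b)
assert c.shape == (3, 2, 5)

# same computation, with the matrix axes first and the stack axis last;
# axes lists the core axes of each operand and of the output
d = np.matmul(a.transpose(1, 2, 0), b.transpose(1, 2, 0),
              axes=[(0, 1), (0, 1), (0, 1)])
assert d.shape == (2, 5, 3)
```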
I thought a bit further about this proposal: a disadvantage for matmul
specifically is that it does not solve the need for `matvec`,
`vecmat`, and `vecvec` gufuncs. That said, it might make sense to
implement those as "pseudo-ufuncs" that just add a 1 in the right
place and call `matmul`...
-- Mart
Just for completeness: there are *four* gufuncs (matmat, matvec,
vecmat, and vecvec).
I remain torn about the best way forward. The main argument against
using them inside matmul is that in order to decide which of the four
to use, matmul has to have access to the `shape` of the arguments.
This me
On Wed, May 2, 2018 at 6:24 AM, Hameer Abbasi wrote:
> There is always the option of any downstream object overriding matmul, and I
> fail to see which objects won't have a shape. - Hameer
I think we should not decide too readily on what is "reasonable" to
expect for a ufunc input. For instance,
Hi Matti,
In the original implementation of what was then __numpy_ufunc__, we
had overrides for both `np.dot` and `np.matmul` that worked exactly as
your option (2), but we decided in the end that those really are not
true ufuncs and we should not include ufunc mimics in the mix as
someone using `
It is actually a bit more subtle (annoyingly so): the reason you get a
float64 is that you pass in a scalar, and for scalars, the dtype of
`pi` indeed "wins", as there is little reason to possibly lose
precision. If you pass in an array instead, then you do get
`float32`:
```
np.sinc(np.array([1.
```
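The array case can be checked directly (scalar promotion rules have since changed with NEP 50 in numpy 2.0, so only the array behaviour is shown):

```python
import numpy as np

x32 = np.array([0.5, 1.0], dtype=np.float32)
# the float64 constant pi inside sinc does not upcast a float32 array:
assert np.sinc(x32).dtype == np.float32
```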
I'm greatly in favour, especially if the same can be done for
`zeros_like` and `empty_like`, but note that a tricky part is that
ufuncs do not deal very graciously with structured (void) and string
dtypes. -- Marten
Just for completeness: this is *not* an issue for ndarray subclasses,
but only for people attempting to write duck arrays. One might want
to start by mimicking `empty_like` - not too different from
`np.positive(a, where=False)`. I'll note that that is 50 times slower
for small arrays since it actu
Hi All,
I agree with comments above that deprecating/removing MaskedArray is
premature; we certainly depend on it in astropy (which is indeed what
got me started to contribute to numpy -- it was quite buggy!).
I also think that, unlike Matrix, it is far from a neglected part of
numpy. Eric Wiese
Hi All,
Following on a PR combining the ability to provide fixed and flexible
dimensions [1] (useful for, e.g., 3-vector input with a signature like
`(3),(3)->(3)`, and for `matmul`, resp.; based on earlier PRs by Jaime
[2] and Matt (Picus) [3]), I've now made a PR with a further
enhancement, whic
Hi Stephan, Matt,
My `n|1` was indeed meant to be read as `n or 1`, but with the
(implicit) understanding that any array can have as many ones
pre-pended as needed.
The signature `(n?),(n?)->()` is now set aside for flexible
dimensions: this would allow the constant, but not the trailing shape
of
Hi Nathaniel,
I think the case for frozen dimensions is much more solid than just
`cross1d` - there are many operations that work on size-3 vectors.
Indeed, as I noted in the PR, I have just been wrapping a
Standards-of-Astronomy library in gufuncs, and many of its functions
require size-3 vectors
> Incidentally, once we make reduce/accumuate/... into "associated gufuncs", I
> propose completely removing the "method" argument of __array_ufunc__, since
> it is no longer needed and adds a lot
> of complexity which implementors of an __array_ufunc__ are forced to
> account for.
For Quantity a
Hi Sebastian,
This is getting a bit far off-topic (which is whether it is a good
idea to allow the ability to set frozen dimensions and broadcasting),
but on `all_equal`, I definitely see the point that a method might be
better, but that needs work: to expand the normal ufunc mechanism to
allow th
Hi Allan,
Seeing it written out like that, I quite like the multiple dispatch
signature: perhaps verbose, but clear.
It does mean a different way of changing the ufunc structure, but I
actually think it has the advantage of being possible without extending the
structure (though that might still b