Hi Abel,
As long as your x, y, z fields are next to each other, you can transform
from your structured array to an unstructured one via a view, which has very
little cost. Though you do need to be a bit careful with offsets, etc., if
there are also other fields in the structured dtype.
Example, with some extra fields in the dtype:
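A minimal sketch of the view trick (hypothetical field names; `id` plays the
role of the extra field):
```python
import numpy as np
from numpy.lib import recfunctions as rfn

# x, y, z are contiguous float64 fields; 'id' is the extra field.
a = np.zeros(5, dtype=[('x', 'f8'), ('y', 'f8'), ('z', 'f8'), ('id', 'i8')])

# With only x, y, z in the dtype, a plain view suffices:
#   a.view('f8').reshape(len(a), 3)
# With the extra field, select the coordinate fields first;
# structured_to_unstructured deals with the offsets and avoids a
# copy where the field layout allows it.
coords = rfn.structured_to_unstructured(a[['x', 'y', 'z']])
print(coords.shape)   # (5, 3)
```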
As one of the ones who argued (perhaps too) vociferously previously that .T
transposing all axes was a mistake and that it should just be the last
two, yes, let's deprecate for all but 2-d arrays (i.e., also warn for
0-d and 1-d). Ideally, eventually (numpy 3.0?) it can replace `.mT`, but
also fine
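For reference, the difference at stake (`.mT` exists as of NumPy 2.0):
```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
print(a.T.shape)    # (4, 3, 2): .T reverses all axes
print(a.mT.shape)   # (2, 4, 3): .mT swaps only the last two
```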
Dear Shashang,
Like the others, I fear that this super-broadcasting would be confusing.
In the end, if you have a.shape = (2, 3) and b.shape = (4, 3), it is
unclear why element 0 of a should broadcast against elements 0 and 1 of b rather
than 0 and 2. And numpy should refuse the temptation to guess!
I th
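Concretely, those shapes do not broadcast, and any pairing would have to be
spelled out explicitly -- a sketch:
```python
import numpy as np

a = np.ones((2, 3))
b = np.ones((4, 3))
# a + b raises ValueError: shapes (2, 3) and (4, 3) do not broadcast.
# An explicit pairing of every row of a with every row of b:
print((a[:, np.newaxis, :] + b).shape)   # (2, 4, 3)
```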
I forgot to add links to previous discussions.
Github issue: https://github.com/numpy/numpy/issues/834
2011 thread:
https://mail.python.org/archives/list/numpy-discussion@python.org/thread/DX5KVE5O36MQHIEBOFK6YRH2JPRMFPVB/#I5MKK4ZPX3FA6K6H5457F4WOHYSO67NN
2016 thread #1:
https://mail.python.org/
and that
> the `[1, 3, 5]` correspond to start indices and `[2, -1, 0]`
> correspond to stop indices. Perhaps we should require kwarg use
> instead of positional to make the code more readable.
> Matti
>
> On Sun, Nov 24, 2024 at 3:13 AM Marten van Kerkwijk
> wrote:
>>
Hi All,
This discussion about updating reduceat went silent, but recently I came
back to my PR to allow `indices` to be a 2-dimensional array of start
and stop values (or a tuple of separate start and stop arrays). I
thought a bit more about it and think it is the easiest way to extend
the presen
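To recap the semantics being extended (a sketch; the exact API in the PR may
differ):
```python
import numpy as np

a = np.arange(8)
# Today: each index starts a slice that runs to the next index (or the end).
print(np.add.reduceat(a, [1, 3, 5]))                        # [ 3  7 18]
# The same sums with explicit (start, stop) pairs:
print([a[i:j].sum() for i, j in [(1, 3), (3, 5), (5, 8)]])  # [3, 7, 18]
```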
Hi Dan, others,
Great news that the sparse array implementation is getting there!
The continued existence of np.matrix has in large part been because of
sparse matrices, so in some sense the decision depends also on what
happens to those.
But generally I'm in favour of just deprecating and removi
Hi All,
When the repr of an array is shown, currently the dtype and shape are
explicitly listed if they cannot be directly inferred from the values that
are shown: the dtype if it is not float64 or int64, and the shape if the
size of the array is zero but the shape is not simply (0,).
For instance,
```
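# illustrative examples of the current behaviour:
>>> np.array([1.0, 2.0])             # float64, nonzero size: nothing extra
array([1., 2.])
>>> np.array([1, 2], dtype=np.int8)  # non-default dtype is listed
array([1, 2], dtype=int8)
>>> np.zeros((2, 0))                 # zero size, shape not (0,): both listed
array([], shape=(2, 0), dtype=float64)
```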
Probably good to think first what the inverse function, np.frexp, should
do for complex numbers. I guess the choices are:
1. Remove the largest exponent of real/imaginary, and give a complex
mantissa, in which only one of the real or imaginary components is
guaranteed to have its absolute va
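For reference, the real-valued behaviour that any complex version would need
to stay consistent with:
```python
import numpy as np

m, e = np.frexp(np.array([0.75, 8.0, 100.0]))
print(m)               # [0.75    0.5     0.78125]: mantissa in [0.5, 1)
print(e)               # [0 4 7]
print(np.ldexp(m, e))  # [  0.75    8.   100.  ]: m * 2**e round-trips
```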
Hi Olivier,
Thanks for bringing this up here! As stated by Chuck in the issue
leading to the PR,
https://github.com/numpy/numpy/issues/26401
there are real risks with using natural coefficients for polynomials:
we really want to move people away from situations in which large
cancellation error
I think a NEP is a good idea. It would also seem to make sense to
consider how the dtype itself can hold/calculate this type of
information, since that will be the only way a generic ``info()``
function can get information for a user-defined dtype. Indeed, taking
that further, might a method or p
Hi All,
I agree with Dan that the actual contributions to the documentation are
of little value: it is not easy to write good documentation, with
examples that show not just the mechanics but the purpose of the
function, i.e., go well beyond just showing some random inputs and
outputs. And poorl
> Also, can’t get __array_wrap__ to work. The arguments it receives after
> __iadd__ are all
> post-operation. Decided not to do it this way this time so not to hardcode
> such functionality
> into the class, but if there is a way to robustly achieve this it would be
> good to know.
It is non-t
> One more thing to mention on this topic.
>
> From a certain size dot product becomes faster than sum (due to
> parallelisation I guess?).
>
> E.g.
> def dotsum(arr):
>     a = arr.reshape(1000, 100)
>     return a.dot(np.ones(100)).sum()
>
> a = np.ones(100000)
>
> In [45]: %timeit np.add.reduce
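A self-contained version of that comparison (sizes assumed from the snippet
above):
```python
import numpy as np

arr = np.ones(100_000)

def dotsum(arr):
    # Row sums via a BLAS matrix-vector product, then a short final sum.
    a = arr.reshape(1000, 100)
    return a.dot(np.ones(100)).sum()

assert dotsum(arr) == np.add.reduce(arr) == 100_000.0
# In IPython, compare:  %timeit np.add.reduce(arr)  vs  %timeit dotsum(arr)
```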
> From my experience, calling methods is generally faster than
> functions. I figure it is due to having less overhead figuring out the
> input. Maybe it is not significant for large data, but it does make a
> difference even when working for medium sized arrays - say float size
> 5000.
>
> %timei
> What were your conclusions after experimenting with chained ufuncs?
>
> If the speed is comparable to numexpr, wouldn’t it be `nicer` to have
> non-string input format?
>
> It would feel a bit less like a black-box.
I haven't gotten further with it yet; it is just some toying around I've
been
Hi Oyibo,
> I'm proposing the introduction of a `pipe` method for NumPy arrays to enhance
> their usability and expressiveness.
I think it is an interesting idea, but agree with Robert that it is
unlikely to fly on its own. Part of the logic of even frowning on
methods like .mean() and .sum() i
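For reference, the proposal amounts to the pandas-style `pipe`; a sketch of
the idea (hypothetical, not NumPy API):
```python
import numpy as np

def pipe(arr, func, /, *args, **kwargs):
    # What `arr.pipe(func, ...)` would do: call func with arr first.
    return func(arr, *args, **kwargs)

print(pipe(np.arange(4.0), np.clip, 1.0, 2.0))   # [1. 1. 2. 2.]
```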
> For my own work, I required the intersect1d function to work on multiple
> arrays while returning the indices (using `return_indices=True`).
> Consequently I changed the function in numpy and now I am seeking
> feedback from the community.
>
> This is the corresponding PR: https://github.com/n
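For context, the two-array version can be chained today, though the indices
are then lost -- presumably what the PR addresses:
```python
import functools
import numpy as np

arrays = [np.array([1, 2, 3, 4]), np.array([2, 4, 6]), np.array([4, 2])]
print(functools.reduce(np.intersect1d, arrays))   # [2 4]
```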
> I tend to agree with not using the complex conjugate for vecmat, but would
> prefer having
> separate functions for that that make it explicit in the name. I also note
> that mathematicians
> use sesquilinear forms, which have the vector conjugate on the other side, so
> there are
> different
Hi Alan,
The problem with .dot is not that it is not possible, but more that it
is not obvious exactly what will happen given the overloading of
multiple use cases; indeed, this is why `np.matmul` was created. For
the stacks of vectors case in particular, it is surprising that the
vector dimensio
> Could you please offer some code or math notation to help communicate this?
> I am forced to guess at the need.
>
> The words "matrix" and "vector" are ambiguous.
> After all, matrices (of given shape) are a type of vector (i.e., can be added
> and scaled.)
> So if by "matrix" you mean "2d array
> FWIW, +1 for matvec & vecmat to complement matmat (erm, matmul). Having a
> binop where one argument is a matrix and the other is a
> stack/batch of vectors is indeed awkward otherwise, and a dedicated function
> to clearly distinguish "two matrices" from "a matrix and a
> batch of vectors" sou
> Why do these belong in NumPy? What is the broad field of application of these
> functions? And,
> does a more general concept underpin them?
Multiplication of a matrix with a vector is about as common as matrix
with matrix or vector with vector, and not currently easy to do for
stacks of vector
> I can understand the desire to generalise the idea of matrix
> multiplication for when the arrays are not both 2-D but taking the
> complex conjugate makes absolutely no sense in the context of matrix
> multiplication.
>
> You note above that "vecmat is defined as x†A" but my interpretation
> of
>> I also note that for complex numbers, `vecmat` is defined as `x†A`,
>> i.e., the complex conjugate of the vector is taken. This seems to be the
>> standard and is what we used for `vecdot` too (`x†x`). However, it is
>> *not* what `matmul` does for vector-matrix or indeed vector-vector
>> produc
> For dot product I can convince myself this is a math definition thing and
> accept the
> conjugation. But for "vecmat" why the complex conjugate of the vector? Are we
> assuming that
> 1D things are always columns? I am also a bit lost on the difference of dot,
> vdot and vecdot.
>
> Also if _
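A concrete illustration of the conjugation difference under discussion
(`np.vecdot` exists as of NumPy 2.0):
```python
import numpy as np

x = np.array([1j, 1.0])
print(np.vecdot(x, x))   # (2+0j): conjugates the first argument, i.e. x† x
print(np.matmul(x, x))   # 0j:     no conjugation, plain x·x
```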
Hi All,
I have a PR [1] that adds `np.matvec` and `np.vecmat` gufuncs for
matrix-vector and vector-matrix calculations, to add to plain
matrix-matrix multiplication with `np.matmul` and the inner vector
product with `np.vecdot`. They call BLAS where possible for speed.
I'd like to hear whether th
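For concreteness, the four products side by side (shapes are illustrative;
the gufuncs were released in NumPy 2.2):
```python
import numpy as np

A = np.ones((2, 3, 4))   # a stack of two 3x4 matrices
x = np.ones((2, 4))      # a stack of two 4-vectors
y = np.ones((2, 3))      # a stack of two 3-vectors

print(np.matmul(A, A.mT).shape)   # (2, 3, 3): matrix-matrix
print(np.matvec(A, x).shape)      # (2, 3):    matrix-vector, A @ x
print(np.vecmat(y, A).shape)      # (2, 4):    vector-matrix, y† A
print(np.vecdot(x, x).shape)      # (2,):      vector-vector, x† x
```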
Hi All,
Thanks for the comments on complex sign - it seems there is good support
for it.
On copysign, currently it is not supported for complex values at all. I
think given the responses so far, it looks like we should just keep it
like that; although my extension was fairly logical, I cannot sa
Hi Sebastian,
> That looks nice, I don't have a clear feeling on the order of items, if
> we think of it in terms of `(start, stop)` there was also the idea
> voiced to simply add another name in which case you would allow start
> and stop to be separate arrays.
Yes, one could add another method.
Hi Martin,
I agree it is a long-standing issue, and I was reminded of it by your
comment. I have a draft PR at https://github.com/numpy/numpy/pull/25476
that does not change the old behaviour, but allows you to pass in a
start-stop array which behaves more sensibly (exact API TBD).
Please have a
Hi Ralf,
I realize you feel strongly that this whole thread is rehashing history,
but I think it is worth pointing out that many seem to consider that the
criterion for allowing backward incompatible changes, i.e., that "existing
code is buggy or is consistently confusing many users", is actually
> The main motivation for the @ PEP was actually to be able to get rid of
> objects like np.matrix and scipy.sparse matrices that redefine the meaning
> of the * operator. Quote: "This PEP proposes the minimum effective change
> to Python syntax that will allow us to drain this swamp [meaning np.ma
Hi Ralf,
On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers wrote:
>
>
> On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>>
>> For the names, my suggestion of lower-casing the M in the initial one,
>> i.e., `.mT`
Hi Kirill, others,
Indeed, it is becoming long! That said, while initially I was quite charmed
by Eric's suggestion of deprecating and then changing `.T`, I think the
well-argued opposition to it has changed my opinion. Perhaps most
persuasive to me was Matthew's point just now that code (or a cod
Hi Juan,
On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias
wrote:
> On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
>
> Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape
> of `(n, 1)` the right behaviour? I.e., it should still change from
Hi All,
The examples with different notation brought back the memory of another
solution: define `m.ᵀ` and `m.ᴴ`. This is possible, since Python 3 allows a
wide range of unicode characters in names; nicely readable, but admittedly a
bit annoying to enter (in emacs, set-input-method to TeX and then ^T, ^H).
More seriously, s
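A sketch on a hypothetical subclass; note the catch that Python
NFKC-normalizes identifiers, so `ᵀ` ends up being the same attribute as
plain `T`:
```python
import numpy as np

class Array(np.ndarray):
    @property
    def ᵀ(self):                        # careful: identifiers are NFKC-
        return self.swapaxes(-2, -1)    # normalized, so this also defines .T!

m = np.arange(6.).reshape(2, 3).view(Array)
print(m.ᵀ.shape)   # (3, 2)
print(m.T.shape)   # (3, 2): same property, via normalization
```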
On Mon, Jun 24, 2019 at 7:21 PM Stephan Hoyer wrote:
> On Mon, Jun 24, 2019 at 3:56 PM Allan Haldane
> wrote:
>
>> I'm not at all set on that behavior and we can do something else. For
>> now, I chose this way since it seemed to best match the "IGNORE" mask
>> behavior.
>>
>> The behavior you de
Hi Allan,
> > The alternative solution in my model would be to replace `np.dot` with a
> > masked-specific implementation of what `np.dot` is supposed to stand for
> > (in your simple example, `np.add.reduce(np.multiply(m, m))` - more
> > generally, add relevant `outer` and `axes`). This would be si
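i.e., relying on the identity:
```python
import numpy as np

m = np.arange(3.0)
assert np.dot(m, m) == np.add.reduce(np.multiply(m, m))   # both 5.0
```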
Hi Eric,
The easiest definitely is for the mask to just propagate, which means that
even if just one point is masked, all points in the FFT will be masked.
On the direct point I made, I think it is correct that, since one can think
of the Fourier transform as a sine/cosine fit, there is a solution
Hi Allan,
Thanks for bringing up the noclobber explicitly (and Stephan for asking for
clarification; I was similarly confused).
It does clarify the difference in mental picture. In mine, the operation
would indeed be guaranteed to be done on the underlying data, without copy
and without `.filled(
Hi Stephan,
Yes, the complex conjugate dtype would make things a lot faster, but I
don't quite see why we would wait for that with introducing the `.H`
property.
I do agree that `.H` is the correct name, giving most immediate clarity
(i.e., people who know what conjugate transpose is, will recogn
Dear Hameer, Ilhan,
Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape
of `(n, 1)` the right behaviour? I.e., it should still change from what it
is now - which is to leave the shape at `(n,)`.
Your argument about `dot` and `matmul` having similar behaviour certainly
adds w
Hi Eric,
On your other points:
> I remain unconvinced that Mask classes should behave differently on
> different ufuncs. I don’t think np.minimum(ignore_na, b) is any different
> to np.add(ignore_na, b) - either both should produce b, or both should
> produce ignore_na. I would lean towards produxi
Hi Stephan,
Eric perhaps explained my concept better than I could!
I do agree that, as written, your example would be clearer, but Allan's
code and the current MaskedArray code do not have that much resemblance to
it, and mine even less, as they deal with operators as whole groups.
For mine, it ma
I had not looked at any implementation (only remembered the nice idea of
"importing from the future"), and looking at the links Eric shared, it
seems that the only way this would work is, effectively, pre-compilation
doing a `.replace('.T', '._T_from_the_future')`, where you'd be
hoping that there
Hi All,
I'd love to have `.T` mean the right thing, and am happy that people are
suggesting it after I told Steward this was likely off-limits (which, in
fairness, did seem to be the conclusion when we visited this before...).
But is there something we can do to make it possible to use it already
Hi Stephan,
In slightly changed order:
> Let me try to make the API issue more concrete. Suppose we have a
> MaskedArray with values [1, 2, NA]. How do I get:
> 1. The sum ignoring masked values, i.e., 3.
> 2. The sum that is tainted by masked values, i.e., NA.
>
> Here's how this works with existi
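For comparison, with the existing np.ma both answers are reachable, if
clumsily:
```python
import numpy as np

m = np.ma.MaskedArray([1.0, 2.0, 3.0], mask=[False, False, True])
print(m.sum())                  # 3.0: reductions skip masked elements
print(m.filled(np.nan).sum())   # nan: the masked element taints the result
```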
Hi Tom,
>> I think a sensible alternative mental model for the MaskedArray class is
>> that all it does is forward any operations to the data it holds and
>> separately propagate a mask,
>>
>
> I'm generally on-board with that mental picture, and agree that the
> use-case described by Ben (different
> I think a sensible alternative mental model for the MaskedArray class is
>> that all it does is forward any operations to the data it holds and
>> separately propagate a mask, ORing elements together for binary operations,
>> etc., and explicitly skipping masked elements in reductions (ideally us
lots of copies, and forces copies to
> be explicit in user code. 2. disallowing direct modification of the mask
> lowers the "API surface area" making people's MaskedArray code less
> buggy and easier to read: Exposure of nonsense values by "unmasking" is
> one less p
purpose?
Anyway, it would seem we are easily at the point where I should comment on
your repository rather than on the mailing list!
All the best,
Marten
On Wed, Jun 19, 2019 at 5:45 PM Allan Haldane
wrote:
> On 6/18/19 2:04 PM, Marten van Kerkwijk wrote:
> >
> >
> > On Tue, Jun
On Tue, Jun 18, 2019 at 12:55 PM Allan Haldane
wrote:
> > This may be too much to ask from the initializer, but, if so, it still
> > seems most useful if it is made as easy as possible to do, say, `class
> > MaskedQuantity(Masked, Quantity): `.
>
> Currently MaskedArray does not accept ducktypes
Hi Allan,
Thanks for the message and link! In astropy, we've been struggling with
masking a lot, and one of the main conclusions I have reached is that
ideally one has a more abstract `Masked` class that can take any type of
data (including `ndarray`, of course), and behaves like that data as much
rward.
All the best,
Marten
p.s. And, yes, `__array_function__` is quite wonderful!
On Fri, Jun 14, 2019 at 3:46 AM Ralf Gommers wrote:
>
>
> On Fri, Jun 14, 2019 at 2:21 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> Hi Ralf,
>>
>>
Hi Ralf,
Thanks both for the reply and sharing the link. I recognize much (from both
sides!).
>
> More importantly, I think we should not even consider *discussing*
> removing` __array_function__` from np.isposinf (or any similar one off
> situation) before there's a new bigger picture design.
Hi Ralf,
>> I guess the one immediate question is whether `np.sum` and the like
>> should be overridden by `__array_function__` at all, given that what should
>> be the future recommended override already works.
>>
>
> I'm not sure I understand the rationale for this. Design consistency
> matters
On Thu, Jun 13, 2019 at 12:46 PM Stephan Hoyer wrote:
>
>
>> But how about `np.sum` itself? Right now, it is overridden by
>> __array_function__ but classes without __array_function__ support can also
>> override it through the method lookup and through __array_ufunc__.
>>
>> Would/should there
Hi Ralf, others,
>> Anyway, I guess this is still a good example to consider for how we
>> should go about getting to a new implementation, ideally with just a
>> single-way to override?
>>
>> Indeed, how do we actually envisage deprecating the use of
>> `__array_function__` for a given part of
AM Stephan Hoyer wrote:
>
>> On Wed, Jun 12, 2019 at 5:55 PM Marten van Kerkwijk <
>> m.h.vankerkw...@gmail.com> wrote:
>>
>>> Hi Ralf,
>>>
>>> You're right, the problem is with the added keyword argument (which
>>> would appear also
The attrs link you sent definitely sounded like it would translate to numpy
nearly trivially. I'm very much in favour!
-- Marten
On Wed, Jun 12, 2019 at 4:32 PM Ralf Gommers wrote:
>
>
> On Wed, Jun 12, 2019 at 12:02 AM Stefan van der Walt
> wrote:
>
>> On Tue, 11 Jun 2019 15:10:16 -0400, Marten van Kerkwijk wrote:
>> > In a way, I brought it up mostly as a concrete example of an internal
>> >
Overall, in favour of splitting the large files, but I don't like that the
notes stop being under version control (e.g., a follow-up PR slightly
changes things, how does the note get edited/reverted?).
Has there been any discussion of having, e.g., a directory
`docs/1.17.0-notes/`, and everyone s
Hi Sebastian,
Thanks for the overview! In the value-based casting, what perhaps surprises
me most is that it is done within a kind; it would seem an improvement to
check whether a given integer scalar is exactly representable in a given
float (your example of 1024 in `float16`). If we switch to th
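The float16 example referred to above, concretely:
```python
import numpy as np

print(np.float16(1024) == 1024)   # True:  exactly representable in float16
print(np.float16(2049) == 2049)   # False: rounds to 2048 (11-bit significand)
```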
> In a way, I brought it up mostly as a concrete example of an internal
> implementation which we cannot change to an objectively cleaner one because
> other packages rely on an out-of-date numpy API.
>
> Should have added: rely on an out-of-date numpy API where we have multiple
ways for packages t
> > On Mon, Jun 10, 2019 at 7:47 PM Marten van Kerkwijk <
> > m.h.vankerkw...@gmail.com> wrote:
> > > Hi All,
> > >
> > > In https://github.com/numpy/numpy/pull/12801, Tyler has been trying
> > > to use the new `where` argument for reductions to implement
Hi All,
In https://github.com/numpy/numpy/pull/12801, Tyler has been trying to use
the new `where` argument for reductions to implement `nansum`, etc., using
simplifications that boil down to `np.sum(..., where=~isnan(...))`.
A problem that occurs is that `np.sum` will use a `.sum` method if that
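i.e., the simplification in question:
```python
import numpy as np

a = np.array([1.0, np.nan, 2.0])
print(np.sum(a, where=~np.isnan(a)))   # 3.0 -- the essence of nansum
print(np.nansum(a))                    # 3.0
```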
On Fri, Jun 7, 2019 at 1:19 AM Ralf Gommers wrote:
>
>
> On Fri, Jun 7, 2019 at 1:37 AM Nathaniel Smith wrote:
>
>>
>> My intuition is that what users actually want is for *native Python
>> types* to be treated as having 'underspecified' dtypes, e.g. int is
>> happy to coerce to int8/int32/int64
Hi Sebastian,
Tricky! It seems a balance between unexpected memory blow-up and unexpected
wrapping (the latter mostly for integers).
Some comments specifically on your message first, then some more general
related ones.
1. I'm very much against letting `a + b` do anything else than `np.add(a,
b)`
Hi Stefan,
On Mon, Jun 3, 2019 at 4:26 PM Stefan van der Walt
wrote:
> Hi Marten,
>
> On Sat, 01 Jun 2019 12:11:38 -0400, Marten van Kerkwijk wrote:
> > Third, we could actually implement the logical groupings identified in
> > the code base (and describe them!).
On Sun, Jun 2, 2019 at 2:21 PM Eric Wieser
wrote:
> Some of your categories here sound like they might be suitable for ABCs
> that provide mixin methods, which is something I think Hameer suggested in
> the past. Perhaps it's worth re-exploring that avenue.
>
> Eric
>
>
Indeed, and of course for
> Our API is huge. A simple count:
> main namespace: 600
> fft: 30
> linalg: 30
> random: 60
> ndarray: 70
> lib: 20
> lib.npyio: 35
> etc. (many more ill-thought out but not clearly private submodules)
>
>
I would perhaps start with ndarray itself. Quite a lot seems superfluous
Shapes:
- need: sh
> > In this respect, I think an excellent place to start might be
>> > something you are planning already anyway: update the user
>> > documentation
>> >
>>
>> I would include tests as well. Rather than hammer out a full standard
>> based on extensive discussions and negotiations, I wou
Hi Ralf,
Despite sharing Nathaniel's doubts about the ease of defining the numpy API
and the likelihood of people actually sticking to a limited subset of what
numpy exposes, I quite like the actual things you propose to do!
But my liking it is for reasons that are different from your stated ones
Hi Sebastian, Stéfan,
Thanks for the very good summaries!
An additional item worth mentioning is that by using
`__skip_array_function__` everywhere inside, one minimizes the performance
penalty of checking for `__array_function__`. It would obviously be worth
trying to do that, but ideally in a w
I agree that we should not have two functions.
I also am rather unsure whether a ufunc is a good idea. Earlier, while
discussing other possible additions, like `erf`, the conclusion seemed to
be that in numpy we should just cover whatever is in the C standard. This
suggests `sinc` should not be a u
On a more general note, if we change to a ufunc, it will get us stuck with
sinc being the normalized version, where the units of the input have to be
in the half-cycles preferred by signal-processing people rather than the
radians preferred by mathematicians.
In this respect, note that there is an
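Concretely, the normalization at stake:
```python
import numpy as np

x = 0.5
print(np.sinc(x))                       # sin(pi x)/(pi x), half-cycle units
print(np.sin(np.pi * x) / (np.pi * x))  # same, ~0.6366
```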
>>> Otherwise, there should
>>> be no change except additional features of ufuncs and the move to a C
>>> implementation.
>>>
>>
> I see this is one of the functions that uses asanyarray, so what about
> impact on subclass behavior?
>
So, subclasses are passed on, as they are in ufuncs. In general,
> If we want to keep an "off" switch we might want to add some sort of API
> for exposing whether NumPy is using __array_function__ or not. Maybe
> numpy.__experimental_array_function_enabled__ = True, so you can just test
> `hasattr(numpy, '__experimental_array_function_enabled__')`? This is
> ass
this before setting the API in stone.
>>
>> At scikit-image we place a very strong emphasis on code simplicity and
>> readability, so I also share Marten's concerns about code getting too
>> complex. My impression reading the NEP was "whoa, this is hard, I'm glad
Hi All,
For 1.17, there has been a big effort, especially by Stephan, to make
__array_function__ sufficiently usable that it can be exposed. I think this
is great, and still like the idea very much, but its impact on the numpy
code base has gotten so big in the most recent PR (gh-13585) that I won
On Sun, Apr 28, 2019 at 9:20 PM Stephan Hoyer wrote:
> On Sun, Apr 28, 2019 at 8:42 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> In summary, I think the guarantees should be as follows:
>> 1. If you call np.function and
>> - do not define _
> On Sun, Apr 28, 2019 at 1:38 PM Marten van Kerkwijk
> wrote:
> >
> > Hi Nathaniel,
> >
> > I'm a bit confused why `np.concatenate([1, 2], [3, 4])` would be a
> problem. In the current model, all (numpy) functions fall back to
> `ndarray.__array_function__`, which does kno
Hi Nathaniel,
I'm a bit confused why `np.concatenate([1, 2], [3, 4])` would be a problem.
In the current model, all (numpy) functions fall back to
`ndarray.__array_function__`, which does know what to do with anything that
doesn't have `__array_function__`: it just coerces it to array. Am I
missin
Hi Ralf,
Agreed that the coercion right now is *not* generic, with some doing
`asarray`, others `asanyarray` and yet others nothing. There are multiple
possible solutions, with one indeed being that for each function one moves
the coercion bits out to an associated intermediate function. In princi
Hi Ralf,
Thanks for the comments and summary slides. I think you're
over-interpreting my wish to break people's code! I certainly believe - and
think we all agree - that we remain as committed as ever to ensure that
```
np.function(inputs)
```
continues to work just as before. My main comment is t
Hi All,
I agree with Ralf that there are two discussions going on, but also with
Hameer that they are related, in that part of the very purpose of
__array_function__ was to gain freedom to experiment with implementations.
And in particular the freedom to *assume* that inputs are arrays so that we
On Thu, Apr 25, 2019 at 6:04 PM Stephan Hoyer wrote:
> On Thu, Apr 25, 2019 at 12:46 PM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>
> It would be nice, though, if we could end up with also option 4 being
>> available, if only because code that just
It seems we are adding to the wishlist! I see four so far:
1. Exposed in API, can be overridden with __array_ufunc__
2. One that converts everything to ndarray (or subclass); essentially the
current implementation;
3. One that does asduckarray
4. One that assumes all arguments are arrays.
Maybe h
Hi All,
Reading the discussion again, I've gotten somewhat unsure that it is
helpful to formalize a way to call an implementation that we can and
hopefully will change. Why not just leave it at __wrapped__? I think the
name is no worse and it is more obvious that one relies on something
private.
Very much second Joe's recommendations - especially trying NASA - which has
an amazing track record of open data also in astronomy (and a history of
open source analysis tools, as well as the "Astrophysics Data System").
-- Marten
Hi Ralf,
I'm sorry to hear the proposal did not pass the first round, but, having
looked at it briefly (about as much time as I would have spent had I been
on the panel), I have to admit I am not surprised: it is nice but nice is
not enough for a competition like this.
Compared to what will have
I somewhat share Nathaniel's worry that by providing
`__numpy_implementation__` we essentially get stuck with the
implementations we have currently, rather than having the hoped-for freedom
to remove all the `np.asarray` coercion. In that respect, an advantage of
using `_wrapped` is that it is clea
It may be relevant at this point to mention that the padding bytes do *not*
get copied - so you get a blob with possibly quite a lot of uninitialized
data. If anything, that seems a recipe for unexpected results. Are there
non-contrived examples where you would *want* this uninitialized blob?
Fran
Hi All,
An issue [1] about the copying of arrays with structured dtype raised a
question about what the expected behaviour is: does copy always preserve
the dtype as is, or should it remove padding?
Specifically, consider an array with a structure with many fields, say 'a'
to 'z'. Since numpy 1.1
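The behaviour as it stands, if I remember the 1.16 resolution correctly
(sketch):
```python
import numpy as np
from numpy.lib import recfunctions as rfn

a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'i8'), ('c', 'i8')])
sub = a[['a', 'c']]                            # view; 'b' becomes padding
print(sub.copy().dtype.itemsize)               # 24: copy keeps the padding
print(rfn.repack_fields(sub).dtype.itemsize)   # 16: padding removed explicitly
```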
Certainly have done `np.random.normal(size=2*n).view('c16')` very often. Makes
sense to just allow it to be generated directly. -- Marten
On Sat, Mar 30, 2019 at 6:24 PM Hameer Abbasi
wrote:
> On Friday, Mar 29, 2019 at 6:03 PM, Hameer Abbasi <
> einstein.edi...@gmail.com> wrote:
>
> On Friday, Mar 2
Hi Frédéric,
The problem with any environment type variable is that when you disable the
dispatch functionality, all other classes that rely on being able to
override a numpy function stop working as well, i.e., the behaviour of
everything from dask to astropy's Quantity would depend on that setti
Fantastic!
-- Marten
On Wed, Feb 27, 2019 at 1:19 AM Stefan van der Walt
wrote:
> Hi everyone,
>
> The team at BIDS would like to take on an intern from Outreachy
> (https://www.outreachy.org), as part of our effort to grow the NumPy
> developer community.
>
> The internship is similar to a GSo
Since numpy generally does not expose parts as modules, I think a separate
namespace for the exceptions makes sense. I prefer `np.exceptions` over
`np.errors`.
It might still make sense for that namespace to import from the different
parts, i.e., also have `np.core.exceptions`, np.polynomial.excep
There is a long-standing request to require an explicit opt-in for
dtype=object: https://github.com/numpy/numpy/issues/5353
-- Marten
Hi Juan,
I also use `broadcast_to` a lot, to save memory, but definitely have been
in a situation where in another piece of code the array is assumed to be
normal and writable (typically, that other piece was also written by me; so
much for awareness...). Fortunately, `broadcast_to` already return
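i.e.:
```python
import numpy as np

b = np.broadcast_to(np.arange(3), (4, 3))   # no memory copied
print(b.flags.writeable)                    # False: read-only by default
```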