None of your three options feels very obvious to me, unfortunately.
- Sebastian
> A one year time frame is pretty short in the context of a two decades
> old project and I believe the current council has too few people who
> have been around the community long enough to help unstuck d
On Mo, 2015-09-21 at 11:32 +0200, Sebastian Berg wrote:
> On So, 2015-09-20 at 11:20 -0700, Travis Oliphant wrote:
> > After long conversations at BIDS this weekend and after reading the
> > entire governance document, I realized that the steering council is
> > very large a
ng about ABI" it may sound like
"two years from now we will definitely break ABI", the curse being that
it does not matter that you and those who know numpy well know that it
is just you stressing strongly that we should seriously discuss it. I
have to admit, you sometimes sound a
old-time contributors who were not active in the past year(s).
I cannot form/change my opinion based on the previous discussion, because I
would like to get an idea of how everyone feels about these points first. Then
we can fight about details :)
- Sebastian
(sending from phone, so sorry about
icitely in governance. I am aware that
everyone wants to help, but right now I do not feel helped at all :).
- Sebastian
> Sorry if that means more work for you Fernando, because I know that you
> have become a very busy person, but I also know how much you care about
> the NumPy
nk we should include some relation to historic contributions into the
document. I think what might be nice to do would be for example to also
"seed" the emeritus list (if it is not too difficult).
It is somewhat unrelated to governance, but since we want it to be a
prominent document, it
ight in to keep the current times but do not feel very strongly.
About using the commit log to seed, I think there are some long-term
contributors (David Cournapeau maybe?), who never stopped doing quite a
bit but may not have merge commits. However, I think we can start off
with what we had, then I w
should then be ravelled, so this is indeed a bug.
>
> If I bisected correctly, the problematic change is this one:
>
Yeah, vdot uses `ravel`. That is correct, but it should only use it
after making sure the input is C-order, or make sure the output of ravel
is C-order (vdot was changed t
Hmmm, unless we want to make sure that the output of ravel is always
contiguous (which would be a difference from `.reshape(-1)`).
It would be a bit safer, and not a usel
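A quick way to see the difference mentioned here (a small illustrative array):

```python
import numpy as np

x = np.arange(10)[::2]     # a non-contiguous 1-D view

# ravel() (default C order) copies when needed, so its output is
# always contiguous...
r = x.ravel()
assert r.flags.c_contiguous

# ...while reshape(-1) happily hands back the non-contiguous view:
s = x.reshape(-1)
assert not s.flags.c_contiguous
```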
On Do, 2015-09-24 at 13:14 +0200, Sebastian Berg wrote:
> On Do, 2015-09-24 at 03:26 -0700, Stefan van der Walt wrote:
> > On 2015-09-24 00:17:33, Jens Jørgen Mortensen wrote:
> > > jensj@jordan:~$ python
> > > Python 2.7.9 (default, Apr 2 2015, 15:33:21)
On Do, 2015-09-24 at 10:45 +0200, Sebastian Berg wrote:
> On Do, 2015-09-24 at 00:22 -0700, Nathaniel Smith wrote:
> > On Wed, Sep 23, 2015 at 3:12 PM, Travis Oliphant
> > wrote:
> > >>
> > >> Here is a list of the current Contributors to the main NumPy
On Do, 2015-09-24 at 10:03 -0700, Nathaniel Smith wrote:
> On Sep 24, 2015 4:14 AM, "Sebastian Berg"
> wrote:
> >
> > On Do, 2015-09-24 at 03:26 -0700, Stefan van der Walt wrote:
> > > On 2015-09-24 00:17:33, Jens Jørgen Mortensen
> wrote:
On Mi, 2015-09-23 at 19:48 -0500, Travis Oliphant wrote:
>
>
> On Wed, Sep 23, 2015 at 6:19 PM, Charles R Harris
> wrote:
>
>
> On Wed, Sep 23, 2015 at 3:42 PM, Chris Barker
> wrote:
> On Wed, Sep 23, 2015 at 2:21 PM, Travis Oliphant
>
nt rules and do not like exceptions
much.
As I said, I will not get in the way of any consensus saying otherwise
though and I am sure there are many ways to change the current draft
that even I will like ;)!
- Sebastian
>
> I think this addresses most of the concerns, IBM is happy (eno
d some way to tell that numpy as a
PyDecimalDtype.
Now "object" would possibly be just a fallback to mean "figure out what
to use for each element". It would be a bit slower, but it would work
very generally, because numpy would not impose limits as such.
- Sebastian
> OTOH s
On Mi, 2015-09-30 at 00:01 -0700, Nathaniel Smith wrote:
> On Tue, Sep 29, 2015 at 2:07 PM, Sebastian Berg
> wrote:
> > On Di, 2015-09-29 at 11:16 -0700, Nathaniel Smith wrote:
> [...]
> >> In general I'm not a big fan of trying to do all kinds of guessing
n being
careful to implement all possible features ;).
It is not that I think we would not have consistent rules, etc. it is
just that we *want* to force code to be obvious. If someone has arrays
inside arrays, maybe he should be expected to specify that.
It actually breaks some logic (or cannot be i
My first guess would be that you have some old test files
lying around. Can you try cleaning up everything and reinstalling? It can
happen that old installed test files survive into the new version.
And most of all, thanks a lot Chuck!
- Sebastian
>
>
s change of the output order in advanced indexing in some
cases; it makes things faster in some cases and probably slower in others; what
is right seems very much non-trivial.
- Sebastian
>
> >>> np.column_stack((np.ones(10), np.ones((10, 2), order='F'))).flags
> C_CONTIGUOUS : F
Congrats, both of you ;).
On Sun Nov 1 04:30:27 2015 GMT+0330, Jaime Fernández del Río wrote:
> "Gruetzi!", as I just found out we say in Switzerland...
> On Oct 30, 2015 8:20 AM, "Jonathan Helmus" wrote:
>
> > On 10/28/2015 09:43 PM, Allan Haldane wrote:
> > > On 10/28/2015 05:27 PM, Nathanie
I bet it has all been said already, but to note just in case. In numpy itself
we use it mostly to determine the memory order of the *output* and not for
safety purposes. That is the macro of course, and I think telling people to use
flags.fnc in python is better.
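For reference, a tiny illustration of the flags.fnc check (arrays are illustrative):

```python
import numpy as np

a = np.ones((3, 4), order='F')

# fnc means "F-contiguous and not C-contiguous", i.e. genuinely
# Fortran memory order rather than merely both-at-once:
assert a.flags.fnc

# A 1-D array is both C- and F-contiguous, so fnc is False for it:
assert not np.ones(3).flags.fnc
```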
- Sebastian
On Mon Nov 2 08:52
t by broadcasting between `1` and `(1,)` analogous to indexing
the full array with usecols:
usecols=1 result:
array([2, 3, 4, 5])
usecols=(1,) result [1]:
array([[2, 3, 4, 5]])
since a scalar row (so just one row) is read and not a 2D array. I tend
to say it should be an array-like argument and n
On Di, 2015-11-10 at 10:24 +0100, Irvin Probst wrote:
> On 10/11/2015 09:19, Sebastian Berg wrote:
> > since a scalar row (so just one row) is read and not a 2D array. I tend
> > to say it should be an array-like argument and not a generalized
> > sequence argument, just wante
d only suggest that you also accept buffer interface
objects or array_interface stuff. Which in this case is really
unnecessary I think.
- Sebastian
>
> Ben Root
>
>
> On Tue, Nov 10, 2015 at 10:07 AM, Irvin Probst
> wrote:
> On 10/11/2015 14:17, Sebastian Berg
e very welcome, even if it is "I don't understand a
word" :). I know it is probably too short and, at least without
examples, not easy to understand.
Best,
Sebastian
against the flattening and even the array-like logic [1]
currently in the PR, it seems like arbitrary generality for my taste
without any obvious application.
As said before, the other/additional thing that might be very helpful is
trying to give a more useful error message.
- Sebastian
[1] Al
arr_r = arr.transpose((0, 1, 3, 2))
arr_r = arr_r.reshape((7, 24, -1))
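For the shapes from the thread, the transpose-then-reshape recipe can be checked against the quoted index mapping (arange values are just filler):

```python
import numpy as np

arr = np.arange(7 * 24 * 2 * 1024).reshape(7, 24, 2, 1024)
arr_r = arr.transpose((0, 1, 3, 2)).reshape((7, 24, -1))
assert arr_r.shape == (7, 24, 2048)

# Element (i, j, k, l) lands at (i, j, 2*l + k), matching
# [0,0,1,0] -> [0,0,1], [0,0,0,1] -> [0,0,2], [0,0,1,1] -> [0,0,3]:
assert arr_r[0, 0, 1] == arr[0, 0, 1, 0]
assert arr_r[0, 0, 2] == arr[0, 0, 0, 1]
assert arr_r[0, 0, 3] == arr[0, 0, 1, 1]
```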
- Sebastian
> [0,0,0,0] -> [0,0,0]
> [0,0,1,0] -> [0,0,1]
> [0,0,0,1] -> [0,0,2]
> [0,0,1,1] -> [0,0,3]
> ...
>
> What might be the simplest way to do this?
>
>
> A differ
On Di, 2015-11-17 at 13:49 -0500, Neal Becker wrote:
> Robert Kern wrote:
>
> > On Tue, Nov 17, 2015 at 3:48 PM, Neal Becker wrote:
> >>
> >> I have an array of shape
> >> (7, 24, 2, 1024)
> >>
> >> I'd like an array of
> >> (7, 24, 2048)
> >>
> >> such that the elements on the last dimension are
o have a look ;). The code is not the most readable one
in NumPy, and it would not surprise me a lot if you can find something
to tweak.
- Sebastian
>
> If I instrument Numpy (setting NPY_IT_DBG_TRACING and such), I see
> that when the add() ufunc is called, 'n' is copied into a t
On Di, 2015-11-24 at 16:49 -0800, Eli Bendersky wrote:
>
>
> On Mon, Nov 23, 2015 at 2:09 PM, Sebastian Berg
> wrote:
> On Mo, 2015-11-23 at 13:31 -0800, Eli Bendersky wrote:
> > Hello,
> >
> >
> > I'm trying t
warnings; warnings.simplefilter("always")
The examples from the NEP should all run fine, you can find the NEP
draft at:
https://github.com/numpy/numpy/pull/6256/files?short_path=01e4dd9#diff-01e4dd9d2ecf994b24e5883f98f789e6
I would be most happy about any comments or suggestions!
lice_object[2] = 3
>>> arr[slice_object]
and all of this code (numpy also has a lot of it), will probably have to
change the last line to be:
>>> arr[tuple(slice_object)]
So the implications of this might actually be more far-reaching than
one might think at first; or at le
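For concreteness, the pattern and its fix might look like this (the array shape and the index value 3 are made up for the example):

```python
import numpy as np

arr = np.arange(36).reshape(3, 3, 4)

# Build an index dynamically as a mutable list...
slice_object = [slice(None)] * arr.ndim
slice_object[2] = 3        # e.g. pick position 3 on the last axis

# ...then convert to a tuple before indexing -- the form that stays
# valid once list indices are no longer interpreted as tuples:
result = arr[tuple(slice_object)]
assert result.shape == (3, 3)
```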
and could raise an error which suggests manually uncompressing the file
when mmap_mode is given.
- Sebastian
> Can somebody confirm that?
>
> If I'm correct, the mmap_mode argument could be passed to the NpzFile
> class which could in turn perform the correct operation. One way to
e, but they can also be
unexpected/inconsistent when you start writing to the result array, so
you should not do it (and the array should preferably be read-only IMO,
as_strided itself does not do that).
But yes, there might be room for a function or so to make some stride
t
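As one concrete illustration of both points (np.broadcast_to is the read-only counterpart that already exists alongside as_strided):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(3)

# A zero-stride view repeats x's elements: many visible entries share
# one memory slot, so writing through the view is exactly the hazard
# described above.
v = as_strided(x, shape=(4, 3), strides=(0, x.strides[0]))
assert np.array_equal(v[1], x)

# np.broadcast_to builds the same kind of view, but read-only:
b = np.broadcast_to(x, (4, 3))
assert not b.flags.writeable
```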
On Di, 2015-12-15 at 08:56 +0100, Sebastian Berg wrote:
> On Di, 2015-12-15 at 17:49 +1100, Juan Nunez-Iglesias wrote:
> > Hi,
> >
> >
> > I've recently been using the following pattern to create arrays of a
> > specific repeating value:
> >
[3.14159265358979329],
> dtype=numpy.float64
> )
> ```
> (differing in the 18th overall digit) are reported equal by
> array_equal:
If you have some spare cycles, maybe you can open a pull request to add
np.isclose to the "See Also" section?
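For illustration, the difference in intent between the two functions (literals chosen to differ past float64 precision, as in the report):

```python
import numpy as np

# The two literals differ in the 18th digit, beyond float64's
# ~15-16 significant digits, so they parse to the very same double:
a = np.array([3.14159265358979329])
b = np.array([3.14159265358979312])
assert np.array_equal(a, b)

# np.isclose makes the tolerance explicit, hence the suggestion to
# cross-link it from array_equal's "See Also" section:
assert np.isclose(a, b).all()
```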
- Sebastian
ibutes
into 1.11.
So if you are interested in teaching and have suggestions for the names,
or have thoughts about subclasses, or... please share your thoughts! :)
Regards,
Sebastian
signature.asc
Description: This is a digitally signed message part
to do kahan summation, however, I think it
always assumes double precision for the p keyword argument, so as a
workaround at least, you have to sum to convert to and normalize it as
double.
- Sebastian
>
> On Fri, Dec 18, 2015 at 7:00 PM, Nathaniel Smith
> wrot
hon, but
Python's builtin array.array should get you there pretty much as well. Of
course it requires the C typecode (though that should not be hard to
get) and does not support strings.
- Sebastian
>
> In my experience, it's several times faster than using a builtin list
> from Cython,
id] = -1 # mark invalids with -1
In [73]: B_index
Out[73]: array([ 2, 0, 1, -1])
Anyway, I guess the arrays would likely have to be quite large for this
to beat list comprehension. And maybe doing the searchsorted the other
way around could be faster, no idea.
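Doing the searchsorted "the other way around" might look like this (a sketch; A and B as in the thread, assuming all values of A occur in B):

```python
import numpy as np

A = np.array([0, 0, 1, 3])
B = np.arange(8)
np.random.shuffle(B)

# Vectorized lookup of each A value's position in B, via argsort +
# searchsorted instead of a per-element list.index call:
sorter = np.argsort(B)
pos = np.searchsorted(B, A, sorter=sorter)
B_index = sorter[pos]
assert np.array_equal(B[B_index], A)
```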
- Sebastian
>
> Nicolas
>
On Mi, 2015-12-30 at 20:21 +0100, Nicolas P. Rougier wrote:
> In the end, I’ve only got the list comprehension to work as expected
>
> A = [0,0,1,3]
> B = np.arange(8)
> np.random.shuffle(B)
> I = [list(B).index(item) for item in A if item in B]
>
>
> But Mark's and Sebastian's methods do not seem t
xis` with `np.newaxis is None` for the same thing.
`None` inserts a new axes, it is documented to do so in the indexing
documentation, so I will ask you to check it if you have more
questions.
If you want a noop, you should probably use `...` or `Ellipsis`.
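Spelled out as a small check:

```python
import numpy as np

assert np.newaxis is None      # they are literally the same object

arr = np.arange(4)
# None/np.newaxis inserts a new axis; it is not a no-op:
assert arr[np.newaxis].shape == (1, 4)
assert arr[None].shape == (1, 4)
# For a no-op index, Ellipsis (`...`) leaves the shape alone:
assert arr[...].shape == (4,)
```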
- Sebastian
> __
Just a heads up, I am planning to put in Stephan's pull request (more
info, see original mail below) as soon as some minor things are
cleared. So if you have any objections or better ideas for the name,
now is the time.
- Sebastian
On Mi, 2015-11-04 at 23:42 -0800, Stephan Hoyer wrote:
> I
e, we could maybe even mention the
combination in the examples (i.e. broadcast_to example?).
- Sebastian
l that, but to me it is not
obvious that it would be the best thing to get rid of scalars completely
(getting rid of the code duplication is a different issue).
- Sebastian
somewhere, it could be different):
>>> np.random.set_state(('MT19937', random.getstate()[1][:-1],
>>> random.getstate()[1][-1]))
Will enable you to draw the same numbers with random.uniform and
np.random.uniform.
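As a runnable sketch of the state transplant (random.random/np.random.random_sample shown, which share the same double-generation scheme; the uniform variants build on it):

```python
import random
import numpy as np

random.seed(12345)
state = random.getstate()

# Transplant Python's Mersenne Twister state into NumPy, as quoted
# above: the 624-word key plus the position counter.
np.random.set_state(('MT19937', state[1][:-1], state[1][-1]))

# Both generators now produce the same stream of doubles:
assert random.random() == np.random.random_sample()
```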
- Sebastian
> Greg
>
> On Tue, Jan 19,
On Do, 2016-01-21 at 09:38 +, Robert Kern wrote:
> On Tue, Jan 19, 2016 at 5:35 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
> >
> > On Di, 2016-01-19 at 16:28 +, G Young wrote:
> > > In rand range, it raises an exception if low >= high
].
Anyway, should we attempt to do this? I admit that when trying to make it
work, even *with* the change, FutureWarnings suddenly pop up when you
make the warnings be given less often (I will guess that warning was
already issued at import time somewhere).
- Sebastian
[1] And at that a brand new future
On Do, 2016-01-21 at 16:15 -0800, Nathaniel Smith wrote:
> On Thu, Jan 21, 2016 at 4:05 PM, Sebastian Berg
> wrote:
> > Hi all,
> >
> > should we try to set FutureWarnings to errors in dev tests? I am
> > seriously annoyed by FutureWarnings getting lost all over for
On Do, 2016-01-21 at 16:51 -0800, Nathaniel Smith wrote:
> On Thu, Jan 21, 2016 at 4:44 PM, Sebastian Berg
> wrote:
> > On Do, 2016-01-21 at 16:15 -0800, Nathaniel Smith wrote:
> > > On Thu, Jan 21, 2016 at 4:05 PM, Sebastian Berg
> > > wrote:
> > > >
, ) # oh noe not ignore!
which also still prints other warnings.
- Sebastian
> On Jan 21, 2016 5:00 PM, "Sebastian Berg" > wrote:
> > On Do, 2016-01-21 at 16:51 -0800, Nathaniel Smith wrote:
> > > On Thu, Jan 21, 2016 at 4:44 PM, Sebastian Berg
> >
agree with this, or would it be a major inconvenience?
- Sebastian
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
appen.
Actually there is one more thing I might do, and that is issue a
UserWarning when the new array quite likely points to invalid memory.
- Sebastian
[1] as_strided does not currently support arr.flags.writeable = True for
its result array.
> On Sun, Jan 24, 2016 at 9:20 AM, Nathaniel Smith
ave experience of being badly bitten, myself. Would you think it is
fine if setting the flag to true would work in your case?
> On Sun, Jan 24, 2016 at 8:17 PM, Sebastian Berg
> wrote:
>
> > On So, 2016-01-24 at 13:00 +1100, Juan Nunez-Iglesias wrote:
> > > I'
On Mo, 2016-01-25 at 16:11 +0100, Sturla Molden wrote:
> On 23/01/16 22:25, Sebastian Berg wrote:
>
> > Do you agree with this, or would it be a major inconvenience?
>
> I think any user of as_strided should be considered a power user.
> This
> is an inherently dang
and the fact that it is hidden away with an underscore,
I am not sure we should prioritize that it would keep working,
considering that I am unsure that it ever did something very elegantly.
- Sebastian
> Jeremy
>
> From: NumPy-Discussion on behalf
> of Charles R Harris
> Reply-To:
warnings filters in tests. The current state makes finding warnings
given in our own tests almost impossible, in the best case they will
have to be fixed much much later when the change actually occurs, in
the worst case we never find our own real bugs.
So where to go? :)
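As a sketch of what such a test-suite policy could look like (plain warnings module, nothing numpy-specific):

```python
import warnings

# Escalate FutureWarning to an error inside a controlled scope, while
# other warnings stay untouched -- so our own tests cannot silently
# swallow warnings about upcoming changes:
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    try:
        warnings.warn("behaviour will change", FutureWarning)
        escalated = False
    except FutureWarning:
        escalated = True
assert escalated
```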
- Sebastian
[1] I ne
er in the @ case (since it
has SSE2 optimization by using einsum, while np.dot does not do that).
Btw. thanks for all the work getting this done Chuck!
- Sebastian
>
>
> np.__version__
> Out[39]: '1.11.0.dev0+Unknown'
>
>
> %timeit A @ c
> 1 loops, best
dard about this, but I would expect that for a library
such as numpy, it makes sense to change. But, if downstream uses
warning filters with modules, we might want to reconsider for example.
- Sebastian
are
already correct in this regard automatically.
Anyway, I still think it is worth it, even if in practice a lot of
warnings are such things as ufunc warnings from inside a python func.
And there is no real way to change that, that I am aware of, unless
maybe we add a warning_stacklevel argument to
On Sa, 2016-01-30 at 20:27 +0100, Derek Homeier wrote:
> On 27 Jan 2016, at 1:10 pm, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
> >
> > On Mi, 2016-01-27 at 11:19 +, Nadav Horesh wrote:
> > > Why the dot function/method is slower than @ on python 3
You
can create a new array viewing the same data that is just larger? Since
you have the mmap, that would be creating a new view into it.
I.e. your "array" would be the memmap, and to use it, you always rewrap
it into a new numpy array.
Other than that, you would have to mess with th
nder the old iteration protocol
>
Numpy currently uses PySequence_Fast, but it has to do a two pass
algorithm (find dtype+dims), and the range is converted twice to list
by this call. That explains the speed advantage of converting to list
manually.
- Sebastian
> np.array(C())
> ==
ow, and sometimes it is hard to make good calls. It is not the
easiest code base and any feedback or nudging is important. A new
release is about to come out, and if you feel there is a serious
regression, we may want to push for fixing it (or even better, you may
have time to suggest a fix
1.11 here.
>
The reason for this part is that `arr[np.array([1])]` is very different
from `arr[np.array(1)]`. For `list[np.array([1])]` if you allow
`operator.index(np.array([1]))` you will not get equivalent results for
lists and arrays.
The normal array result cannot work for lists. We had ope
what I think of right now). That NEP and
code is sitting there after all with a decent chunk done and pretty
much working (though relatively far from finished with testing and
subclasses). Plus we have to make sure we get the details right, and
there a talk may really help too :).
- S
get huge numbers because you
happened to have a `low=0` in there. Especially your point 2) seems
confusing. As for 3) if I see `np.random.randint(high=3)` I think I
would assume [0, 3)....
Additionally, I am not sure the maximum int range is such a common need
anyway?
- Sebastian
> --
>
On Mi, 2016-02-17 at 22:10 +0100, Sebastian Berg wrote:
> On Mi, 2016-02-17 at 20:48 +, Robert Kern wrote:
> > On Wed, Feb 17, 2016 at 8:43 PM, G Young
> > wrote:
> >
> > > Josef: I don't think we are making people think more. They're
> > > a
doing fancy stuff, and I guess the idea is likely a bit too fancy for
wide application.
- Sebastian
> On Wed, Feb 17, 2016 at 9:27 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
> > On Mi, 2016-02-17 at 22:10 +0100, Sebastian Berg wrote:
> > > On Mi, 2016-02
sing 1 for all zeros in both
input and output shape for the -1 calculation), maybe we could do it. I
would like someone to think about it carefully, checking that it would not also
allow some unexpected generalizations. And at least I am getting a
BrainOutOfResourcesError right now trying to figure that out :)
1 is to use 1 and, if the shape is 0,
convert the 1 back to 0. But it is starting to sound a bit tricky,
though I think it might be straightforward (i.e. no real traps, and
when it works it is always what you expect).
The main point is, whether you can design cases where the conversion
back to 0 hides
On Di, 2016-02-23 at 21:06 +0100, Sebastian Berg wrote:
> On Di, 2016-02-23 at 14:57 -0500, Benjamin Root wrote:
> > I'd be more than happy to write up the patch. I don't think it
> > would
> > be quite like make zeros be ones, but it would be along those
> >
nking maybe it shows something.
Unfortunately, I got the error below midway; I ran it successfully
before (with only minor obvious leaks due to things like module-wide
strings), I think. My guess is, the error does not say much at all,
but I have no clue :) (running without track-origins now, maybe it
t; where a simplest available algorithm for detecting
> if arrays overlap was added. However, this is not yet
> utilized in ufuncs. An initial attempt to sketch what
> should be done is at https://github.com/numpy/numpy/issues/6272
> and issues referenced therein.
>
Since I like the id
[[20], [21, 21, 23, 24], [25, 26, 27], [28, 29]]],
> > dtype=object)
> >
>
> Apply [`np.stack`](http://docs.scipy.org/doc/numpy-1.10.0/reference/g
> enerated/numpy.stack.html#numpy.stack)
> to the result. It will merge the arrays the way you want.
>
On Mi, 2016-03-23 at 10:02 -0400, Joseph Fox-Rabinovitz wrote:
> On Wed, Mar 23, 2016 at 9:37 AM, Ibrahim EL MEREHBI
> wrote:
> > Thanks Eric. I already checked that. It's not what I want. I think
> > I wasn't
> > clear about what I wanted.
> >
> > I want to split each column but I want to do it
ner cases).
And if you search for what you want to do first, you may find faster
solutions easily, batteries included and all, there are a lot of tools
out there. The other point is, don't optimize much if you don't know
exactly what you need to optimize.
- Sebastian
>
> *
On Di, 2016-04-05 at 20:19 +0200, Sebastian Berg wrote:
> On Di, 2016-04-05 at 09:48 -0700, mpc wrote:
> > The idea is that I want to thin a large 2D buffer of x,y,z points
> > to
> > a given
> > resolution by dividing the data into equal sized "cubes" (i.e.
ode obviously looks easier than before? With the `@` operator that was
before? With the `@` operator that was the case, with the "dimension
adding logic" I am not so sure, plus it seems it may add other
pitfalls.
- Sebastian
> >>> np.concatenate(([[1,2,3]], [4,5,6]))
> Traceback (most recent call last):
gt; either. It's a
> common typo that catches intermediate users who know about
> broadcasting
> semantics but weren't keeping close enough track of the
> dimensionality of the
> different intermediate expressions in their code.
>
Yes, but as noted in
On Do, 2016-04-07 at 13:29 -0400, josef.p...@gmail.com wrote:
>
>
> On Thu, Apr 7, 2016 at 1:20 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
> > On Do, 2016-04-07 at 11:56 -0400, josef.p...@gmail.com wrote:
> > >
> > >
> >
> >
create your array in python with the
`order="F"` flag. NumPy has a tendency to prefer C-order and uses
it as the default even when doing something with an "F"-ordered array.
That said, I have never used f2py, so these are just well founded
guesses.
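A minimal illustration of both points (creation in Fortran order, and the C-order preference of e.g. ndarray.copy):

```python
import numpy as np

# Creating the array in Fortran order, as suggested for f2py use:
a = np.zeros((3, 4), order='F')
assert a.flags.f_contiguous

# Several operations prefer C order by default; .copy() for instance:
assert a.copy().flags.c_contiguous
```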
- Sebastian
> Is the
am just poking at it here, could be all wrong.
- Sebastian
On Mi, 2016-04-20 at 19:22 +, Steve Mitchell wrote:
> When writing custom PyArray_ArrFuncs getitem() and setitem(), do I
> need to acquire the GIL, or has it been done for me already by the
> caller?
>
> --S
g as
"easy" but I would not guarantee all of them are, sometimes there are
unexpected difficulties or it is easy if you already know where to
look).
- Sebastian
> Regards,
> Saumyajit
>
> Saumyajit Dey
> Junior Undergraduate Student:
> Department of Computer Sci
ready achieves everything that the
fftcache was designed for and we could even just disable it as default?
The complexity addition is a bit annoying I must admit, on python 3
functools.lru_cache could be another option, but only there.
- Sebastian
>
> Cheers!
>
> Lion
> ___
difficulties with this?
Regards,
Sebastian
ducts:
> > vecmul (similar to matmul, but probably too ambiguous)
> > dot_product
> > inner_prod
> > inner_product
> >
> I was using mulmatvec, mulvecmat, mulvecvec back when I was looking
> at this. I suppose the mul could also go in the middle, or maybe
> c
On Di, 2016-06-07 at 00:32 +0200, Jaime Fernández del Río wrote:
> On Mon, Jun 6, 2016 at 9:35 AM, Sebastian Berg s.net> wrote:
> > On So, 2016-06-05 at 19:20 -0600, Charles R Harris wrote:
> > >
> > >
> > > On Sun, Jun 5, 2016 at 6:41 PM, Stephan Hoyer
>
it windows is a bit odd.
> Anyway, hopefully that's not too off-topic.
> Best,
I agree, at least on python 3 (the reason is that on python 3 the subclass
thingy goes away, so it is less likely to break anything). I think we
could have a shot at this, it is quirky, but th
bviously even this is not 100%
> true, but I think it is the original intent.
>
Except for int types, which force a result type large enough to hold
the input value.
> My suspicion is that a better rule would be: *Python* types (int,
> float, bool) are treated as having an unspecified width, but all
> numpy
> types/dtypes are treated the same regardless of whether they're a
> scalar or not. So np.int8(2) * 2 would return an int8, but np.int8(2)
> * np.int64(2) would return an int64. But this is totally separate
> from
> the issues around **, and would require a longer discussion and
> larger
> overhaul of the typing system.
>
I agree with that. The rule makes sense for python types, but somewhat
creates oddities for numpy types and could probably just be made more
array-like there.
- Sebastian
> -n
>
Just if you are curious why it is an error at the moment: we can't have
it be filled in by Python as the not-in-place version (i.e. meaning
`M = M @ P`), but copying over the result is a bit annoying and nobody
was quite sure about it, so it was delayed.
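To see why it cannot just be filled in by Python, note that the spelled-out form rebinds the name rather than writing into the buffer (small illustrative matrices):

```python
import numpy as np

M = np.arange(4.0).reshape(2, 2)
P = np.eye(2)
M_before = M

# `M = M @ P` creates a new array and rebinds M; the original buffer
# is untouched, which is not what an in-place `@=` would promise:
M = M @ P
assert M is not M_before
assert np.array_equal(M, M_before)   # values equal (P is identity)
```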
- Sebastian
somewhat more sense would be to compare some of the
benchmarks `python runtests.py --bench` if you have airspeed velocity
installed. While not extensive, a lot of those things at least do test
more typical use cases. Though in any case I think the user should
probably just test some other thing.
- S
deprecate the 3D case, but then it is likely
more trouble than gain.
- Sebastian
> ```
> np.array(a, copy=False, ndmin=n)
> ```
> or, for a list of inputs,
> ```
> [np.array(a, copy=False, ndmin=n) for a in input_list]
> ```
>
> All the best,
>
> Marten
> _
n. If 1D, fill in `1` and `2`, if 2D, fill in only `2` (0D,
add everything of course).
However, I have my doubts that it is actually easier to understand than
to write it yourself ;).
- Sebastian
> don't know how many dimensions are going to be added. If you knew,
> then you wouldn'
n send a few notes.
Have some nice last days at SciPy!
- Sebastian
> On Thursday, July 14, 2016, Nathan Goldbaum
> wrote:
> >
> >
> > On Thu, Jul 14, 2016 at 11:05 AM, Nathaniel Smith
> > wrote:
> > > Where is "the downstairs lobby"? I can
f all or most of numpy can be easily put into it.
Anyway, it seems like a great project to have as much support for type
annotations as possible. I have never used them, but with editors
picking up on these things it sounds like it could be very useful in
the future.
- Sebastian
> We're
Hi all,
I am still pondering whether or not (and if which days) to go to
EuroScipy. Who else is there and would like to discuss a bit or
whatever else?
- Sebastian
:
tmp = a[np.array([1,6,5])] + 1
a[np.array([1,6,5])] = tmp
this expansion is done by Python, without any involvement of numpy at
all. This is different from `arr += 1`, which is specifically defined
and translates to `np.add(arr, 1, out=arr)`.
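The expansion can be observed directly; np.add.at is the unbuffered in-place counterpart (the repeated index below is added just for illustration):

```python
import numpy as np

a = np.zeros(8)
idx = np.array([1, 6, 5, 1])   # note the repeated index 1

# `a[idx] += 1` expands to a get followed by a set, so the repeated
# index is only incremented once -- the buffered temporary wins:
a[idx] += 1
assert a[1] == 1.0             # not 2.0, despite 1 appearing twice

# np.add.at performs true unbuffered in-place addition instead:
b = np.zeros(8)
np.add.at(b, idx, 1)
assert b[1] == 2.0
```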
- Sebastian
> Best,
> ab
>
in the effort (I suppose it is likely
only a few places). I am not sure how easy they are on the C side, but probably
not difficult at all.
- Sebastian
> Thanks,
> Stuart
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@
caught earlier) -> convert to `intp`
array and do integer indexing.
Now you might wonder why, but probably it is quite simply because
boolean indexing was tacked on later.
- Sebastian
> In my attempt to reproduce the poster's results, I got the following
> warning:
> Future