Maybe they should have written their code with a **kwargs parameter that
consumes all keyword arguments, rather than assuming that no keyword
arguments would ever be added? The problem with this approach in general is
that it makes the code unnecessarily convoluted.
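A minimal sketch of the trade-off being described, using hypothetical functions (the names and the `normed` keyword are illustrative, not any real API):

```python
def histogram_v1(data, bins=10, **kwargs):
    # **kwargs swallows unknown keywords: callers keep working when new
    # keywords appear upstream, but a typo like `bin=20` is silently ignored.
    return {"bins": bins, "n": len(data)}

def histogram_v2(data, bins=10):
    # Strict signature: typos raise TypeError, but any keyword added
    # upstream breaks wrappers written against the old signature.
    return {"bins": bins, "n": len(data)}

lenient = histogram_v1([1, 2, 3], bins=5, normed=True)  # extra kw ignored
try:
    histogram_v2([1, 2, 3], normed=True)
    strict_raised = False
except TypeError:
    strict_raised = True
```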
On Tue, May 5, 2015 at 1:55 PM, Nathaniel Sm
implementation
to make it fast. What I want is for that conversion to be automated. I'm
still evaluating how to best achieve that.
On Tue, Apr 28, 2015 at 6:08 AM, Francesc Alted wrote:
> 2015-04-28 4:59 GMT+02:00 Neil Girdhar :
>
>> I don't think I'm asking for so much.
bits of numexpr that
I like with my code. For my purpose, this would have been the more ideal
design.
On Mon, Apr 27, 2015 at 10:47 PM, Nathaniel Smith wrote:
> On Apr 27, 2015 5:30 PM, "Neil Girdhar" wrote:
> >
> >
> >
> > On Mon, Apr 27, 2015 at 7:42 PM, Na
Wow, cool! Are there any users of this package?
On Mon, Apr 27, 2015 at 9:07 PM, Alexander Belopolsky
wrote:
>
> On Mon, Apr 27, 2015 at 7:14 PM, Nathaniel Smith wrote:
>
>> There's no way to access the ast reliably at runtime in python -- it gets
>> thrown away during compilation.
>
>
> The "
On Mon, Apr 27, 2015 at 7:42 PM, Nathaniel Smith wrote:
> On Mon, Apr 27, 2015 at 4:23 PM, Neil Girdhar
> wrote:
> > I was told that numba did similar ast parsing, but maybe that's not true.
> > Regarding the ast, I don't know about reliability, but take a look
Also, FYI: http://numba.pydata.org/numba-doc/0.6/doc/modules/transforms.html
It appears that numba gets the ast the same way pyautodiff does, and only
gets the ast from source code as a fallback?
On Mon, Apr 27, 2015 at 7:23 PM, Neil Girdhar wrote:
> I was told that numba did similar ast pars
unreliable for some reason? From a
usability standpoint, I do think that's better than feeding in strings,
which:
* are not syntax highlighted, and
* require porting code from regular numpy expressions to numexpr strings
(applying a decorator is so much easier).
Best,
Neil
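For reference, the decorator approach under discussion boils down to recovering the function's AST. A minimal sketch; in a real decorator the source would come from inspect.getsource(func), which is exactly the step that can fail and makes the approach "unreliable":

```python
import ast

# A decorator would obtain `source` via inspect.getsource(func); parsing a
# string here keeps the sketch self-contained.
source = """
def f(x, y):
    return x * y + 1
"""
tree = ast.parse(source)
func_def = tree.body[0]          # the FunctionDef node for f
expr = func_def.body[0].value    # the BinOp behind `return x * y + 1`
```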
On Mon, Apr 27,
I've always wondered why numexpr accepts strings rather than looking at a
function's source code, using ast to parse it, and then transforming the
AST. I just looked at another project, pyautodiff, which does that. And I
think numba does that for llvm code generation. Wouldn't it be nicer to
just a
On Fri, Apr 17, 2015 at 12:09 PM, wrote:
> On Fri, Apr 17, 2015 at 11:22 AM, Neil Girdhar
> wrote:
> >
> >
> > On Fri, Apr 17, 2015 at 10:47 AM, wrote:
> >>
> >> On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
> >> wrote:
> >
This relationship between outer and dot only holds for vectors. For
tensors, and other kinds of vector spaces, I'm not sure if outer products
and dot products have anything to do with each other.
On Fri, Apr 17, 2015 at 11:11 AM, wrote:
> On Fri, Apr 17, 2015 at 10:59 AM, Sebastian Berg
> wrote
On Fri, Apr 17, 2015 at 10:47 AM, wrote:
> On Fri, Apr 17, 2015 at 10:07 AM, Sebastian Berg
> wrote:
> > On Do, 2015-04-16 at 15:28 -0700, Matthew Brett wrote:
> >> Hi,
> >>
> >
> >>
> >> So, how about a slight modification of your proposal?
> >>
> >> 1) Raise deprecation warning for np.outer f
Right.
On Thu, Apr 16, 2015 at 6:44 PM, Nathaniel Smith wrote:
> On Thu, Apr 16, 2015 at 6:37 PM, Neil Girdhar
> wrote:
> > I can always put np.outer = np.multiply.outer at the start of my code to
> get
> > what I want. Or could that break things?
>
> Please don
On Thu, Apr 16, 2015 at 6:32 PM, Nathaniel Smith wrote:
> On Thu, Apr 16, 2015 at 6:19 PM, Neil Girdhar
> wrote:
> > Actually, looking at the docs, numpy.outer is *only* defined for 1-d
> > vectors. Should anyone who used it with multi-dimensional arrays have an
> >
That sounds good to me.
I can always put np.outer = np.multiply.outer at the start of my code to
get what I want. Or could that break things?
On Thu, Apr 16, 2015 at 6:28 PM, Matthew Brett
wrote:
> Hi,
>
> On Thu, Apr 16, 2015 at 3:19 PM, Neil Girdhar
> wrote:
> > Actual
Actually, looking at the docs, numpy.outer is *only* defined for 1-d
vectors. Should anyone who used it with multi-dimensional arrays have an
expectation that it will keep working in the same way?
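The docs' restriction can be seen directly: np.outer ravels higher-dimensional inputs, while np.multiply.outer keeps their axes (a quick check, assuming current NumPy behavior):

```python
import numpy as np

a = np.arange(4).reshape(2, 2)
b = np.arange(4).reshape(2, 2)

flat = np.outer(a, b)             # inputs are raveled: shape (4, 4)
tensor = np.multiply.outer(a, b)  # full outer product: shape (2, 2, 2, 2)
```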
On Thu, Apr 16, 2015 at 10:53 AM, Neil Girdhar
wrote:
> Would it be possible to deprec
On Wed, Apr 15, 2015 at 6:08 PM, wrote:
> >> On Wed, Apr 15, 2015 at 5:31 PM, Neil Girdhar
> wrote:
> >>> Does it work for you to set
> >>>
> >>> outer = np.multiply.outer
> >>>
> >>> ?
> >>>
> >>> It
cache miss standpoint, I think p2 is better? Anyway, it might be worth
coding it up to verify any performance advantages. I'm not sure whether it
should be in numpy, since it really should accept an iterable rather than a
numpy vector, right?
Best,
Neil
On Wed, Apr 15, 2015 at 12:40 PM, Jaime
I don't understand. Are you at pycon by any chance?
On Wed, Apr 15, 2015 at 6:12 PM, wrote:
> On Wed, Apr 15, 2015 at 6:08 PM, wrote:
> > On Wed, Apr 15, 2015 at 5:31 PM, Neil Girdhar
> wrote:
> >> Does it work for you to set
> >>
> >> outer =
Does it work for you to set
outer = np.multiply.outer
?
It's actually faster on my machine.
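For 1-d inputs the alias really is a drop-in replacement (a quick sanity check):

```python
import numpy as np

outer = np.multiply.outer  # the proposed alias

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

# np.outer and np.multiply.outer agree on 1-d inputs.
same = np.array_equal(np.outer(a, b), outer(a, b))
```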
On Wed, Apr 15, 2015 at 5:29 PM, wrote:
> On Wed, Apr 15, 2015 at 7:35 AM, Neil Girdhar
> wrote:
> > Yes, I totally agree. If I get started on the PR to deprecate np.outer,
>
with n=100 bins. I don't think it does O(n) computations per point. I
think it's more like O(log(n)).
Best,
Neil
On Wed, Apr 15, 2015 at 10:02 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> On Wed, Apr 15, 2015 at 4:36 AM, Neil Girdhar
> wrote:
>
>
Yeah, I'm not arguing, I'm just curious about your reasoning. That
explains why not C++. Why would you want to do this in C and not Python?
On Wed, Apr 15, 2015 at 1:48 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:
> On Tue, Apr 14, 2015 at 6:16 PM, Neil Girdha
22:18 -0400, Nathaniel Smith wrote:
> > I am, yes.
> >
> > On Apr 14, 2015 9:17 PM, "Neil Girdhar" wrote:
> > Ok, I didn't know that. Are you at pycon by any chance?
> >
> > On Tue, Apr 14, 2015 at 7:16 PM, Nathaniel Smith
> >
PM, Jaime Fernández del Río <
> jaime.f...@gmail.com> wrote:
>
>> On Tue, Apr 14, 2015 at 4:12 PM, Nathaniel Smith wrote:
>>
>>> On Mon, Apr 13, 2015 at 8:02 AM, Neil Girdhar
>>> wrote:
>>> > Can I suggest that we instead add the P-squar
Ok, I didn't know that. Are you at pycon by any chance?
On Tue, Apr 14, 2015 at 7:16 PM, Nathaniel Smith wrote:
> On Tue, Apr 14, 2015 at 3:48 PM, Neil Girdhar
> wrote:
> > Yes, I totally agree with you regarding np.sum and np.product, which is
> why
> > I di
>
>> On Mon, Apr 13, 2015 at 8:02 AM, Neil Girdhar
>> wrote:
>> > Can I suggest that we instead add the P-square algorithm for the dynamic
>> > calculation of histograms?
>> > (
>> http://pierrechainais.ec-lille.fr/Centrale/Option_DAD/IMPACT_fil
Yes, you're right. Although in practice, people almost always want
adaptive bins.
On Tue, Apr 14, 2015 at 5:08 PM, Chris Barker wrote:
> On Mon, Apr 13, 2015 at 5:02 AM, Neil Girdhar
> wrote:
>
>> Can I suggest that we instead add the P-square algorithm for the dynam
, which is the
resolution of the bins throughout the domain.
Best,
Neil
On Sun, Apr 12, 2015 at 4:02 AM, Ralf Gommers
wrote:
>
>
> On Sun, Apr 12, 2015 at 9:45 AM, Jaime Fernández del Río <
> jaime.f...@gmail.com> wrote:
>
>> On Sun, Apr 12, 2015 at 12
Yes, I totally agree with you regarding np.sum and np.product, which is why
I didn't suggest np.add.reduce, np.multiply.reduce. I wasn't sure whether
cumsum and cumprod might be on the line in your judgment.
Best,
Neil
On Tue, Apr 14, 2015 at 3:37 PM, Nathaniel Smith wrote:
> On
documenting.
Similarly, cumprod is just np.multiply.accumulate.
Best,
Neil
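The identities mentioned are easy to check:

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# cumsum and cumprod are just the corresponding ufunc accumulations.
same_sum = np.array_equal(np.cumsum(x), np.add.accumulate(x))
same_prod = np.array_equal(np.cumprod(x), np.multiply.accumulate(x))
```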
On Sat, Apr 11, 2015 at 12:49 PM, Nathaniel Smith wrote:
> Documentation and a call to warnings.warn(DeprecationWarning(...)), I
> guess.
>
> On Sat, Apr 11, 2015 at 12:39 PM, Neil Girdhar
> wrote:
> > I
run took 25.59 times longer than the fastest. This could mean
that an intermediate result is being cached
100 loops, best of 3: 834 ns per loop
On Tue, Apr 14, 2015 at 7:42 AM, Neil Girdhar wrote:
> Okay, but by the same token, why do we have cumsum? Isn't it iden
Hello,
Is this desired behaviour or a regression or a bug?
http://stackoverflow.com/questions/26497656/how-do-i-align-a-numpy-record-array-recarray
Thanks,
Neil
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org
> Hi,
>
> We came across this bug while using np.cross on 3D arrays of 2D vectors.
>
> What version of numpy are you using? This should already be solved in numpy
> master, and be part of the 1.9 release. Here's the relevant commit,
> although the code has been cleaned up a bit in later ones:
>
Hi,
We came across this bug while using np.cross on 3D arrays of 2D vectors.
The first example shows the problem and we looked at the source for np.cross
and believe we found the bug - an unnecessary swapaxes when returning the
output (comment inserted in the code).
Thanks
Neil
# Example
Is this what I want? https://github.com/numpy/numpy/pull/3987
On Sun, Oct 27, 2013 at 9:42 PM, Neil Girdhar wrote:
> Yeah, I realized that I missed that and figured it wouldn't matter since
> it was my own master and I don't plan on making other changes to numpy. If
> you
, Oct 27, 2013 at 9:38 PM, Charles R Harris wrote:
>
>
>
> On Sun, Oct 27, 2013 at 7:23 PM, Neil Girdhar wrote:
>
>> This is my first code review request, so I may have done some things
>> wrong. I think the following URL should work?
>> https://github.com/MisterShe
This is my first code review request, so I may have done some things wrong.
I think the following URL should work?
https://github.com/MisterSheik/numpy/compare
Best,
Neil
Since I am trying to add a "printoptions" context manager, I would like to
test it. Should I add tests, or can I somehow use it from an ipython shell?
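A minimal sketch of such a context manager, built on the existing get/set functions (the actual patch may differ):

```python
import contextlib
import numpy as np

@contextlib.contextmanager
def printoptions(**kwargs):
    # Save the current options, apply the overrides, and restore on exit
    # even if the body raises.
    saved = np.get_printoptions()
    try:
        np.set_printoptions(**kwargs)
        yield
    finally:
        np.set_printoptions(**saved)
```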
On Sun, Oct 27, 2013 at 7:12 PM, Charles R Harris wrote:
>
>
>
> On Sun, Oct 27, 2013 at 4:59 PM, Neil Girdhar wrote
Ah, sorry, didn't see that I can do that from runtests!! Thanks!!
On Sun, Oct 27, 2013 at 7:13 PM, Neil Girdhar wrote:
> Since I am trying to add a "printoptions" context manager, I would like to
> test it. Should I add tests, or can I somehow use it from an ipython she
How do I test a patch that I've made locally? I can't seem to import numpy
locally:
Error importing numpy: you should not try to import numpy from
its source directory; please exit the numpy source tree, and relaunch
your python interpreter from there.
, numpy.printoptions,
etc. could expose the dictionary directly. This would make the get methods
redundant.
Best,
Neil
ine a function between() - e.g.
https://bitbucket.org/nhmc/pyserpens/src/4e2cc9b656ae/utilities.py#cl-88
Then you can use
between(a, 4, 8)
instead of
(4 < a) & (a < 8),
which I find less readable and more difficult to type.
Neil
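The between() helper is essentially a one-liner over the masking expression (a sketch modeled on the linked code, not a copy of it):

```python
import numpy as np

def between(a, lo, hi):
    """Element-wise test for lo < a < hi (exclusive on both ends)."""
    a = np.asarray(a)
    return (lo < a) & (a < hi)

mask = between([3, 5, 7, 9], 4, 8)
```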
y/reference/generated/numpy.where.html)
I ask because people often post to the list needing in1d() after not being
able to find it via the docs, so it would be nice to add references in
the places people go looking for it.
Neil
Hi,
If someone with commit access has the chance, could they take a
look at ticket 1603:
http://projects.scipy.org/numpy/ticket/1603
and apply it if it looks ok? It speeds up in1d(a, b) a lot for
the common use case where len(b) << len(a).
Thanks,
(): attribute name must be string
This was filed and fixed during the python bug weekend
(http://bugs.python.org/issue10465), so it shouldn't be a problem with
a current 3.2 checkout.
--
Neil Muller
drnlmul...@gmail.com
I've got a gmail account. Why haven't I become cool?
Oops, I meant to save my post but I sent it instead - doh!
In the end, the question was: is it worth adding start= and stop= markers to
loadtxt to allow grabbing sections of a file between two known headers? I
imagine it's something that people come up against regularly.
Thanks,
Hi,
I've been looking around and couldn't spot anything on this. Quite often I
want to read a homogeneous block of data from within a file. The skiprows
option is great for missing out the section before the data starts, but if
there is anything below then loadtxt will choke. I wondered if there w
my laptop.
If someone with commit access could take a look and and apply it if ok, that
would be great.
Thanks,
Neil
2). In this case the current in_1d is pretty much
always faster than kern_in().
Neil
Rob Speer MIT.EDU> writes:
> It's not just about the rows: a 2-D datarray can also index by
> columns, an operation that has no equivalent in a 1-D array of records
> like your example.
rec['305'] effectively indexes by column. This is one of the main attractions
of structured/record arrays.
prone. It would be much nicer to be able
> to write:
>
> data.ax_day.mean(axis=0)
> data.ax_hour.mean(axis=0)
>
Thanks, that's a really nice description. Instead of
data.ax_day.mean(axis=0)
I think it would be clearer to do somethi
Robert Kern gmail.com> writes:
>
> On Sun, Jul 11, 2010 at 11:36, Rob Speer mit.edu> wrote:
> >> But the utility of named indices is not so clear
> >> to me. As I understand it, these new arrays will still only be
> >> able to have a single type of data (one of float, str, int and so
> >> on).
ndo or Gael could share an example where arrays with named
axes and indices are especially useful, for the peanut gallery's
benefit?
Cheers, Neil
a bad idea to have a function that returns a
> > min/max tuple?
>
> +1. More than once I've wanted exactly such a function.
>
I also think this would be useful. For what it's worth, IDL also has a function
called minmax() that does this
My problem was I didn't know I needed to get around it :) But thanks for the
suggestion, I'll use that in future when I need to switch between chararrays
and
ndarrays.
Neil
'.
What is the best way to ensure this doesn't happen to other people? We could
change the array set operations to special-case chararrays, but this seems like
an ugly solution. Is it possible to change something in pyfits to avoid this?
Neil
'
Note the string values stored in memory are unchanged. This behaviour caused a
bug in a program I've been writing, and seems like a bad idea in general. Is it
intentional?
Neil
you
might
be able to adapt it:
http://bitbucket.org/nhmc/pyserpens/src/tip/coord.py
The matching function starts on line 166.
Disclaimer: I haven't looked at the kdtree code yet, that might be a better
approach.
Neil
False], dtype=bool)
Be careful of whitespace when doing string comparisons; "tutu " != "tutu" (I've
been burnt by this in the past).
in1d() is only in more recent versions of numpy (1.4+). If you can't upgrade,
you can cut and paste the in1d() and unique() ro
ss fortran libraries. See the first
paragraph of:
http://www.sagemath.org/doc/numerical_sage/ctypes.html
You may have to convert the .a library to a .so library.
Neil
Francesc Alted pytables.org> writes:
> In [10]: array = np.random.random((3, 1000))
>
> then the time drops significantly:
>
> In [11]: time (array[0] > x_min) & (array[0] < x_max) & (array[1] > y_min) & (array[1] < y_max)
> CPU times: user 0.15 s, sys: 0.01 s, total: 0.16 s
> Wall time: 0.16 s
> Out[12]: array([False, Fals
ible change? A warning in the next
release, then change it in the following release?
Neil
On 2010-02-02 20:31 , Robert Kern wrote:
> On Tue, Feb 2, 2010 at 20:23, Neil Martinsen-Burrell
> wrote:
>
>> I don't understand Travis's comment that "datetime is just a
>> place-holder for data".
>
> That's not a direct quote and is a m
line
commit, right?) and releasing an ABI compatible 1.4.1. That should
probably be accompanied by a roadmap hashed out at this year's SciPy
conference that takes us up through adding datetime, Python 3 and a
possible major rewrite (that will add the indirection necessary to make
future A
629 ,
868.08467329, 52.38320341, 12063.64687812, 29930.60881439,
12236.06517635, 10221.89370909, 2414.9534157 , 13039.6113439 ,
22967.67537214, 15140.04385727, 2639.67251757, 26461.80402013,
3218.73142713, 15963.71209963, 11755.35677893, 11551.31295568,
list, posted my question, and then went back to dissertation writing
for a few days. When I looked up, there were 18 answers.
I'll try getting python from python.org and/or building it all from scratch.
Thanks again,
Neil
y ideas how to debug this?
Some info:
wei...@neil-weisenfeld-macbook-pro:~
[507]$ which python
/usr/bin/python
wei...@neil-weisenfeld-macbook-pro:~
[508]$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyrigh
., 3., 1.])
In [6]: U[idx] += dU
In [7]: U
Out[7]: array([ 1.1, 2.1, 3.1, 4. ])
Ideally U would end up as array([ 1.2, 2.1, 3.1, 4. ])
Neil
number of axes than to return a confusing result.
>
> Hm, now that I actually try my example
>
> min((5000,), 4)
>
> it fails with an axis out of bounds error. I presume there's a reason
> why a 0-D array gets special treatment?
>
In [16]: import numpy as np
I
Hi,
I've written some release notes (below) describing the changes to
arraysetops.py. If someone with commit access could check that these sound ok
and add them to the release notes file, that would be great.
Cheers,
Neil
New features
Improved set opera
ttached version of fortranfile.py should do
the trick. Let me know if it does or doesn't help.
[As an aside, fortranfile.py is code that I've written that isn't part
of Numpy and perhaps the right place for any discussions of it is off-list.]
-Neil
# Copyright 2008, 2009 Neil
xis? (Fixing this
> would also make misuse of np.min and np.max more difficult.)
I think it would be better to fix this issue. np.min(3,2) should also give
"ValueError: axis(=2) out of bounds". Fixing this also removes any possibility
of generating hard-to-find errors by overwriting the
turn_inverse=True but for some codepath it uses
> set.
>
unique always sorts, even if it uses set. So I'm pretty sure
all(unique(A) == unique(B)) is guaranteed.
Neil
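A quick illustration of the sorting guarantee:

```python
import numpy as np

# unique() returns its result sorted regardless of input order, so two
# arrays containing the same set of values compare equal after unique().
A = np.array([3, 1, 2, 3, 1])
B = np.array([2, 2, 1, 3])
same = bool(np.all(np.unique(A) == np.unique(B)))
```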
are of stripping off and
error-checking the record length information that fortran unformatted
I/O often uses. I don't have much opportunity to work on Fortran
unformatted I/O these days, but I would gladly accept any contributions.
-Neil
elements,
not 7. (Whence the curse of dimensionality...)
-Neil
in the past.
The one from http://r.research.att.com/tools/ is much better and is
the recommended one for SciPy.
-Neil
1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3]]), array([[ 6, 7, 8, 9, 10],
[ 6, 7, 8, 9, 10],
[ 6, 7, 8, 9, 10]]), array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]]))
-Neil
elp other Numpy users with this
issue, you can edit the documentation in the online documentation editor
at http://docs.scipy.org/numpy/docs/numpy-docs/user/index.rst
-Neil
lance that we should be aware of when
introducing changes. It makes sense that we will all see this balance
differently, but I think that we need to acknowledge that this is the
essential tension in removing cruft incompatibly.
-Neil
for y in range(2):
for z in range(3):
Cprime[x,y,z] = A[x,y] + B[x,z]
:
In [13]: (C == Cprime).all()
Out[13]: True
-Neil
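The loop above collapses to a single broadcast once the right axes are inserted (shapes here chosen to match the loop bounds):

```python
import numpy as np

A = np.arange(4.0).reshape(2, 2)   # indexed A[x, y]
B = np.arange(6.0).reshape(2, 3)   # indexed B[x, z]

# Explicit loops, as in the thread:
Cprime = np.empty((2, 2, 3))
for x in range(2):
    for y in range(2):
        for z in range(3):
            Cprime[x, y, z] = A[x, y] + B[x, z]

# Broadcasting: shapes (2, 2, 1) + (2, 1, 3) -> (2, 2, 3).
C = A[:, :, None] + B[:, None, :]
```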
found out, b[:0] gives you an empty
slice. A negative index -n is just syntactic sugar for N-n, where N
is the length of the list, and the explicit form works for n=0 as well:
>>> b = [1,2,3,4,5]
>>> b[:0]
[]
>>> b[:len(b)-0]
[1, 2, 3, 4, 5]
-Neil
t should appear
> inside the []. The numbers are no problem, but I'm having trouble
> with the ellipsis and colon.
The ellipsis is a built-in python constant called Ellipsis. The colon
is a slice object, again a python built-in, called with None as an
argument. So, z[...,2,:] =
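The translation being described can be checked directly:

```python
import numpy as np

z = np.arange(24).reshape(2, 3, 4)

# z[..., 2, :] is sugar for indexing with an explicit tuple of the
# built-ins Ellipsis and slice(None).
a = z[..., 2, :]
b = z[(Ellipsis, 2, slice(None))]
```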
ner. While there are some
basic scientific features such as FFTs in NumPy, these appear in more
detail in SciPy. If you can give more specifics on what features you
would be interested in, we can offer more help about which package
contains those features.
-N
). As very large C extensions, the work of
porting Numpy and Scipy to Python 3.x hasn't been undertaken, although
it will be in time. If you have a particular situation in which you
need to upgrade, please let us know more about it, so that the NumPy
developers can target their porting effor
converted to (a < x) and (x < b), which is why
they don't work either. There is a proposal to enable overloadable 'and' and
'or' methods (http://www.python.org/dev/peps/pep-0335/), but I don't think it's
ever got enough support to be accepted.
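The failure mode is easy to reproduce, and the element-wise & form is the usual workaround:

```python
import numpy as np

arr = np.array([1, 5, 7, 9])

# `4 < arr < 8` expands to `(4 < arr) and (arr < 8)`; `and` calls bool()
# on an array, which raises because the truth value is ambiguous.
try:
    4 < arr < 8
    chained_ok = True
except ValueError:
    chained_ok = False

mask = (4 < arr) & (arr < 8)  # element-wise replacement
```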
Also, if you do
& formats.
>
You could use
data = np.genfromtxt(filename, names=listofname, dtype=None)
Then you just need to specify the column names, and not the dtypes (they are
inferred from the data). There are probably backwards compatibility issues, but
it would be great if dtype=N
ively, we could deprecate unique1d and transfer its
functionality to function_base.unique, but since many of the setops
functions use unique, arraysetops seems a more natural place for it.
Does anyone have any thoughts about this?
Neil
ready settled on HDF5, PyTables would be a natural choice, since it
can process on-disk datasets as if they were NumPy arrays (which might be
nice if you don't have all 50GB of memory).
-Neil
dy familiar with Python's idiosyncrasies, but
it's important for people first coming to the language.
Print becoming a function would have been a pain for interactive work, but
happily ipython auto-parentheses takes care of that.
You could argue that moving to python 3 isn't attractive because there isn't
any scientific library support, but then that's because numpy hasn't been
ported to python 3 yet ;)
Neil
would like to put into numpy for 1.4.0 ?
>
I'd like to get the patch in ticket 1113
(http://projects.scipy.org/numpy/ticket/1133), or some version of it, into 1.4.
It would also be great to get all the docstrings David Goldsmith and others are
Robert Cimrman ntc.zcu.cz> writes:
> Hi Neil,
> > This sounds good. If you don't have time to do it, I don't mind having
> > a go at writing
> > a patch to implement these changes (deprecate the existing unique1d, rename
> > unique1d to unique and add t
in range(128)
>
> It disappears after increasing the array size, or the integer size.
> In [39]: np.__version__
> Out[39]: '1.4.0.dev7047'
>
> r.
Weird! From the error message, it looks like a problem with ipython's timeit
function rather than unique. I can't reproduce it on my machine
(numpy 1.4.0.dev, r7059; IPython 0.10.bzr.r1163 ).
Neil
np.all
np.alltrue
np.any
np.sometrue
np.deg2rad
np.radians
np.rad2deg
np.degrees
And maybe more I've missed.
Can we deprecate alltrue and sometrue, and either deg2rad/rad2deg, or
radians/degrees? They would be deprecated in 1.4 and presumably removed in 1.5.
Neil
On 2009-06-16 16:05 , Robert wrote:
> Neil Martinsen-Burrell wrote:
>> On 06/16/2009 02:18 PM, Robert wrote:
>>>>>> n = 10
>>>>>> xx = np.ones(n)
>>>>>> yy = np.arange(n)
>>>>>> aa = np.c
, 1.],
[ 2., 1.],
[ 1., 2.],
[ 2., 2.],
[ 1., 3.],
[ 2., 3.],
[ 1., 4.],
[ 2., 4.],
[ 1., 5.],
[ 2., 5.],
[ 1., 6.],
[ 2., 6.],
[ 1., 7.],
[ 2., 7.],
[ 1., 8.],
[ 2., 8.],
[ 1., 9.],
[ 2., 9.]])
Using this syntax, interleave could be a one-liner.
-Neil
or: NULL result without error in PyObject_Call
>
> vCont1.resize((5,10),True,False)
> # TypeError: an integer is required
>
> Can anyone tell me how this "resize" function works ?
> I already checked the help file :
> http://docs.scipy.org/doc/numpy/reference/generated/nump
still clear.
What about merging unique and unique1d? They're essentially identical for an
array input, but unique uses the builtin set() for non-array inputs and so is
around 2x faster in this case - see below. Is it worth accepting a speed
regression for unique to get rid of the function dupli
> r.
Great - thanks! People often post to the list asking for this functionality, so
it's nice to get it into numpy (whatever it ends up being called).
Neil
Thanks for the summary! I'm +1 on points 1, 2 and 3.
+0 for points 4 and 5 (assume_unique keyword and renaming arraysetops).
Neil
PS. I think you mean deprecate, not depreciate :)