Michael Katz yahoo.com> writes:
> Yes, thanks, np.in1d is what I needed. I didn't know how to find that.
Did you check in the documentation? If so, where did you check? Would you have
found it if it was in the 'See also' section of where()?
(http://docs.scipy.org/doc/numpy/reference/generated/n
>
> Ideally, I would like in1d to always be the right answer to this problem. It
> should be easy to put in an if statement to switch to a kern_in()-type
> function in the case of large ar1 but small ar2. I will do some timing tests
> and make a patch.
>
I uploaded a timing test and a patch.
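For readers following the thread: the "kern_in()-type function" mentioned above is only named, not shown. A minimal sketch of that approach (the data here is hypothetical) - it loops over ar2 rather than ar1, so it wins when ar1 is large but ar2 is small:

```python
import numpy as np

def kern_in(ar1, ar2):
    # OR together one equality test per element of ar2 -- fast when ar2 is
    # small, regardless of the size of ar1.
    mask = np.zeros(ar1.shape, dtype=bool)
    for value in ar2:
        mask |= (ar1 == value)
    return mask
```

This produces the same boolean mask as in1d(ar1, ar2), without requiring a sort of ar1.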
Rob Speer MIT.EDU> writes:
> It's not just about the rows: a 2-D datarray can also index by
> columns, an operation that has no equivalent in a 1-D array of records
> like your example.
rec['305'] effectively indexes by column. This is one of the main attractions of
structured/record arrays.
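To illustrate the point (field names and values here are hypothetical, not from the original thread):

```python
import numpy as np

# A structured array: each field name behaves like a column label.
rec = np.array([(1, 2.0), (3, 4.0)], dtype=[('305', 'i8'), ('306', 'f8')])
col = rec['305']   # "column" access by field name
row = rec[0]       # row access by position
```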
Gael Varoquaux normalesup.org> writes:
> Let say that you have a dataset that is in a 3D array, where axis 0
> corresponds to days, axis 1 to hours of the day, and axis 2 to
> temperature, you might want to have the mean of the temperature in each
> day, which would be in current numpy:
>
>
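The quoted example is cut off before the code; in plain numpy the per-day mean would presumably be an axis reduction, something like (shape and data hypothetical):

```python
import numpy as np

# axis 0 = days, axis 1 = hours; averaging over axis 1 gives one mean per day.
temps = np.arange(6.0).reshape(2, 3)   # 2 days x 3 hourly readings
day_means = temps.mean(axis=1)
```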
Robert Kern gmail.com> writes:
>
> On Sun, Jul 11, 2010 at 11:36, Rob Speer mit.edu> wrote:
> >> But the utility of named indices is not so clear
> >> to me. As I understand it, these new arrays will still only be
> >> able to have a single type of data (one of float, str, int and so
> >> on).
Robert Kern gmail.com> writes:
>
> Please install Fernando's datarray package, play with it, read its
> documentation, then come back with objections or alternatives. I
> really don't think you understand what is being proposed.
>
Well, the discussion has been pretty confusing. For mostly my benefit
Warren Weckesser enthought.com> writes:
>
> Benjamin Root wrote:
> > Brad, I think you are doing it the right way, but I think what is
> > happening is that the reshape() call on the sliced array is forcing a
> > copy to be made first. The fact that the copy has to be made twice
> > just wor
> This inconsistency is fixed in Numpy 1.4 (which included a major
> overhaul of chararrays). in1d will perform the auto
> whitespace-stripping on chararrays, but not on regular ndarrays of strings.
Great, thanks.
> Pyfits continues to use chararray since not doing so would break
> existing c
>
> This is an intentional "feature", not a bug.
>
> Chris
>
Ah, ok, thanks. I missed the explanation in the doc string because I'm using
version 1.3 and forgot to check the web docs.
For the record, this was my bug: I read a fits binary table with pyfits. One of
the table fields was a chara
I've been working with pyfits, which uses numpy chararrays. I've discovered the
hard way that chararrays silently remove trailing whitespace:
>>> a = np.array(['a '])
>>> b = a.view(np.chararray)
>>> a[0]
'a '
>>> b[0]
'a'
Note the string values stored in memory are unchanged. This behaviour ca
Shailendra gmail.com> writes:
>
> Hi All,
> I want to make a function which should be like this
>
> cordinates1=(x1,y1) # x1 and y1 are x-cord and y-cord of a large
> number of points
> cordinates2=(x2,y2) # similar to cordinates1
> indices1,indices2= match_cordinates(cordinates1,cordinates2)
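No implementation is given in the fragment; a naive O(n*m) sketch of such a match_cordinates (nearest-neighbour pairing; the name and data are taken from the question, the body is an assumption - a KD-tree such as scipy.spatial.cKDTree scales far better for large point sets):

```python
import numpy as np

def match_cordinates(coords1, coords2):
    # For each point in coords1, find the index of the nearest point in coords2
    # by computing the full matrix of squared distances.
    x1, y1 = coords1
    x2, y2 = coords2
    d2 = (x1[:, None] - x2[None, :])**2 + (y1[:, None] - y2[None, :])**2
    indices1 = np.arange(len(x1))
    indices2 = d2.argmin(axis=1)
    return indices1, indices2
```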
Eric Emsellem eso.org> writes:
> Hi
>
> I would like to test whether strings in a numpy S array are in a given list
but
> I don't manage to do so. Any hint is welcome.
>
> ===
> # So here is an example of what I would like to do
> # I have a
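The example in the quoted message is cut off. One loop-based way to test membership of an 'S' array's elements in a list (hypothetical data; the thread's eventual answer, in1d, does this in a single call):

```python
import numpy as np

# Which elements of a 'S' (bytes) string array appear in a given list?
a = np.array([b'abc', b'def', b'ghi'], dtype='S3')
wanted = [b'abc', b'ghi']
mask = np.zeros(a.shape, dtype=bool)
for s in wanted:
    mask |= (a == s)
```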
Nils Wagner iam.uni-stuttgart.de> writes:
> Hi David,
>
> you are right. It's a proprietary library.
> I found a header file (*.h) including prototype
> declarations of externally callable procedures.
>
> How can I proceed ?
Apparently you can use ctypes to access fortran libraries. See the f
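The message is cut off, but the general ctypes pattern looks like this (the C math library stands in here for the proprietary library; for Fortran, symbol names are usually lowercased with a trailing underscore, and arguments are passed by reference with ctypes.byref):

```python
import ctypes
import ctypes.util

# Load a shared library, then declare the prototype before calling it,
# matching the declarations found in the header file.
libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double
result = libm.cos(0.0)
```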
Francesc Alted pytables.org> writes:
> In [10]: array = np.random.random((3, 1000))
>
> then the time drops significantly:
>
> In [11]: time (array[0]>x_min) & (array[0]<x_max) & (array[1]>y_min) &
> (array[1]<y_max)
> CPU times: user 0.15 s, sys: 0.01 s, total: 0.16 s
> Wall time: 0.16 s
> Out[12]: array([False, Fals
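The comparison being timed, written out with hypothetical bounds:

```python
import numpy as np

arr = np.random.random((3, 1000))
x_min, x_max, y_min, y_max = 0.2, 0.8, 0.2, 0.8
# Combine element-wise comparisons with &; the parentheses are required
# because & binds more tightly than < and >.
mask = ((arr[0] > x_min) & (arr[0] < x_max)
        & (arr[1] > y_min) & (arr[1] < y_max))
```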
Pierre GM gmail.com> writes:
>
> It has been suggested (ticket #1262) to change the default dtype=float to
dtype=None in np.genfromtxt.
> Any thoughts ?
>
I agree dtype=None should be default for the reasons given in the ticket.
How do we handle the backwards-incompatible change? A warning
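For context, what dtype=None changes (hypothetical data):

```python
import io
import numpy as np

text = io.StringIO("1 2.5 abc\n3 4.5 def")
# dtype=None asks genfromtxt to guess a dtype per column (here int, float,
# string), giving a structured array instead of forcing everything to float.
data = np.genfromtxt(text, dtype=None, encoding='utf-8')
```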
Hi,
I've written some release notes (below) describing the changes to
arraysetops.py. If someone with commit access could check that these sound ok
and add them to the release notes file, that would be great.
Cheers,
Neil
New features
============

Improved set operations
~~~~~~~~~~~~~~~~~~~~~~~
Charles R Harris gmail.com> writes:
> People import these functions -- yes, they shouldn't do that -- and the
> python builtin versions are overloaded, causing hard to locate errors.
While I would love less duplication in the numpy namespace, I don't think the
small gain here is worth the pain of
gmail.com> writes:
> > Good point. With the return_inverse solution, is unique() guaranteed
> > to give back the same array of unique values in the same (presumably
> > sorted) order? That is, for two arrays A and B which have elements
> > only drawn from a set S, is all(unique(A) == unique(B))
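The answer to the question above is yes: unique() returns its result in sorted order, so two arrays drawing on the same element set give identical outputs. A quick check (hypothetical data):

```python
import numpy as np

A = np.array([3, 1, 2, 3, 1])
B = np.array([2, 2, 1, 3])
# unique() sorts its output, so equal element sets give equal results.
same = np.array_equal(np.unique(A), np.unique(B))
```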
John [H2O] gmail.com> writes:
> What I am trying to do (obviously?) is find all the values of X that fall
> within a time range.
>
> Specifically, one point I do not understand is why the following two methods
> fail:
>
> --> 196 ind = np.where( (t1 < Y[:,0] < t2) ) #same result
> with/
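The quoted line fails because a chained comparison like t1 < Y[:,0] < t2 expands to (t1 < Y[:,0]) and (Y[:,0] < t2), and Python's `and` needs a single truth value, which is ambiguous for an array. Writing the range test as two comparisons joined with & works (hypothetical data):

```python
import numpy as np

Y = np.array([[0.5, 10.0], [1.5, 20.0], [2.5, 30.0]])
t1, t2 = 1.0, 3.0
# Two comparisons joined with &, not a chained comparison.
ind = np.where((t1 < Y[:, 0]) & (Y[:, 0] < t2))[0]
```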
Pierre GM gmail.com> writes:
> What about
> 'formats':[eval(b) for b in event_format]
>
> Should it fail, try something like:
> dtype([(x,eval(b)) for (x,b) in zip(event_fields, event_format)])
>
> At least you force dtype to have the same nb of names & formats.
>
You could use
data = np.ge
Hi,
There was some discussion (e.g.
http://article.gmane.org/gmane.comp.python.numeric.general/30629)
about changes to the arraysetops module to consolidate the separate
unique/non-unique functions and rename setmember1d to in1d. There's a
patch that makes these changes in ticket 1133
(http://proj
David Cournapeau gmail.com> writes:
> >>> David Cournapeau wrote:
> >>> > (Continuing the discussion initiated in the neighborhood iterator
> >>> > thread)
> >>> > - Chuck suggested to drop python < 2.6 support from now on. I am
> >>> > against it without a very strong and detailed rationale,
David Cournapeau ar.media.kyoto-u.ac.jp> writes:
>
> (Continuing the discussion initiated in the neighborhood iterator thread)
>
> Hi,
>
> I would like to gather people's opinion on what to target for numpy
> 1.4.0.
> Are there any other features people would like to put into numpy for 1.
Robert Cimrman ntc.zcu.cz> writes:
> Hi Neil,
> > This sounds good. If you don't have time to do it, I don't mind having a
> > go at writing a patch to implement these changes (deprecate the existing
> > unique1d, rename unique1d to unique and add the set approach from the old
> > unique, and the
> > What about merging unique and unique1d? They're essentially identical for
> > an array input, but unique uses the builtin set() for non-array inputs and
> > so is around 2x faster in this case - see below. Is it worth accepting a
> > speed regression for unique to get rid of the func
Shivaraj M S gmail.com> writes:
>
> Hello, I just came across 'all' and 'alltrue' functions in fromnumeric.py.
> They are one and the same. IMHO, alltrue = all would be sufficient.
> Regards,
> Shivaraj
There are other duplications too:
np.all
np.alltrue
np.any
np.sometrue
np.deg2rad
Robert Cimrman ntc.zcu.cz> writes:
>
> Hi,
>
> I am starting a new thread, so that it reaches the interested people.
> Let us discuss improvements to arraysetops (array set operations) at [1]
> (allowing non-unique arrays as function arguments, better naming
> conventions and documentation).
Robert Cimrman ntc.zcu.cz> writes:
> >> I'd really like to see the setmember1d_nu function in ticket 1036 get into
> >> numpy. There's a patch waiting for review that includes tests:
> >>
> >> http://projects.scipy.org/numpy/ticket/1036
> >>
> >> Is there anything I can do to help get it applied
Thanks for the summary! I'm +1 on points 1, 2 and 3.
+0 for points 4 and 5 (assume_unique keyword and renaming arraysetops).
Neil
PS. I think you mean deprecate, not depreciate :)
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://ma
Robert Cimrman ntc.zcu.cz> writes:
> Anne Archibald wrote:
>
> > 1. add a keyword argument to intersect1d "assume_unique"; if it is not
> > present, check for uniqueness and emit a warning if not unique
> > 2. change the warning to an exception
> > Optionally:
> > 3. change the meaning of the fun
Hi all,
I posted this message couple of days ago, but gmane grouped it with an old
thread and it hasn't shown up on the front page. So here it is again...
I'd really like to see the setmember1d_nu function in ticket 1036 get into
numpy. There's a patch waiting for review that includes tests:
h
Robert Cimrman ntc.zcu.cz> writes:
> Re-hi!
>
> Robert Cimrman wrote:
> > Hi all,
> >
> > I have added to the ticket [1] a script that compares the proposed
> > setmember1d_nu() implementations of Neil and Kim. Comments are welcome!
> >
> > [1] http://projects.scipy.org/numpy/ticket/1036
>
>
Andrea Gavana gmail.com> writes:
> this should be a very easy question but I am trying to make a
> script run as fast as possible, so please bear with me if the solution
> is easy and I just overlooked it.
That's weird, I was trying to solve exactly the same problem a couple of weeks
ago for
gmail.com> writes:
> setmember1d is very fast compared to the other solutions for large b.
>
> However, setmember1d requires that both arrays only have unique elements.
>
> So it doesn't work if, for example, your first array is a data vector
> with membership in different groups (therefore n
Robert Kern gmail.com> writes:
> Do you mind if we just add you to the THANKS.txt file, and consider
> you as a "NumPy Developer" per the LICENSE.txt as having released that
> code under the numpy license? If we're dotting our i's and crossing
> our t's legally, that's a bit more straightforward
Robert Cimrman ntc.zcu.cz> writes:
> Hi Neil!
>
> I would like to add your function to arraysetops.py - is it ok? Just the
> name would be changed to setmember1d_nu, to follow the naming in the
> module (like intersect1d_nu).
>
> Thank you,
> r.
>
That's fine! There's no licence attached,
>
> - Should we have a separate User manual and a Reference manual, or only
> a single manual?
>
Are there still plans to write a 10 page 'Getting started with NumPy'
document? I think this would be very useful. Ideally a 'getting started'
document, the docstrings, and a reference manual is all
Ok, thanks.
I meant the amount of vertical space between lines of text - for
example, the gaps between parameter values and their description, or
the large spacing between both lines of text and the text boxes in
the examples section. If other people agree it's a problem, I thought
the spacing
> A new copy of the reference guide is now available at
> http://mentat.za.net/numpy/refguide/
Is there a reason why there's so much vertical space between all of the text
sections? I find the docstrings much easier to read in the editor:
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.fromnum
The Win32 installer works on my Vista machine. There is one failed
test, but I think that's just because it tries to write somewhere it
doesn't have permission - I installed Python in /Program
Files/Python25/, and you need to be an administrator to write to
Program Files/.
Here's the error messag
Thanks Joe for the excellent post. It mirrors my experience with
Python and Numpy very eloquently, and I think it presents a good
argument against the excessive use of namespaces. I'm not so worried
about N. vs np. though - I use the same method Matthew Brett suggests.
If I'm going to use, say, sin
I'm just a numpy user, but for what it's worth, I would much prefer to
have a single numpy namespace with a small as possible number of
objects inside that namespace. To me, 'as small as possible' means
that it only includes the array and associated array manipulation
functions (searchsorted, where
Do we really need these functions in numpy? I mean it's just
multiplying/dividing the value by pi/180 (who knows why they're in the
math module..). Numpy doesn't have asin, acos, or atan either (they're
arcsin, arccos and arctan) so it isn't a superset of the math module.
I would like there to be
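For reference, the functions under discussion are thin wrappers, exactly equivalent to multiplying by pi/180:

```python
import numpy as np

angles = np.array([0.0, 90.0, 180.0])
# deg2rad(x) is just x * pi / 180 (rad2deg is the inverse).
r1 = np.deg2rad(angles)
r2 = angles * np.pi / 180
```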