Hi all,
I'd like to set the data type for what numpy.where creates. For example:
import numpy as N
N.where(a >= 5, 5, 0)
creates an integer array, which makes sense.
N.where(a >= 5, 5.0, 0)
creates a float64 array, which also makes sense, but I'd like a float32
array, so I tried:
N.where(a
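The message is cut off, but two ways to coax a float32 result out of np.where (a sketch; this toy `a` stands in for the original array):

```python
import numpy as np

a = np.arange(10)          # stand-in integer array

# cast the float64 result down:
b = np.where(a >= 5, 5.0, 0).astype(np.float32)

# or hand np.where float32 scalars, so there is nothing to upcast:
c = np.where(a >= 5, np.float32(5), np.float32(0))
```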
Andrew Straw wrote:
> Here's one that seems like
> it might work, but I haven't tried it yet:
> http://software.jessies.org/terminator
Now if only there was a decent terminal emulator for Windows that didn't
use cygwin...
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Respons
There's probably a better forum for this conversation, but...
Barry Wark wrote:
> Perhaps we should consider two use cases: interactive use ala Matlab
> and larger code bases.
A couple key points -- yes, interactive use is different than larger
code bases, but I think it's a "Bad Idea" to promit
Ray Schumacher wrote:
> I agree with others that ctypes might be your best path.
Pyrex is a good bet too:
http://www.scipy.org/Cookbook/Pyrex_and_NumPy
The advantage with pyrex is that you don't have to write any C at all.
You will have to use a compiler that is compatible with your Python buil
On Sun, Mar 4, 2012 at 2:18 PM, Luis Pedro Coelho wrote:
> At least last time I read up on it, cython was not able to do multi-type code,
> i.e., have code that works on arrays of multiple types. Does it support it
> now?
The Bottleneck project used some sort of template system to generate
multip
On Thu, Mar 1, 2012 at 10:58 PM, Jay Bourque wrote:
> 1. Loading text files using loadtxt/genfromtxt need a significant
> performance boost (I think at least an order of magnitude increase in
> performance is very doable based on what I've seen with Erin's recfile code)
> 2. Improved memory usag
On Wed, Mar 14, 2012 at 9:25 AM, Pauli Virtanen wrote:
> Or, maybe the whole Fortran stuff can be run in a separate process, so
> that crashing doesn't matter.
That's what I was going to suggest -- even if you can get it not to
crash, it may well be in a bad state -- memory leaks, and who knows
wh
On Tue, Mar 20, 2012 at 5:13 AM, Matthieu Rigal wrote:
> In fact, I was hoping to have a less memory and more speed solution.
which do often go together, at least for big problems -- pushing
memory around often takes more time than the computation itself.
> At the end, I am rather interested by
Warren et al:
On Wed, Mar 7, 2012 at 7:49 AM, Warren Weckesser
wrote:
> If you are setup with Cython to build extension modules,
I am
> and you don't mind
> testing an unreleased and experimental reader,
and I don't.
> you can try the text reader
> that I'm working on: https://github.com/Warr
On Thu, Mar 29, 2012 at 7:55 AM, Tim Cera
>> I think there is also a question of using a prefix pad_xxx for the
>> function names as opposed to pad.xxx.
"Namespaces are one honking great idea -- let's do more of those!"
> If I had it as pad.mean, pad.median, ...etc. then someone could
>
> f
On Fri, Mar 30, 2012 at 10:57 AM, mark florisson
wrote:
> Although the segfault was caused by a bug in NumPy, you should
> probably also consider using Cython, which can make a lot of this pain
> and boring stuff go away.
Is there a good demo/sample somewhere of an ndarray subclass in Cython?
So
On Sun, Apr 1, 2012 at 8:19 AM, Tom Aldcroft
wrote:
> You might try something like below (untested code, just meant as
> pointing in the right direction):
>
> self.resize(len(self) + len(v1), refcheck=False)
> self[len(self):] = v1
>
> Setting refcheck=False is potentially dangerous since it means
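A runnable version of that idea on a plain ndarray (my own sketch; note that the old length has to be captured *before* the resize, or the slice assignment targets an empty tail):

```python
import numpy as np

a = np.arange(3)
v1 = np.array([10, 11])

n = len(a)                          # capture the OLD length first
a.resize(n + len(v1), refcheck=False)  # grow in place; refcheck=False skips
                                       # the check for other live references
a[n:] = v1                          # fill the newly added tail
```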
On Mon, Apr 2, 2012 at 2:25 AM, Nathaniel Smith wrote:
> To see if this is an effect of numpy using C-order by default instead of
> Fortran-order, try measuring eig(x.T) instead of eig(x)?
Just to be clear, .T re-arranges the strides (making it Fortran
order), but you'll have to make sure your ari
On Tue, Apr 3, 2012 at 6:06 AM, Holger Herrlich
> Hi, I plan to migrate core classes of an application from Python to C++
> using SWIG,
if you're using SWIG, you may want the numpy.i SWIG interface files,
they can be handy.
but I probably wouldn't use SWIG, unless:
- you are already a SWIG ma
On Tue, Apr 3, 2012 at 4:45 PM, srean wrote:
> From the sourceforge forum it
> seems the new Blitz++ is quite competitive with intel fortran in SIMD
> vectorization as well, which does sound attractive.
you could write Blitz++ code, and call it from Cython. That may be a
bit klunky at this point,
On Wed, Apr 4, 2012 at 12:55 PM, srean wrote:
>> One big issue that I had with weave is that it compile on the fly. As a
>> result, it makes for very non-distributable software (requires a compiler
>> and the development headers installed), and leads to problems in the long
> I do not know much
On Wed, Apr 4, 2012 at 4:17 PM, Abhishek Pratap
> close to a 900K points using DBSCAN algo. My input is a list of ~900k
> tuples each having two points (x,y) coordinates. I am converting them
> to numpy array and passing them to pdist method of
> scipy.spatial.distance for calculating distance betw
2012/4/8 Hänel Nikolaus Valentin :
http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html
that looks like it hasn't been updated since 2006 -- I'd say that
makes it a non-starter
The new numpy-boost project looks promising, though.
> which was also menti
2012/4/9 Hänel Nikolaus Valentin :
http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html
>>
>> that looks like it hasn't been updated since 2006 -- I'd say that
>> makes it a non-starter
>
> Yeah, thats what I thought... Until I found it in several produc
On Mon, Apr 16, 2012 at 7:46 PM, Travis Oliphant wrote:
> As Chuck points out, 3 more pointers is not necessarily that big of a deal if
> you are talking about a large array (though for small arrays it could matter).
yup -- for the most part, numpy arrays are best for working with large
data set
On Fri, Apr 20, 2012 at 11:39 AM, Dag Sverre Seljebotn
wrote:
> Oh, right. I was thinking "small" as in "fits in L2 cache", not small as
> in a few dozen entries.
or even two or three entries.
I often use a (2,) or (3,) numpy array to represent an (x,y) point
(usually pulled out from a Nx2 array
On Mon, Apr 23, 2012 at 12:57 PM, Nathaniel Smith wrote:
> Right, this part is specifically about ABI compatibility, not API
> compatibility -- segfaults would only occur for extension libraries
> that were compiled against one version of numpy and then used with a
> different version.
Which make
On Mon, Apr 23, 2012 at 3:08 PM, Travis Oliphant wrote:
> Right now we are trying to balance difficult things: stable releases with
> experimental development.
Perhaps a more formal "development release" system could help here.
IIUC, numpy pretty much has two things: the latest release (and pas
On Mon, Apr 23, 2012 at 11:18 PM, Ralf Gommers
>> Perhaps a more formal "development release" system could help here.
>> IIUC, numpy pretty much has two things:
> This is a good idea - not for development releases but for master. Building
> nightly/weekly binaries would help more people try out n
On Thu, May 10, 2012 at 2:38 AM, Dag Sverre Seljebotn
wrote:
> What would serve me? I use NumPy as a glorified "double*".
> all I want is my glorified
> "double*". I'm probably not a representative user.)
Actually, I think you are representative of a LOT of users -- it
turns out, whether Jim Hu
Anthony,
Thanks for looking into this. A few other notes about fromstring() (
and fromfile() ).
Frankly they haven't gotten much love -- they are, as you have seen,
less than optimized, and kind of buggy (actually, not really buggy,
but not robust in the face of malformed input -- and they give r
On Tue, May 22, 2012 at 6:33 AM, Chao YUE wrote:
> Just in case some one didn't know this. Assign a float number to an integer
> array element will always return integer.
right -- numpy arrays are typed -- that's one of the points of them --
you wouldn't want the entire array up-cast with a sing
On Tue, May 22, 2012 at 1:07 PM, Dan Goodman wrote:
> I think it would be useful to have an example of a completely
> 'correctly' subclassed ndarray that handles all of these issues that
> people could use as a template when they want to subclass ndarray.
I think this is by definition impossible
7;t work, try:
http://www.resumeware.net/gdns_rw/gdns_web/job_search.cfm
and search for job ID: 199765
You can also send questions about employment issues to:
Susan Bowley: susan.bow...@gdit.com
And questions about the nature of the work to:
Chris Barker: chris.bar...@noaa.gov
--
>> On Fri, Jun 1, 2012 at 10:46 AM, Chris Withers
>> > Any reason why this:
>> >
>> > >>> import numpy
>> > >>> numpy.zeros(10)[-123]
>> > Traceback (most recent call last):
>> > File "", line 1, in
>> > IndexError: index out of bounds
>> >
>> > ...could say this:
>> >
>> > >>> numpy.zeros(
On Mon, Jun 4, 2012 at 9:21 AM, bob tnur wrote:
> Hello every body. I am new to python.
> How to remove any row or column of a numpy matrix whose sum is 3.
> To obtain and save new matrix P with (sum(anyrow)!=3 and sum(anycolumn)!=3
> elements.
well, one question is -- do you want to remove the p
On Mon, Jun 4, 2012 at 9:38 AM, Robert Kern wrote:
> # Now use the numpy.delete() function to get the matrix
> # with those rows and columns removed from the original matrix.
> P = np.delete(M, bad_rows, axis=0)
> P = np.delete(P, bad_cols, axis=1)
ah yes, forgot about np.delete -- that is a
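Putting the two np.delete calls together with the row/column sums (the matrix `M` is my own toy example):

```python
import numpy as np

M = np.array([[1, 1, 1],
              [0, 2, 0],
              [2, 0, 4]])

# find rows / columns whose sum is exactly 3
bad_rows = np.nonzero(M.sum(axis=1) == 3)[0]   # here: row 0
bad_cols = np.nonzero(M.sum(axis=0) == 3)[0]   # here: columns 0 and 1

# remove them from the original matrix
P = np.delete(M, bad_rows, axis=0)
P = np.delete(P, bad_cols, axis=1)
```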
Have you discovered the numpy.i interface files?
I haven't done SWIG in a while, but they should take care of at least
some of this for you.
They used to be distributed with numpy (in docs?), but some googling
should find them in any case.
-Chris
On Mon, Jun 4, 2012 at 2:00 PM, Gideon Simpson
On Mon, Jun 4, 2012 at 11:10 AM, Patrick Redmond wrote:
> Here's how I sorted primarily by field 'a' descending and secondarily by
> field 'b' ascending:
could you multiply the numeric field by -1, sort, then put it back --
something like:
data['a'] *= -1
data_sorted = np.sort(data, order=['a','b'])
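Fleshed out as a runnable sketch (the toy `data` array and its dtype are my invention; the field names follow the `order` argument above):

```python
import numpy as np

data = np.array([(1, 5.0), (2, 3.0), (1, 4.0)],
                dtype=[('a', 'i4'), ('b', 'f8')])

# negate 'a', sort ascending on both keys, then negate back:
# net effect is 'a' descending, 'b' ascending within ties.
# (np.sort returns a copy, so negate data['a'] back too if you
# still need the original array.)
data['a'] *= -1
data_sorted = np.sort(data, order=['a', 'b'])
data_sorted['a'] *= -1
```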
On Thu, Jun 28, 2012 at 9:06 AM, Pierre Haessig
> On the other hand, just like srean mentionned, I think I also misused
> the "c[:] = a+b" syntax.
> I feel it's a bit confusing since this way of writing the assignment
> really feels likes it happens inplace. Good to know it's not the case.
well,
On Wed, Jun 27, 2012 at 2:38 PM, wrote:
> How how can I perform matrix multiplication of two vectors?
> (in matlab I do it like a*a')
np.outer is a bit cleaner, I suppose, but you can do exactly the same
thing you do in MATLAB if a is a column (a single-column 2-d array):
In [40]: a = np.arange(4)
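For completeness, both spellings side by side (my own sketch):

```python
import numpy as np

a = np.arange(4)

o1 = np.outer(a, a)       # the numpy spelling

col = a.reshape(-1, 1)    # a as a single-column 2-d array
o2 = col @ col.T          # the MATLAB a*a' spelling
```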
On Mon, Jul 2, 2012 at 12:17 PM, Andrew Dalke wrote:
> In this email I propose a few changes which I think are minor
> and which don't really affect the external NumPy API but which
> I think could improve the "import numpy" performance by at
> least 40%.
+1 -- I think I remember that thread -- a
On Thu, Jul 12, 2012 at 2:32 PM, Chao YUE wrote:
> numpy ndarray indexing in the function. Like when I call:
>
> func(a,'1:3,:,2:4'), it knows I want to retrieve a[1:3,:,2:4], and
> func(a,'1:3,:,4') for a[1:3,:,4] ect.
why do the string packing/unpacking? why not use an interface much
like the
On Sun, Jul 15, 2012 at 2:51 PM, Pierre GM wrote:
> A basic warning, though: you
> don't want to overwrite Mac OS X's own numpy, ...
Which is one of the reasons many of us recommend installing the
python.org python and leaving Apple's alone. And why the standard
numpy/scipy binaries are built for
On Mon, Jul 16, 2012 at 2:50 PM, Ralf Gommers
wrote:
>> Working lazy imports would be useful to have. Ralf is opposed to the idea
> Note that my being opposed is because the benefits are smaller than the
> cost. If there was a better reason than shaving a couple of extra ms off the
> import time
On Mon, Jul 16, 2012 at 8:37 PM, Travis Oliphant wrote:
>--- systems that try to "freeze" Python programs were
> particularly annoyed at SciPy's lazy import mechanism.
That's ironic to me -- while the solution to a lot of "freezing"
problems is to include everything including the kitchen sink --
On Fri, Jul 20, 2012 at 1:17 PM, OC wrote:
>> numpy.complex is just a reference to the built in complex, so only works
>> on scalars:
> What is the use of storing the "complex()" built-in function in the
> numpy namespace, when it is already accessible from everywhere?
for consistency with the r
On Mon, Aug 6, 2012 at 8:51 PM, Tom Krauss wrote:
> I got a new job, and a new mac book pro on which I just installed Mac OS X
> 10.8.
congrats -- on the job, and on an employer that gets you a mac!
> I need to run SWIG to generate a shared object from C++ source that works
> with numpy.i. I'm
On Tue, Aug 7, 2012 at 5:00 AM, Pierre GM wrote:
> It's generally considered a "very bad idea"(™) to install NumPy on a recent
> OSX system without specifying a destination. By default, the process will
> try to install on /Library/Frameworks/Python, overwriting the pre-installed
> version of Num
It depends a bit on how you installed it, but for the most part you
should simply be able to delete the numpy directory in site-packages.
-Chris
On Thu, Aug 9, 2012 at 1:04 AM, wrote:
> Thanks to everybody.
>
On Sun, Aug 19, 2012 at 11:39 AM, Ralf Gommers wrote:
> The problem is that, unlike 32-bit builds, they can't be made with open
> source compilers on Windows. So unless we're okay with that,
Why does it have to be built with open-source compilers? We're
building against the python.org python, yes? Which
ravis
>
> On Aug 20, 2012, at 5:28 PM, Chris Barker wrote:
>
>> On Sun, Aug 19, 2012 at 11:39 AM, Ralf Gommers wrote:
>>> The problem is that, unlike 32-bit builds, they can't be made with open
>>> source compilers on Windows. So unless we're okay with that
Todd,
The short version is: you can't do that. -- Jython uses the JVM, numpy
is very, very tied into the CPython runtime.
This thread is a bit old, but I think it still holds:
http://stackoverflow.com/questions/3097466/using-numpy-and-cpython-with-jython
There is the junumeric project, but it doesn'
On Sun, Aug 26, 2012 at 8:53 PM, Todd Brunhoff wrote:
> Chris,
> winpdb is ok, although it is only a graphic debugger, not an ide, emphasis
> on the 'd'.
yup -- I mentioned that, as you seem to like NB -- and I know I try to
use the same editor for everything.
But if you want a nice full-on IDE
Hi folks,
I'm working on a package that will contain a bunch of cython
extensions, all of which need to link against a pile of C++ code. What
I think I need to do is build that C++ as a dynamic library, so I can
link everything against it.
It would be nice if I could leverage distutils to build t
On Thu, Sep 20, 2012 at 2:48 PM, Nathaniel Smith wrote:
> because a += b
> really should be the same as a = a + b.
I don't think that's the case -- the in-place operators should be (and
are) more than syntactic sugar -- they have a different meaning and
use (in fact, I think they shouldn't work at al
On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith wrote:
> You're right of course. What I meant is that
> a += b
> should produce the same result as
> a[...] = a + b
>
> If we change the casting rule for the first one but not the second, though,
> then these will produce different results if
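A small illustration of the distinction being argued over (my own example, not from the thread): `+=` mutates the existing array, keeping its dtype, while `a + b` builds a new one:

```python
import numpy as np

a = np.zeros(3, dtype=np.int64)
b = a              # a second name for the SAME array

a += 1             # in-place: the array is modified, dtype stays int64
assert a is b      # still the same object

c = a + 1.0        # a NEW float64 array; `a` itself is untouched
```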
On Sat, Sep 22, 2012 at 1:00 PM, Sebastian Haase wrote:
> Oh,
> is this actually documented - I knew that np.array would (by default)
> only create copies as need ... but I never knew it would - if all fits
> - even just return the original Python-object...
was that a typo? is it "asarray" that r
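What asarray does here, in a minimal sketch (my own example):

```python
import numpy as np

a = np.arange(3)

b = np.asarray(a)  # already an ndarray of the right dtype:
                   # the original object is returned, no copy made
c = np.array(a)    # np.array copies by default
```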
On Tue, Sep 25, 2012 at 2:31 AM, Andreas Hilboll wrote:
> I commonly have to deal with legacy ASCII files, which don't have a
> constant number of columns. The standard is 10 values per row, but
> sometimes, there are less columns. loadtxt doesn't support this, and in
> genfromtext, the rows which
On Tue, Sep 25, 2012 at 4:31 AM, Sturla Molden wrote:
> Also, instead of writing a linked list, consider collections.deque.
> A deque is by definition a double-ended queue. It is just waste of time
> to implement a deque (double-ended queue) and hope it will perform
> better than Python's standard
Paul,
Nice to see someone working on these issues, but:
I'm not sure the problem you are trying to solve -- accumulating in a
list is pretty efficient anyway -- not a whole lot overhead.
But if you do want to improve that, it may be better to change the
accumulating method, rather than doing the
On Fri, Sep 28, 2012 at 3:11 PM, Charles R Harris
wrote:
> If the behaviour is not specified and tested, there is no guarantee that it
> will continue.
This is an open-source project - there is no guarantee of ANYTHING.
But that being said, the specification and testing of numpy is quite
weak --
On Sat, Sep 29, 2012 at 2:16 AM, Gael Varoquaux
wrote:
> Next time I see you, I owe you a beer for making you cross :).
If I curse at you, will I get a beer too?
-Chris
On Wed, Oct 3, 2012 at 9:05 AM, Paul Anton Letnes
wrote:
>> I'm not sure the problem you are trying to solve -- accumulating in a
>> list is pretty efficient anyway -- not a whole lot overhead.
>
> Oh, there's significant overhead, since we're not talking of a list - we're
> talking of a list-of
On Mon, Oct 29, 2012 at 1:37 PM, David Warde-Farley
wrote:
> This is something of a hack.
but a cool one...
> Like Pauli said, it's probably worthwhile to consider HDF5.
But HDF5 is a big dependency -- it can be a pain to build. It's great
for what it does well, and great for interchanging dat
On Wed, Nov 7, 2012 at 11:41 AM, Neal Becker wrote:
> Would you expect numexpr without MKL to give a significant boost?
It can, depending on the use case:
-- It can remove a lot of unnecessary temporary creation.
-- IIUC, it works on blocks of data at a time, and thus can keep
things in cache m
On Thu, Nov 8, 2012 at 2:22 AM, Francesc Alted wrote:
>> -- It can remove a lot of unnecessary temporary creation.
> Well, the temporaries are still created, but the thing is that, by
> working with small blocks at a time, these temporaries fit in CPU cache,
> preventing copies into main memor
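A pure-numpy way to cut down on temporaries, for comparison (my own sketch; this is not how numexpr works internally -- numexpr does its cache-sized blocking for you):

```python
import numpy as np

a = np.ones(1000)
b = np.full(1000, 2.0)
c = np.full(1000, 3.0)

# "a*b + c" written naively allocates a full-size temporary for a*b;
# ufunc `out=` arguments let one buffer be reused instead
out = np.empty_like(a)
np.multiply(a, b, out=out)   # out = a*b
np.add(out, c, out=out)      # out = a*b + c, no extra temporary
```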
On 4/4/11 9:03 PM, josef.p...@gmail.com wrote:
> On Mon, Apr 4, 2011 at 11:42 PM, Charles R Harris
>>>File "/sw/lib/python2.4/site-packages/numpy/lib/_datasource.py",
>>> line 477, in open
>>> return _file_openers[ext](found, mode=mode)
>>> IOError: invalid mode: Ub
>>>
>>
>> Guess that w
> On 5/28/2011 3:40 PM, Robert wrote:
>> (myarray in mylist) turns into mylist.__contains__(myarray).
>> Only the list object is ever checked for this method. There is no
>> paired method myarray.__rcontains__(mylist) so there is nothing that
>> numpy can override to make this operation do anything
On 6/2/11 12:57 PM, Robert Kern wrote:
>>> Anyhow, years and months are simple enough.
>>
>> no, they are not -- they are fundamentally different than hours, days, etc.
>
> That doesn't make them *difficult*.
I won't comment on how difficult it is -- I'm not writing the code. My
core point is tha
On 6/7/11 4:53 PM, Pierre GM wrote:
> Anyhow, each time yo
> read 'frequency' in scikits.timeseries, think 'unit'.
or maybe "precision" -- when I think of unit, I think of something that
can be represented as a floating point value -- but here, with integers,
it's the precision that can be repr
On 6/27/11 9:53 AM, Charles R Harris wrote:
> Some discussion of disk storage might also help. I don't see how the
> rules can be enforced if two files are used, one for the mask and
> another for the data, but that may just be something we need to live with.
It seems it wouldn't be too big deal
On 7/3/11 9:03 PM, Joe Harrington wrote:
> Christopher Barker, Ph.D. wrote
>> quick note on this: I like the "FALSE == good" way, because:
>
> So, you like to have multiple different kinds of masked, but I need
> multiple good values for counts.
fair enough, maybe there isn't a consensus about wha
On 7/6/11 11:57 AM, Mark Wiebe wrote:
> On Wed, Jul 6, 2011 at 1:25 PM, Christopher Barker
> Is this really true? if you use a bitpattern for IGNORE, haven't you
> just lost the ability to get the original value back if you want to stop
> ignoring it? Maybe that's not inherent to what
Hi folks,
I'm trying to build numpy HEAD on Windows in preparation for the SciPy
sprints tomorrow. I've never built numpy on Windows, and I'm new to git,
so I could be doing any number of things wrong.
I think I have the latest code:
C:\Documents and Settings\Chris\My Documents\SciPy\numpy_git
On 7/14/2011 8:04 PM, Christoph Gohlke wrote:
A patch for the build issues is attached. Remove the build directory
before rebuilding.
Christoph,
I had other issues (I think in one case, a *.c file was not getting
re-built from the *.c.src file. But anyway, at the end the patch appears
to wor
On 8/10/2011 1:01 PM, Anne Archibald wrote:
There was also some work on a semi-mutable array type that allowed
appending along one axis, then 'freezing' to yield a normal numpy
array (unfortunately I'm not sure how to find it in the mailing list
archives).
That was me, and here is the thread --
aarrgg!
I cleaned up the doc string a bit, but didn't save before sending --
here it is again, Sorry about that.
-Chris
On 10/8/11 7:45 AM, Chao YUE wrote:
> I want to change some variable values in a series of NetCDF file. Did
> anybody else did this before using python?
I like the netcdf4 package -- very powerful and a pretty nice
numpy-compatible API:
http://code.google.com/p/netcdf4-python/
Check out the "exa
On 10/29/11 2:48 PM, Ralf Gommers wrote:
> That's true, but I am hoping that the difference between - say:
>
> a[0:2] = np.NA
>
> and
>
> a.mask[0:2] = False
>
> would be easy enough to imagine.
>
>
> It is in this case. I agree the explicit ``a.mask`` is clearer.
Interesting
On 10/29/11 2:59 PM, Charles R Harris wrote:
> I'm much opposed to ripping the current code out. It isn't like it is
> (known to be) buggy, nor has anyone made the case that it isn't a basis
> on which build other options. It also smacks of gratuitous violence
> committed by someone yet to make a
On 12/1/2011 9:15 AM, Derek Homeier wrote:
> >>> np.array((2, 12, 0.001+2j), dtype='|S8')
> array(['2', '12', '(0.001+2'], dtype='|S8')
>
> - notice the last value is only truncated because it had first been converted
> into
> a "standard" complex representation, so maybe the problem is already in
On 12/6/2011 8:32 AM, Roger Binns wrote:
>> I think this cannot be helped --- it does not make sense to explain
>> basic Numpy concepts in every docstring, especially `axis` and `shape`
>> are very common.
>
> They don't need to be explained on the page, but instead link to a page
> that does expla
On 12/6/2011 9:54 AM, K.-Michael Aye wrote:
> I have a function f(x,y).
>
> I would like to calculate it at x = arange(20,101,20) and y = arange(2,30,2)
>
> How do I store that in a multi-dimensional array and preserve the grid
> points where I did the calculation
In [5]: X, Y = np.meshgrid(range
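Completing that meshgrid idea (the function `f` here is a made-up stand-in for the real one):

```python
import numpy as np

x = np.arange(20, 101, 20)   # the x grid points: 20, 40, ..., 100
y = np.arange(2, 30, 2)      # the y grid points: 2, 4, ..., 28

X, Y = np.meshgrid(x, y)     # both shape (len(y), len(x)) == (14, 5)

def f(x, y):                 # stand-in for the real f(x, y)
    return x + 10 * y

Z = f(X, Y)                  # Z[i, j] == f(x[j], y[i])
```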
NOTE:
Let's keep this on the list.
On Tue, Dec 13, 2011 at 9:19 AM, denis wrote:
> Chris,
> unified, consistent save / load is a nice goal
>
> 1) header lines with date, pwd etc.: "where'd this come from ?"
>
># (5, 5) svm.py bz/py/ml/svm 2011-12-13 Dec 11:56 -- automatic
># 80.6 %
On Tue, Dec 13, 2011 at 11:29 AM, Bruce Southey wrote:
> **
> Reading data is hard and writing code that suits the diversity in the
> Numerical Python community is even harder!
>
>
yup
Both loadtxt and genfromtxt functions (other functions are perhaps less
> important) perhaps need an upgrade to
On Tue, Dec 13, 2011 at 1:21 PM, Ralf Gommers
wrote:
>
> genfromtxt sure looks close for an API
>>
>
> This I don't agree with. It has a huge amount of keywords that just
> confuse or intimidate a beginning user. There should be a dead simple
> interface, even the loadtxt API is on the heavy side.
On Wed, Dec 14, 2011 at 11:36 AM, Benjamin Root wrote:
>>> well, yes, though it does do a lot -- do you have a smpler one in mind?
>>>
>> Just looking at what I normally wouldn't need for simple data files and/or
>> what a beginning user won't understand at once, the `unpack` and `ndmin`
>> keywor
I suspect your getting a bit tangled up in the multiple binaries of
Python on the Mac.
On the python.org site there are two binaries:
32bit, PPC_Intel, OS-X 10.3.9 and above.
32 and 64 bit, Intel only, OS-X 10.6 and above.
You need to make sure that you get a matplotlib build for the python
bui
sorry to be a proselytizer, but this would be trivial with Cython:
http://cython.org/
-Chris
On Tue, Dec 27, 2011 at 1:52 AM, Åke Kullenberg
wrote:
> After diving deeper in the docs I found the PyTuple_New alternative to
> building tuples instead of Py_BuildValue. It seems to work fine.
>
> Bu
On Thu, Dec 29, 2011 at 1:36 PM, Ralf Gommers
wrote:
> First thought: very useful, but probably not GSOC topics by themselves.
Documentation is specifically excluded from GSoC (at least it was a
couple years ago when I last was involved)
Not sure about testing, but I'd guess it can't be a project
On Fri, Dec 30, 2011 at 9:43 PM, Jaidev Deshpande
wrote:
>> Documentation is specifically excluded from GSoC (at least it was a
>> couple years ago when I last was involved)
>
> Documentation wasn't excluded last year from GSoC, there were quite a
> few projects that required a lot of documentation
Here's a thought:
Too bad numpy doesn't have a 24 bit integer, but you could tack a 0
on, making your image 32 bit, then use histogram2d to count the
colors.
something like (untested):
# create the 32-bit image
im32 = np.zeros((w, h), dtype=np.uint32)
view = im32.view(dtype=np.uint8)
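A complete little sketch of the pad-to-32-bit trick, using np.unique for the counting instead of histogram2d (the tiny `rgb` image is made up; the packed integer values depend on byte order, so only the counts are meaningful):

```python
import numpy as np

# a small "24-bit" RGB image: shape (h, w, 3), dtype uint8
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# tack a zero byte onto each pixel so it becomes one uint32
h, w, _ = rgb.shape
padded = np.zeros((h, w, 4), dtype=np.uint8)
padded[..., :3] = rgb
as32 = padded.view(np.uint32).reshape(h, w)

# each distinct color is now a single integer; count them
colors, counts = np.unique(as32, return_counts=True)
```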
2012/1/22 Ondřej Čertík :
> If I have time, I'll try to provide an equivalent Fortran version too,
> for comparison.
>
> Ondrej
here is a Cython example:
http://wiki.cython.org/examples/mandelbrot
I haven't looked to see if it's the same algorithm, but it may be
instructive, none the less.
-Chr
On Wed, Jan 18, 2012 at 1:26 AM, wrote:
> Your ideas are very helpfull and the code is very fast.
I'm curious -- a number of ideas were floated here -- what did you end up using?
-Chris
HI folks,
Is there a way to get a view of a subset of a structured array? I know
that an arbitrary subset will not fit into the numpy "strides/offsets"
model, but some will, and it would be nice to have a view:
For example, here we have a structured array:
In [56]: a
Out[56]:
array([(1, 2.0, 3.0,
On Wed, Jan 25, 2012 at 11:33 AM, wrote:
> that's what I would try:
>
> b = a.view(dtype=[('i', '<i4'), ('fl', [('f1', '<f8'), ('f2', '<f8')])])
> b['fl']
> array([(2.0, 3.0), (8.0, 9.0), (123.41, 7.0), (10.0, 11.0),
>        (14.0, 15.0)],
>       dtype=[('f1', '<f8'), ('f2', '<f8')])
> b['fl'][2] = (200, 500)
> a
> array([(1, 2.0, 3.
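A self-contained version of that field-grouping view (the field names and dtype strings are my reconstruction, since the quoted code lost its '<i4'/'<f8' markup); assigning through the view writes into the original array:

```python
import numpy as np

a = np.array([(1, 2.0, 3.0), (7, 8.0, 9.0)],
             dtype=[('i', '<i4'), ('f1', '<f8'), ('f2', '<f8')])

# re-view the two float fields as one nested sub-structure;
# itemsize is unchanged (4 + 8 + 8 == 4 + 16), so the view is legal
b = a.view(dtype=[('i', '<i4'), ('fl', [('f1', '<f8'), ('f2', '<f8')])])

b['fl'][1] = (200.0, 500.0)   # writes through to a's buffer
```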
On Fri, Jan 27, 2012 at 1:29 PM, Robert Kern wrote:
> Well, if you really need to do this in more than one place, define a
> utility function and call it a day.
>
> def should_not_plot(x):
> if x is None:
> return True
> elif isinstance(x, np.ndarray):
> return x.size == 0
>
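Completed and runnable (the quoted snippet is cut off here; the final `else` branch is my guess at the obvious fallback):

```python
import numpy as np

def should_not_plot(x):
    if x is None:
        return True
    elif isinstance(x, np.ndarray):
        return x.size == 0
    else:
        return False   # assumption: anything else is plottable
```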
On Tue, Jan 31, 2012 at 6:33 AM, Neal Becker wrote:
> The reason it surprised me, is that python 'all' doesn't behave as numpy 'all'
> in this respect - and using ipython, I didn't even notice that 'all' was
> numpy.all rather than standard python all.
"namespaces are one honking great idea"
--
On Tue, Jan 31, 2012 at 6:14 AM, Malcolm Reynolds
wrote:
> Not exactly an answer to your question, but I can highly recommend
> using Boost.python, PyUblas and Ublas for your C++ vectors and
> matrices. It gives you a really good interface on the C++ side to
> numpy arrays and matrices, which can
On Sun, Feb 5, 2012 at 10:41 AM, Paolo wrote:
> I solved using 'rb' instead of 'r' option in the open file task.
>
> that would do it, if it's binary data, but you might as well do it
"right":
> matrix="".join(f.readlines())
>
> readlines is giving you a list of the data, as separated by newlin
On Wed, Feb 8, 2012 at 10:18 AM, Debashish Saha wrote:
> how to insert some specific delay in python programming using numpy command.
do you mean a time delay? If so -- numpy doesn't (and shouldn't) have
such a thing.
however, the time module has time.sleep()
whether it's a good idea to use tha
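For example (a minimal sketch of time.sleep):

```python
import time

t0 = time.perf_counter()
time.sleep(0.05)   # block the current thread for ~50 ms
elapsed = time.perf_counter() - t0
```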
Andrea,
> Basically I have a set of x, y data (around 1,000 elements each) and I
> want to create 2 parallel "curves" (offset curves) to the original
> one; "parallel" means curves which are displaced from the base curve
> by a constant offset, either positive or negative, in the direction of
> th
On Mon, Feb 13, 2012 at 1:01 AM, Niki Spahiev wrote:
> You can get polygon buffer from http://angusj.com/delphi/clipper.php and
> make cython interface to it.
This should be built into GEOS as well, and the shapely package
provides a python wrapper already.
-Chris
> HTH
>
> Niki
>
> _
On Mon, Feb 13, 2012 at 6:19 PM, Mark Wiebe wrote:
> It might be nice to turn the matrix class into a short class hierarchy,
am I confused, or did a thread get mixed in? This seems to be a
numpy/scipy thing, not a Python3 thing. Or is there some support in
Python itself required for this to be pr