Travis Oliphant wrote:
>
> I'm attaching my latest extended buffer-protocol PEP that is trying to
> get the array interface into Python. Basically, it is a translation of
> the numpy header files into something as simple as possible that can
> still be used to describe a complicated block of mem
Travis Oliphant wrote:
> Neal Becker wrote:
>> Travis Oliphant wrote:
>>
>>
>>> I'm attaching my latest extended buffer-protocol PEP that is trying to
>>> get the array interface into Python. Basically, it is a translation of
>>> the nu
I believe we are converging, and this is pretty much the same design as I
advocated. It is similar to boost::ublas.
Storage is one concept.
Interpretation of the storage is another concept.
Numpy is a combination of a storage and interpretation.
Storage could be dense or sparse. Allocated in
Travis Oliphant wrote:
> Neal Becker wrote:
>> I believe we are converging, and this is pretty much the same design as I
>> advocated. It is similar to boost::ublas.
>>
> I'm grateful to hear that. It is nice when ideas come from several
> different co
I have never used matlab, but a lot of my colleagues do. Can anyone give me
some good references that I could show them to explain the advantages of
python over matlab?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.o
I'm interested in this comparison (not in starting yet another flame fest).
I actually know nothing about matlab, but almost all my peers use it. One
of the things I recall reading on this subject is that matlab doesn't
support OO style programming. I happened to look on the matlab vendor's
webs
Sturla Molden wrote:
> On 4/26/2007 2:19 PM, Steve Lianoglou wrote:
>
>>> Beside proper programing paradigm Python easily scales to large-
>>> scale number crunching: You can run large-matrices calculations
>>> with about 1/2 to 1/4 of memory consumption comparing to Matlab.
>>
>> Is that really
Anyone know where to find usable rpms from scipy on centos4.4?
Travis E. Oliphant wrote:
>
>> nd to copy hundreds of MB around unnecessarily.
>>
>> I think it is a real shame that boost currently doesn't properly support
>> numpy out of the box, although numpy has long obsoleted both numarray and
>> Numeric (which is both buggy and completely unsupported). Al
I'm thinking (again) about using numpy for signal processing applications.
One issue is that there are more data types that are commonly used in
signal processing that are not available in numpy (or python).
Specifically, it is frequently required to convert floating point
algorithms into integer
Charles R Harris wrote:
> On 10/5/07, Neal Becker <[EMAIL PROTECTED]> wrote:
>>
>> I'm thinking (again) about using numpy for signal processing
>> applications. One issue is that there are more data types that are
>> commonly used in signal processing that ar
Charles R Harris wrote:
> On 10/5/07, Neal Becker <[EMAIL PROTECTED]> wrote:
>>
>> Charles R Harris wrote:
>>
>> > On 10/5/07, Neal Becker <[EMAIL PROTECTED]> wrote:
>> >>
>> >> I'm thinking (again) about using numpy for sign
Suppose I have a function F(), which is defined for 1-dim arguments. If the
user passes an n>1 dim array, I want to apply F to each 1-dim view.
For example, for a 2-d array, apply F to each row and return a 2-d result.
For a 3-d array, select each 2-d subarray and see above. Return 3-d result.
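This pattern is what np.apply_along_axis provides. A minimal sketch (the name F is from the question; cumsum is a hypothetical stand-in for the real function):

```python
import numpy as np

def F(v):
    # stand-in for the user's 1-d function F (hypothetical choice: cumsum)
    return np.cumsum(v)

a2 = np.arange(6).reshape(2, 3)
r2 = np.apply_along_axis(F, -1, a2)   # F applied to each row -> 2-d result

a3 = np.arange(24).reshape(2, 3, 4)
r3 = np.apply_along_axis(F, -1, a3)   # 3-d input -> 3-d result

print(r2)
```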
I'm interested in experimenting with adding complex data type. I
have "Guide to Numpy". I'm looking at section 15.3. It looks like the
first thing is a PyArray_Desc. There doesn't seem to be much info on what
needs to go in this. Does anyone have any examples I could look at?
Matthieu Brucher wrote:
> 2007/10/12, Alan G Isaac <[EMAIL PROTECTED]>:
>>
>> On Fri, 12 Oct 2007, Matthieu Brucher apparently wrote:
>> > I'm trying to understand (but perhaps everything is in the
>> > numpy book in which case I'd rather buy the book
>> > immediately) how to use the PyArray_FromA
Charles R Harris wrote:
> On 11/1/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
>>
>> Ah, ok. Thanks. That does look like a good example.
>> I've heard of it, but never looked too closely for some reason. I
>> guess I always thought of it as the library that pioneered expression
>> templates but t
Sturla Molden wrote:
> Den 19.02.2012 01:12, skrev Nathaniel Smith:
>>
>> I don't oppose it, but I admit I'm not really clear on what the
>> supposed advantages would be. Everyone seems to agree that
>>-- Only a carefully-chosen subset of C++ features should be used
>>-- But this subset wo
Nathaniel Smith wrote:
> On Sun, Feb 19, 2012 at 9:16 AM, David Cournapeau wrote:
>> On Sun, Feb 19, 2012 at 8:08 AM, Mark Wiebe wrote:
>>> Is there a specific
>>> target platform/compiler combination you're thinking of where we can do
>>> tests on this? I don't believe the compile times are as
Sturla Molden wrote:
>
> Den 18. feb. 2012 kl. 01:58 skrev Charles R Harris
> :
>
>>
>>
>> On Fri, Feb 17, 2012 at 4:44 PM, David Cournapeau wrote:
>> I don't think c++ has any significant advantage over c for high performance
>> libraries. I am not convinced by the number of people argument
Charles R Harris wrote:
> On Fri, Feb 17, 2012 at 12:09 PM, Benjamin Root wrote:
>
>>
>>
>> On Fri, Feb 17, 2012 at 1:00 PM, Christopher Jordan-Squire <
>> cjord...@uw.edu> wrote:
>>
>>> On Fri, Feb 17, 2012 at 10:21 AM, Mark Wiebe wrote:
>>> > On Fri, Feb 17, 2012 at 11:52 AM, Eric Firing
>>>
What is the correct way to find the installed location of arrayobject.h?
On fedora, I had been using:
(via scons):
import distutils.sysconfig
PYTHONINC = distutils.sysconfig.get_python_inc()
PYTHONLIB = distutils.sysconfig.get_python_lib(1)
NUMPYINC = PYTHONLIB + '/numpy/core/include'
But on ub
It's great advice to say
avoid using new
instead rely on scope and classes such as std::vector.
I just want to point out, that sometimes objects must outlive scope.
For those cases, std::shared_ptr can be helpful.
Is mkl only used for linear algebra? Will it speed up e.g., elementwise
transcendental functions?
Pauli Virtanen wrote:
> 23.02.2012 20:44, Francesc Alted kirjoitti:
>> On Feb 23, 2012, at 1:33 PM, Neal Becker wrote:
>>
>>> Is mkl only used for linear algebra? Will it speed up e.g., elementwise
>>> transcendental functions?
>>
>> Yes, MKL comes
Francesc Alted wrote:
> On Feb 23, 2012, at 2:19 PM, Neal Becker wrote:
>
>> Pauli Virtanen wrote:
>>
>>> 23.02.2012 20:44, Francesc Alted kirjoitti:
>>>> On Feb 23, 2012, at 1:33 PM, Neal Becker wrote:
>>>>
>>>>> Is mkl
Keith Goodman wrote:
> Is this a reasonable (and fast) way to create a bool array in cython?
>
> def makebool():
> cdef:
> int n = 2
> np.npy_intp *dims = [n]
> np.ndarray[np.uint8_t, ndim=1] a
> a = PyArray_EMPTY(1, dims, NPY_UINT8, 0)
>
Charles R Harris wrote:
> On Tue, Feb 28, 2012 at 12:05 PM, John Hunter wrote:
>
>> On Sat, Feb 18, 2012 at 5:09 PM, David Cournapeau wrote:
>>
>>>
>>> There are better languages than C++ that has most of the technical
>>>
>>> benefits stated in this discussion (rust and D being the most
>>> "ob
What is a simple, efficient way to determine if all elements in an array (in my
case, 1D) are equal? How about close?
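One possible sketch (comparing everything against the first element; the empty-array case is treated as True by convention):

```python
import numpy as np

def all_equal(a):
    # exact equality of all elements; empty arrays count as all-equal
    a = np.asarray(a)
    return a.size == 0 or bool((a == a.flat[0]).all())

def all_close(a, **kw):
    # approximate version; tolerances pass through to np.allclose
    a = np.asarray(a)
    return a.size == 0 or bool(np.allclose(a, a.flat[0], **kw))

print(all_equal([3, 3, 3]), all_equal([3, 3, 4]), all_close([1.0, 1.0 + 1e-12]))
```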
Keith Goodman wrote:
> On Mon, Mar 5, 2012 at 11:14 AM, Neal Becker wrote:
>> What is a simple, efficient way to determine if all elements in an array (in
>> my case, 1D) are equal? How about close?
>
> For the exactly equal case, how about:
>
> I[1] a = np.array([1
Keith Goodman wrote:
> On Mon, Mar 5, 2012 at 11:52 AM, Benjamin Root wrote:
>> Another issue to watch out for is if the array is empty. Technically
>> speaking, that should be True, but some of the solutions offered so far
>> would fail in this case.
>
> Good point.
>
> For fun, here's the sp
I'm wondering what is the use for the ignored data feature?
I can use:
A[valid_A_indexes] = whatever
to process only the 'non-ignored' portions of A. So at least some simple cases
of ignored data are already supported without introducing a new type.
OTOH:
w = A[valid_A_indexes]
will copy A'
Charles R Harris wrote:
> On Wed, Mar 7, 2012 at 1:05 PM, Neal Becker wrote:
>
>> I'm wondering what is the use for the ignored data feature?
>>
>> I can use:
>>
>> A[valid_A_indexes] = whatever
>>
>> to process only the 'non-ignor
I see unique does not take an axis arg.
Suggested way to apply unique to each column of a 2d array?
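Since columns may have different numbers of unique values, the natural result is a list with one array per column; a sketch:

```python
import numpy as np

a = np.array([[1, 4],
              [1, 5],
              [2, 5]])

# one unique-array per column; lengths may differ, so the result is a list
uniques = [np.unique(col) for col in a.T]
print(uniques)
```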
I have an array of object.
How can I apply attribute access to each element?
I want to do, for example,
np.all (u.some_attribute == 0) for all elements in u?
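One way to do this without explicit loops is np.frompyfunc, which lifts a per-element Python function over the whole array. A sketch (the element class Thing and its attribute w are hypothetical):

```python
import numpy as np

class Thing:
    # hypothetical element type with an attribute named w
    def __init__(self, w):
        self.w = w

u = np.empty((2, 2), dtype=object)
for ij in np.ndindex(u.shape):
    u[ij] = Thing(0)

get_w = np.frompyfunc(lambda x: x.w, 1, 1)   # lift attribute access over the array
print(np.all(get_w(u) == 0))
```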
Adam Hughes wrote:
> If you are storing objects, then can't you store them in a list and just do:
>
> for obj in objectlist:
> obj.attribute = value
>
> Or am I misunderstanding?
>
It's multi-dimensional, and I wanted to avoid writing explicit loops.
Ken Watford wrote:
> On Thu, Apr 5, 2012 at 11:57 AM, Olivier Delalleau wrote:
>> Le 5 avril 2012 11:45, Neal Becker a écrit :
>>
>> You can do:
>>
>> f = numpy.frompyfunc(lambda x: x.some_attribute == 0, 1, 1)
>>
>> Then
>> f(array_
Along the lines of my question about apply getitem to each element...
If I try to use nditer, I seem to run into trouble:
for d in np.nditer (y, ['refs_ok'], ['readwrite']):
    y[...].w = 2
---
AttributeEr
Nathaniel Smith wrote:
> On Sat, Apr 28, 2012 at 7:38 AM, Richard Hattersley
> wrote:
>> So, assuming numpy.ndarray became a strict subclass of some new masked
>> array, it looks plausible that adding just a few checks to numpy.ndarray to
>> exclude the masked superclass would prevent much downst
I am quite interested in a fixed point data type. I had produced a working
model some time ago.
Maybe I can use some of these new efforts to provide good examples as a guide.
Will copying slices always work correctly w/r to aliasing?
That is, will:
u[a:b] = u[c:d]
always work (assuming the ranges of a:b, c:d are equal, of course)
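A defensive sketch: when in doubt about overlap rules, copy the right-hand side explicitly so no aliasing question arises at all:

```python
import numpy as np

u = np.arange(10)
u[2:6] = u[0:4].copy()   # explicit copy of the RHS: correct regardless of overlap
print(u)                 # [0 1 0 1 2 3 6 7 8 9]
```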
I think it's unfortunate that functions like logical_or are limited to binary.
As a workaround, I've been using this:
def apply_binary (func, *args):
    if len (args) == 1:
        return args[0]
    elif len (args) == 2:
        return func (*args)
    else:
        return func (apply_binary (func, *args[:len(args)//2]),
                     apply_binary (func, *args[len(args)//2:]))
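For the logic functions specifically, the ufunc `reduce` method already gives an m-ary form; a sketch:

```python
import numpy as np

a = np.array([True, False, False])
b = np.array([False, False, False])
c = np.array([False, False, True])

r = np.logical_or.reduce([a, b, c])   # elementwise OR over all three arrays
print(r)                              # [ True False  True]
```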
Would lazy eval be able to eliminate temps in doing operations such as:
np.sum (u != 23)?
That is, now ops involving selecting elements of matrices are often performed by
first constructing temp matrices, and then operating on them.
In [3]: u = np.arange(10)
In [4]: u
Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [5]: u[-2:]
Out[5]: array([8, 9])
In [6]: u[-2:2]
Out[6]: array([], dtype=int64)
I would argue for consistency it would be desirable for this to return
[8, 9, 0, 1]
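Slices can't wrap, but fancy indexing with an explicit index array can, since negative indices count from the end; a sketch:

```python
import numpy as np

u = np.arange(10)
wrapped = u[np.arange(-2, 2)]   # indices -2, -1, 0, 1: the first two count from the end
print(wrapped)                  # [8 9 0 1]
```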
Robert Kern wrote:
> On Thu, Jun 7, 2012 at 7:55 PM, Neal Becker wrote:
>> In [3]: u = np.arange(10)
>>
>> In [4]: u
>> Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>
>> In [5]: u[-2:]
>> Out[5]: array([8, 9])
>>
>> In [6]: u[-2:2]
Maybe I'm being slow, but is there any convenient function to calculate,
for 2 vectors:
\sum_i \sum_j x_i y_j
(I had a matrix once, but it vanished without a trace)
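The double sum factors as (\sum_i x_i)(\sum_j y_j), so no matrix is needed; a sketch, with np.outer used only as a cross-check:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])

fast = x.sum() * y.sum()       # factored form: (sum x) * (sum y)
slow = np.outer(x, y).sum()    # explicit double sum, only as a cross-check
print(fast, slow)
```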
Robert Kern wrote:
> On Wed, Jun 20, 2012 at 3:58 PM, Neal Becker wrote:
>> Maybe I'm being slow, but is there any convenient function to calculate,
>> for 2 vectors:
>>
>> \sum_i \sum_j x_i y_j
>>
>> (I had a matrix once, but it vanished without a tra
I've been bitten several times by this.
logical_or (a, b, c)
is silently accepted when I really meant
logical_or (logical_or (a, b), c)
because the logic functions are binary, where I expected them to be m-ary.
Dunno if anything can be done about it.
Sure would like it if they were m-ary and
Perhaps of some interest here:
http://lwn.net/Articles/507756/rss
This looks interesting:
http://code.google.com/p/blaze-lib/
I think this should be simple, but I'm drawing a blank
I have 2 2d matrices
Matrix A has indexes (i, symbol)
Matrix B has indexes (state, symbol)
I combined them into a 3d matrix:
C = A[:,newaxis,:] + B[newaxis,:,:]
where C has indexes (i, state, symbol)
That works fine.
Now suppose I want to
In [19]: u = np.arange (10)
In [20]: v = np.arange (10)
In [21]: u[v] = u
In [22]: u[v] = np.arange(11)
silence...
Sounds like I'm not the only one surprised then:
http://projects.scipy.org/numpy/ticket/2220
Matthew Brett wrote:
> Hi,
>
> On Mon, Oct 1, 2012 at 9:04 AM, Pierre Haessig
> wrote:
>> Hi,
>>
>> Le 28/09/2012 21:02, Neal Becker a écrit :
>>> In
I find it annoying that in casual use, if I print an array, that form can't be
directly used as subsequent input (or can it?).
What do others do about this? When I say casual, what I mean is, I write some
long-running task and at the end, print some small array. Now I decide I'd
like
to cut/
I'm trying to convert some matlab code. I see this:
b(1)=[];
AFAICT, this removes the first element of the array, shifting the others.
What is the preferred numpy equivalent?
I'm not sure if
b[:] = b[1:]
is safe or not
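Two idiomatic translations, as a sketch. Note that `b[:] = b[1:]` would in fact fail with a shape mismatch, since the target has one more element than the source:

```python
import numpy as np

b = np.array([10, 20, 30, 40])
b1 = b[1:]             # view without the first element (no copy)
b2 = np.delete(b, 0)   # new array without the first element (makes a copy)
print(b1, b2)          # [20 30 40] [20 30 40]
```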
I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.
I got an idea that instead of building all of numpy/scipy and all of my custom
modules against these libraries, I could simply use:
LD_PRELOAD=/opt/amdlibm-3.0.2/lib/dynamic/libamdlibm.so:/opt/acml5.2.0/gfortran64/lib/l
David Cournapeau wrote:
> On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker wrote:
>> I'm trying to do a bit of benchmarking to see if amd libm/acml will help me.
>>
>> I got an idea that instead of building all of numpy/scipy and all of my
>> custom modules against
David Cournapeau wrote:
> On Wed, Nov 7, 2012 at 1:56 PM, Neal Becker wrote:
>> David Cournapeau wrote:
>>
>>> On Wed, Nov 7, 2012 at 12:35 PM, Neal Becker wrote:
>>>> I'm trying to do a bit of benchmarking to see if amd libm/acml will help
>>
Would you expect numexpr without MKL to give a significant boost?
I'm interested in trying numexpr, but have a question (not sure where's the
best
forum to ask).
The examples I see use
ne.evaluate ("some string...")
When used within a loop, I would expect the compilation from the string form to
add significant overhead. I would have thought a pre-compiled
I don't understand why the plot of the spline continues on a negative slope at
the end, but the plot of the integral of it flattens.
-
import numpy as np
import matplotlib.pyplot as plt
ibo = np.array ((12, 14, 16, 18, 20, 22, 24, 26, 28, 29,
Pauli Virtanen wrote:
> 20.11.2012 21:11, Neal Becker kirjoitti:
>> import numpy as np
>> import matplotlib.pyplot as plt
>>
>> ibo = np.array ((12, 14, 16, 18, 20, 22, 24, 26, 28, 29, 29.8, 30.2))
>> gain_deriv = np.array ((0, 0, 0, 0, 0, 0, .2, .4,
I think it's a misfeature that a floating point is silently accepted as an
index. I would prefer a warning for:
bins = np.arange (...)
for b in bins:
    ...
    w[b] = blah
when I meant:
for ib,b in enumerate (bins):
    w[ib] = blah
I'd be happy with disallowing floating point index at all. I would think it
was
almost always a mistake.
Are release notes available?
np.unwrap was too slow, so I rolled by own (in c++).
I wanted to be able to handle the case of
unwrap (arg (x1) + arg (x2))
Here, phase can change by more than 2pi.
I came up with the following algorithm, any thoughts?
In the following, y is normally set to pi.
o points to output
i points to i
Nadav Horesh wrote:
> There is an unwrap function in numpy. Doesn't it work for you?
>
Like I had said, np.unwrap was too slow. Profiling showed it eating up an
absurd proportion of time. My c++ code was much better (although still
surprisingly slow).
(u) + 1j * np.sin (u)
plot (arg(v))
plot (arg(v) + arg (v))
plot (unwrap (arg (v)))
plot (unwrap (arg (v) + arg (v)))
---
Pierre Haessig wrote:
> Hi Neal,
>
> Le 11/01/2013 16:40, Neal Becker a écrit :
>> I wanted to be able to handle the case of
>>
Any suggestion how to take a 2d complex array and find the set of points that
are unique within some tolerance? (My preferred metric here would be Euclidean
distance)
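A brute-force sketch, assuming modest sizes (it is O(n^2); a k-d tree would scale better): keep each point only if it is farther than the tolerance from every point already kept.

```python
import numpy as np

def unique_within_tol(pts, tol):
    # keep a point only if its Euclidean (complex-modulus) distance to every
    # already-kept point exceeds tol; O(n^2), fine for modest sizes
    kept = []
    for p in np.ravel(pts):
        if all(abs(p - q) > tol for q in kept):
            kept.append(p)
    return np.array(kept)

pts = np.array([[1 + 1j, 1.0000001 + 1j],
                [2 + 2j, 1 + 1j]])
r = unique_within_tol(pts, 1e-3)
print(r)   # [1.+1.j 2.+2.j]
```

Note the result depends on iteration order: the first point of each near-duplicate cluster is the one kept.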
I have an array to be used for indexing. It is 2d, where the rows are all the
permutations of some numbers. So:
array([[-2, -2, -2],
[-2, -2, -1],
[-2, -2, 0],
[-2, -2, 1],
[-2, -2, 2],
...
[ 2, 1, 2],
[ 2, 2, -2],
[ 2, 2, -1],
[ 2
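The rows shown are the Cartesian product of the values -2..2 taken three at a time; such an index array can be built with a sketch like:

```python
import itertools
import numpy as np

vals = np.arange(-2, 3)   # -2 .. 2
idx = np.array(list(itertools.product(vals, repeat=3)))
print(idx.shape)          # (125, 3)
print(idx[0], idx[-1])    # [-2 -2 -2] [2 2 2]
```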
Is there a way to add '-march=native' flag to gcc for the build?
In the following code
c = np.multiply (a, b.conj())
d = np.abs (np.sum (c, axis=0)/rows)
d2 = np.abs (np.tensordot (a, b.conj(), ((0,),(0,)))/rows)
print a.shape, b.shape, d.shape, d2.shape
The 1st compute steps, where I do multiply and then sum g
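The two computations differ: multiply-then-sum gives only the column-by-column products, while tensordot forms the full cross matrix, whose diagonal matches the first result. A sketch of the relationship (the shapes are assumptions for illustration):

```python
import numpy as np

rows = 5   # assumed shapes (rows, 3), purely for illustration
rng = np.random.default_rng(0)
a = rng.standard_normal((rows, 3)) + 1j * rng.standard_normal((rows, 3))
b = rng.standard_normal((rows, 3)) + 1j * rng.standard_normal((rows, 3))

d = np.abs((a * b.conj()).sum(axis=0) / rows)             # shape (3,)
t = np.tensordot(a, b.conj(), axes=((0,), (0,))) / rows   # shape (3, 3)
print(np.allclose(d, np.abs(np.diag(t))))                 # True
```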
Bradley M. Froehle wrote:
> Hi Neal:
>
> The tensordot part:
> np.tensordot (a, b.conj(), ((0,),(0,))
>
> is returning a (13, 13) array whose [i, j]-th entry is sum( a[k, i] *
> b.conj()[k, j] for k in xrange(1004) ).
>
> -Brad
>
>
> The print statement outputs this:
>>
>> (1004, 13) (100
Henry Gomersall wrote:
> Some of you may be interested in the latest release of my FFTW bindings.
> It can now serve as a drop in replacement* for numpy.fft and
> scipy.fftpack.
>
> This means you can get most of the speed-up of FFTW with a one line code
> change or monkey patch existing librarie
I tried to save a vector as a csv, but it didn't work.
The vector is:
a[0,0]
array([-0.70710678-0.70710678j, 0.70710678+0.70710678j,
0.70710678-0.70710678j, 0.70710678+0.70710678j,
-0.70710678-0.70710678j, 0.70710678-0.70710678j,
-0.70710678+0.70710678j, -0.70710678+0.7071
Robert Kern wrote:
> On Wed, Feb 20, 2013 at 1:25 PM, Neal Becker wrote:
>> I tried to save a vector as a csv, but it didn't work.
>>
>> The vector is:
>> a[0,0]
>> array([-0.70710678-0.70710678j, 0.70710678+0.70710678j,
>> 0.7
Nathaniel Smith wrote:
> On Tue, Mar 12, 2013 at 9:25 PM, Nathaniel Smith wrote:
>> On Mon, Mar 11, 2013 at 9:46 AM, Robert Kern wrote:
>>> On Sun, Mar 10, 2013 at 6:12 PM, Siu Kwan Lam wrote:
My suggestion to overcome (1) and (2) is to allow the user to select
between the two impleme
I guess I talked to you about 100 years ago about sharing state between numpy
rng and code I have in c++ that wraps boost::random. So is there a C-api for
this RandomState object I could use to call from c++? Maybe I could do
something with that.
The c++ code could invoke via the python api,
Neal Becker wrote:
> I guess I talked to you about 100 years ago about sharing state between numpy
> rng and code I have in c++ that wraps boost::random. So is there a C-api for
> this RandomState object I could use to call from c++? Maybe I could do
> something with that.
>
Visiting http://docs.scipy.org/doc/numpy/reference/, as search for
as_strided
or
stride_tricks
shows nothing (useful).
For that matter, I don't see a reference to numpy.lib.
The file is pickle saved on i386 and loaded on x86_64. It contains a numpy
array (amongst other things).
On load it says:
RuntimeError: invalid signature
Is binary format not portable?
Short-circuiting find would be nice. Right now, to 'find' something you first
make a bool array, then iterate over it. If all you want is the first index
where x[i] == e, not very efficient.
What I just described is a find with a '==' predicate. Not sure if it's
worthwhile to consider other p
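A common partial workaround, as a sketch: np.argmax on the boolean mask finds the first True, though it still builds (and scans) the full mask rather than short-circuiting:

```python
import numpy as np

x = np.array([3, 7, 7, 1])
e = 7
mask = (x == e)
first = int(np.argmax(mask)) if mask.any() else -1   # first index with x == e
print(first)   # 1
```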
Can I arrange to reinterpret an array of complex of length N as an array of
float of length 2N, and vice-versa? If so, how?
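`ndarray.view` reinterprets the same bytes in place; a sketch (this assumes the array is contiguous, so real/imag pairs are adjacent in memory):

```python
import numpy as np

c = np.array([1 + 2j, 3 + 4j])   # complex128, length 2
f = c.view(np.float64)           # float64, length 4: real/imag interleaved
c2 = f.view(np.complex128)       # and back, still no copy
print(f)                         # [1. 2. 3. 4.]
```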
I'm wondering what are good choices for fast numpy array serialization?
mmap: fast, but I guess not self-describing?
hdf5: ?
pickle: self-describing, but maybe not fast?
others?
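For single arrays, NumPy's own .npy format (np.save / np.load) is one answer that is both fast and self-describing; a sketch, using an in-memory buffer in place of a file:

```python
import io
import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

buf = io.BytesIO()            # stands in for an on-disk file
np.save(buf, a)               # .npy: records dtype and shape in a header
buf.seek(0)
b = np.load(buf)
print(np.array_equal(a, b))   # True
```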
Neal Becker wrote:
> I'm wondering what are good choices for fast numpy array serialization?
>
> mmap: fast, but I guess not self-describing?
> hdf5: ?
> pickle: self-describing, but maybe not fast?
> others?
I think, in addition, that hdf5 is the only one that easily int
Maybe I'm being dense today, but I don't see how to iterate over arrays with
write access. You could read through iterators like:
fl = u.flat
for item in fl:
    print item
but you can't do
for item in fl:
    item = 10
(or, it won't do what you want).
Is there any way to do this?
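One way is np.nditer with op_flags=['readwrite'], where assigning to `item[...]` (rather than rebinding `item`) writes back into the array; a sketch:

```python
import numpy as np

u = np.arange(4.0)
with np.nditer(u, op_flags=['readwrite']) as it:
    for item in it:
        item[...] = 10   # assigning to item[...] writes back into u
print(u)                 # [10. 10. 10. 10.]
```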
I have a set of experiments that I want to plot. There will be many plots.
Each will show different test conditions.
Suppose I put each of the test conditions and results into a recarray. The
recarray could be:
arr = np.empty ((#experiments,), dtype=[('x0',int), ('x1',int), ('y0',int)]
wher
r e in u))
>
> (note: it is inefficient written this way though)
>
> -=- Olivier
>
> 2011/6/23 Neal Becker
>
>> I have a set of experiments that I want to plot. There will be many plots.
>> Each will show different test conditions.
>>
>> Suppose I put
josef.p...@gmail.com wrote:
> On Thu, Jun 23, 2011 at 8:20 AM, Neal Becker wrote:
>> Olivier Delalleau wrote:
>>
>>> What about :
>>> dict((k, [e for e in arr if (e['x0'], e['x1']) == k]) for k in cases)
>>> ?
>>
>> Not ba
I was pleasantly surprised to find that recarrays can have a recursive
structure, and can be defined using a nice syntax:
dtype=[('deltaf', float),('eq', bool),('filt',
[('pulse', 'a10'), ('alpha', float)])]
However, this I discovered just by guessing. It would be good to mention this
in the
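A small usage sketch of the nested dtype from the post (spelling the string field 'S10', which is what 'a10' denotes):

```python
import numpy as np

dt = np.dtype([('deltaf', float), ('eq', bool),
               ('filt', [('pulse', 'S10'), ('alpha', float)])])
a = np.zeros(2, dtype=dt)
a['filt']['alpha'] = [0.5, 0.25]   # nested field access writes through
print(a['filt']['alpha'])
```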
Just 1 question before I look more closely. What is the cost to the non-MA
user
of this addition?
Just trying it out with 1.6:
np.datetime64('now')
Out[6]: 2011-07-01 00:00:00
Well the time now is 07:01am. Is this expected behaviour?
I thought I'd try to speed up numpy on my fedora system by rebuilding the atlas
package so it would be tuned for my machine. But when I do:
rpmbuild -ba -D 'enable_native_atlas 1' atlas.spec
it fails with:
res/zgemvN_5000_100 : VARIATION EXCEEDS TOLERENCE, RERUN WITH HIGHER REPS.
A bit of go
Charles R Harris wrote:
> On Tue, Jul 5, 2011 at 7:45 AM, Neal Becker wrote:
>
>> I thought I'd try to speed up numpy on my fedora system by rebuilding the
>> atlas
>> package so it would be tuned for my machine. But when I do:
>>
>> rpmbuild -ba -D
Charles R Harris wrote:
> On Tue, Jul 5, 2011 at 8:37 AM, Charles R Harris
> wrote:
>
>>
>>
>> On Tue, Jul 5, 2011 at 8:13 AM, Neal Becker wrote:
>>
>>> Charles R Harris wrote:
>>>
>>> > On Tue, Jul 5, 2011 at 7:45 AM, Neal Becker
Christopher Barker wrote:
> Dag Sverre Seljebotn wrote:
>> Here's an HPC perspective...:
>
>> At least I feel that the transparency of NumPy is a huge part of its
>> current success. Many more than me spend half their time in C/Fortran
>> and half their time in Python.
>
> Absolutely -- and this
Warning: invalid value encountered in divide
No traceback. How can I get more info on this? Can this warning be converted
to an exception so I can get a trace?
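Yes: np.seterr (or the np.errstate context manager) can turn floating-point warnings into raised FloatingPointError exceptions, which carry a traceback; a sketch:

```python
import numpy as np

with np.errstate(invalid='raise'):   # or np.seterr(invalid='raise') globally
    try:
        r = np.array([0.0]) / np.array([0.0])   # 0/0 -> "invalid value"
        caught = False
    except FloatingPointError:
        caught = True
print(caught)   # True
```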
There's a boatload of options for nditer. I need a simple explanation, maybe a
few simple examples. Is there anything that might help?
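A minimal pair of examples, as a sketch: plain read-only iteration, and iteration that tracks each element's multi-index:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

vals = [int(x) for x in np.nditer(a)]   # default: read-only, C-order elements
print(vals)                             # [0, 1, 2, 3, 4, 5]

it = np.nditer(a, flags=['multi_index'])
pairs = [(it.multi_index, int(x)) for x in it]   # each value with its (i, j)
print(pairs[0], pairs[-1])                       # ((0, 0), 0) ((1, 2), 5)
```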
I've encountered something weird about numpy.void.
arr = np.empty ((len(results),), dtype=[('deltaf', float),
                                        ('quantize', [('int', int), ('frac', int)])])
for i,r in enumerate (results):
    arr[i] = (r[0]['deltaf'],
              tuple(r[0]['quantize_mf'