On Mon, Mar 2, 2009 at 00:50, Stephen Simmons wrote:
> Hi,
>
> Can anyone help me out with a simple way to vectorize this loop?
>
> # idx and vals are arrays with indexes and values used to update array data
> # data = numpy.ndarray(shape=(100,100,100,100), dtype='f4')
> flattened = data.ravel()
>
Hi,
Can anyone help me out with a simple way to vectorize this loop?
# idx and vals are arrays with indexes and values used to update array data
# data = numpy.ndarray(shape=(100,100,100,100), dtype='f4')
flattened = data.ravel()
for i in range(len(vals)):
    flattened[idx[i]] += vals[i]
Many thanks,
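For what it's worth, a minimal sketch of the vectorized update (tiny stand-in arrays in place of the 100**4 data; the key wrinkle is that plain fancy indexing `flattened[idx] += vals` silently drops repeated indices, so the unbuffered `np.add.at` is used, with `np.bincount` as an alternative):

```python
import numpy as np

# Small stand-in for the real (100, 100, 100, 100) float32 array.
data = np.zeros((4, 4), dtype='f4')
idx = np.array([0, 3, 0, 5])                      # note the repeated index 0
vals = np.array([1.0, 2.0, 3.0, 4.0], dtype='f4')

flattened = data.ravel()            # a view, so updates land in `data`
np.add.at(flattened, idx, vals)     # unbuffered +=, accumulates duplicates

# Equivalent for non-negative integer indices:
# flattened += np.bincount(idx, weights=vals, minlength=flattened.size)
```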
Hi Stéfan,
>>> http://github.com/stefanv/bilateral.git
>>
>> Cool! Does this, out of curiosity, break things for you? (Or Nadav?)
>
> I wish I had some way to test. Do you maybe have a short example that
> I can convert to a test?
Here's my test case for basic working-ness (e.g. non-exception-raising):
Gideon Simpson wrote:
> So I have some data sets of about 16 floating point numbers stored
> in text files. I find that loadtxt is rather slow. Is this to be
> expected? Would it be faster if it were loading binary data?
Depending on the format you may be able to use numpy.fromfile, which should be much faster.
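If the text layout is simple (plain whitespace-separated numbers), a minimal sketch, with a hypothetical file name and a two-column layout assumed; fromfile returns a flat array, so the shape has to be restored by hand:

```python
import numpy as np

# Write a small whitespace-separated sample file (hypothetical name/layout).
with open("data.txt", "w") as f:
    f.write("1.0 2.0\n3.0 4.0\n")

# fromfile skips loadtxt's per-line Python parsing; sep=" " also eats
# newlines, and the result is 1-D, so reshape restores the columns.
arr = np.fromfile("data.txt", dtype=float, sep=" ").reshape(-1, 2)
```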
On Sun, Mar 1, 2009 at 11:29 AM, Michael Gilbert
wrote:
> On Sun, 1 Mar 2009 16:12:14 -0500 Gideon Simpson wrote:
>
>> So I have some data sets of about 16 floating point numbers stored
>> in text files. I find that loadtxt is rather slow. Is this to be
>> expected? Would it be faster if it were loading binary data?
On Sun, 1 Mar 2009 14:29:54 -0500 Michael Gilbert wrote:
> I have rewritten loadtxt to be smarter about allocating memory, but
> it is slower overall and doesn't support all of the original
> arguments/options (yet).
I had meant to say that my version is slower for smaller data sets (when
you ar
On Sun, 1 Mar 2009 16:12:14 -0500 Gideon Simpson wrote:
> So I have some data sets of about 16 floating point numbers stored
> in text files. I find that loadtxt is rather slow. Is this to be
> expected? Would it be faster if it were loading binary data?
I have run into this as well.
2009/3/2 Zachary Pincus :
>> http://github.com/stefanv/bilateral.git
>
> Cool! Does this, out of curiosity, break things for you? (Or Nadav?)
I wish I had some way to test. Do you maybe have a short example that
I can convert to a test?
> I'm all for it. I've got a few other bits lying around th
> 2009/3/1 Zachary Pincus :
>> Dag, the cython person who seems to deal with the numpy stuff, had
>> this to say:
>>> - cimport and import are different things; you need both.
>>> - The "dimensions" field is in Cython renamed "shape" to be closer
>>> to the Python interface. This is done in Cython/Includes/numpy.pxd
Hey Zach,
2009/3/1 Zachary Pincus :
> Dag, the cython person who seems to deal with the numpy stuff, had
> this to say:
>> - cimport and import are different things; you need both.
>> - The "dimensions" field is in Cython renamed "shape" to be closer
>> to the Python interface. This is done in Cython/Includes/numpy.pxd
Zach,
I put the source my Cython generated here:
http://mentat.za.net/refer/bilateral_base.c
Can you try to compile that?
Cheers
Stéfan
2009/3/1 Zachary Pincus :
>>> Well, the latest cython doesn't help -- both errors still appear as
>>> below. (Also, the latest cython can't run the numpy tests either.)
Hi guys,
Dag, the cython person who seems to deal with the numpy stuff, had
this to say:
> - cimport and import are different things; you need both.
> - The "dimensions" field is in Cython renamed "shape" to be closer
> to the Python interface. This is done in Cython/Includes/numpy.pxd
After
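For anyone following along, a hypothetical minimal .pyx fragment illustrating both of Dag's points (the function and the typed buffer are illustrative only):

```cython
cimport numpy as cnp   # compile-time: C-level declarations from numpy.pxd
import numpy as np     # run-time: the ordinary Python module; both are needed

def first_dim(cnp.ndarray[cnp.float64_t, ndim=2] a):
    # What the C struct calls "dimensions" is exposed as .shape in Cython.
    return a.shape[0]
```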
On Sun, Mar 1, 2009 at 15:12, Gideon Simpson wrote:
> So I have some data sets of about 16 floating point numbers stored
> in text files. I find that loadtxt is rather slow. Is this to be
> expected?
Probably. You don't say exactly what you mean by "slow", so it's
difficult to tell. But it
So I have some data sets of about 16 floating point numbers stored
in text files. I find that loadtxt is rather slow. Is this to be
expected? Would it be faster if it were loading binary data?
-gideon
1. "dimensions" is a field in the C struct, that describes the array object.
2. Is there a chance that the header file numpy/arrayobject.h belongs to a
different numpy version that you run?
I am not very experienced with cython (I suppose that Stefan has some
experience).
As you said, probably
>> Well, the latest cython doesn't help -- both errors still appear as
>> below. (Also, the latest cython can't run the numpy tests either.)
>> I'm
>> befuddled.
>
> That's pretty weird. Did you remove the .so that was build as well as
> any source files, before doing build_ext with the latest Cython?
On Sun, Mar 1, 2009 at 10:37 AM, Kevin Dunn wrote:
> Hi everyone,
>
> I'm subclassing Numpy's MaskedArray to create a data class that handles
> missing data, but adds some extra info I need to carry around. However I've
> been having problems keeping this extra info attached to the subclass
> instances after performing operations on them.
Hi everyone,
I'm subclassing Numpy's MaskedArray to create a data class that handles
missing data, but adds some extra info I need to carry around. However I've
been having problems keeping this extra info attached to the subclass
instances after performing operations on them.
The bare-bones scr
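A minimal sketch of one way to keep the extra attribute attached (the DataArray class and its `info` field are illustrative, not the poster's actual code; it relies on numpy.ma's internal `_optinfo` dict, which `_update_from` copies onto the results of operations, views, and slices):

```python
import numpy as np
import numpy.ma as ma

class DataArray(ma.MaskedArray):
    def __new__(cls, data, info=None, **kwargs):
        obj = ma.MaskedArray.__new__(cls, data, **kwargs)
        # _optinfo is an implementation detail of numpy.ma, but it is
        # propagated to results of arithmetic and slicing for us.
        obj._optinfo['info'] = info
        return obj

    @property
    def info(self):
        # Read back through _optinfo so the value survives operations.
        return self._optinfo.get('info')

a = DataArray([1.0, 2.0, 3.0], info='units: kg', mask=[0, 1, 0])
b = a * 2      # arithmetic result keeps the metadata
c = a[1:]      # so does a slice
```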
On Sun, Mar 1, 2009 at 12:26 PM, Bruce Southey wrote:
> Or just a Vista bug.
>
> A possible option could be using wine64
> (http://wiki.winehq.org/Wine64) . But is probably more work and if
> even if it did work it may not be informative.
Win64 is not yet usable AFAIK - it was only a few weeks /