Can you teach me how to use the array API in C/C++?
1. How do I get the data at coordinate (i, j)?
For example, given a = array([[1,2,3],[4,5,6]]), how do I get the value 5 in C/C++?
2. How do I sum all of the data in an array in C/C++?
Best regards.
Andrea Gavana wrote:
> I have tried the solutions proposed in the previous thread and it
> looks like Chris' one is the fastest for my purposes.
whoo hoo! What do I win? ;-)
> Splitting the reading process between 4 processes will require the
> exchange of 5-20 MB from the child processes to
Robert Kern wrote:
> On Tue, May 26, 2009 at 00:50, Christopher Barker
> wrote:
>> I assumed so, and I also assume you took a look at netcdf3, but since
>> it's been brought up here, I take it it didn't fit the bill?
> Lack of unsigned and 64-bit integers for the most part. But even if
> they we
2009/5/26 Charles سمير Doutriaux :
> Hi there,
>
> One of our users just found a bug in numpy that has to do with casting.
>
> Consider the attached example.
>
> The difference at the end should be 0 (zero) everywhere.
>
> But it's not by default.
>
> Casting the data to 'float64' at reading and a
On Tue, May 26, 2009 at 00:50, Christopher Barker wrote:
> Robert Kern wrote:
>> Yes. That's why I wrote the NPY format instead. I *did* do some due
>> diligence before I designed a new binary format.
>
> I assumed so, and I also assume you took a look at netcdf3, but since
> it's been brought up
Hi there,
One of our users just found a bug in numpy that has to do with casting.
Consider the attached example.
The difference at the end should be 0 (zero) everywhere.
But it's not by default.
Casting the data to 'float64' at reading and assigning to the arrays
works
Defining the arrays
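The attached example isn't reproduced in the archive, but the precision effect the report describes can be sketched (this is an assumption about the mechanism, not the attached code): values held as float32 carry rounding error that no later cast to float64 can undo, so a difference against a float64 reference only comes out exactly zero when the data is float64 from the start.

```python
import numpy as np

# Assumed sketch of the reported effect, not the attached example:
# storing float64 data as float32 rounds it, and casting back to
# float64 afterwards cannot recover the lost bits.
ref = np.array([0.1, 0.2, 0.3])           # float64 reference
read32 = ref.astype(np.float32)           # "reading" yields float32
diff = read32.astype(np.float64) - ref    # cast after the fact
print(diff)                               # small nonzero residuals

read64 = ref.astype(np.float64)           # cast at reading time
print(read64 - ref)                       # exactly zero everywhere
```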
Hi All,
I have tried the solutions proposed in the previous thread and it
looks like Chris' one is the fastest for my purposes. Now, I have a
question which is probably more conceptual than
implementation-related.
I started this little thread as my task is to read medium to
(relatively) big u
On May 25, 2009, at 10:59 PM, Joe Harrington wrote:
> Let's keep this thread focussed on the original issue:
>
> just add a floating array of times to irr or a new xirr
> continuous interest
> no more
>
> Anyone can use the timeseries package to produce a floating array of
> times from normal dat
I rewrote irr to use the iterative solver instead of polynomial roots
so that it can also handle large arrays. For 3000 values, I had to
kill the current np.irr since I didn't want to wait longer than 10
minutes
When writing the test, I found that npv is missing a "when" keyword,
for the case when
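The rewrite itself isn't attached here; as a sketch of the approach described — an iterative root-finder for NPV(r) = 0 in place of polynomial roots on a 3000-term polynomial — a plain Newton iteration looks like this (function and parameter names are illustrative, not the actual patch):

```python
import numpy as np

def irr_newton(values, guess=0.1, tol=1e-9, maxiter=100):
    """Find the rate r with NPV(r, values) == 0 by Newton's method.

    Avoids building and solving the degree-n polynomial that np.roots
    would need, so it stays cheap even for thousands of cash flows.
    """
    values = np.asarray(values, dtype=float)
    periods = np.arange(values.size)
    r = guess
    for _ in range(maxiter):
        npv = np.sum(values * (1.0 + r) ** (-periods))
        # Analytic derivative of NPV with respect to r
        dnpv = np.sum(values * -periods * (1.0 + r) ** (-periods - 1))
        step = npv / dnpv
        r -= step
        if abs(step) < tol:
            return r
    raise RuntimeError("IRR iteration did not converge")
```

For values = [-100, 110] this returns r = 0.10, the rate at which the discounted cash flows cancel.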
On Tue, May 26, 2009 at 1:55 AM, Nicolas Rougier
wrote:
>
> Hello,
>
> I've come across what is probably a bug in size check for large arrays:
>
> >>> import numpy
> >>> z1 = numpy.zeros((255*256,256*256))
> Traceback (most recent call last):
> File "", line 1, in
> ValueError: dimensions too la
Would you like to put xirr in econpy until
it finds a home in SciPy? (Might as well
make it available.)
Cheers,
Alan Isaac
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
David Cournapeau wrote:
> Francesc Alted wrote:
>> Well, it is Andrew who should demonstrate that his measurement is correct,
>> but
>> in principle, 4 cycles/item *should* be feasible when using 8 cores in
>> parallel.
>
> But the 100x speed increase is for one core only unless I misread the
>
>> The issue with OpenCL is that there will be some extensions for each
>> supported architecture, which means that the generic OpenCL will never
>> be very fast or more exactly near the optimum.
>
> what's the difference w/ OpenGL ?
> i.e. isn't the job of the "underlying" library to provide the b
On Tuesday 26 May 2009 14:08:32 Matthieu Brucher wrote:
> 2009/5/26 Gael Varoquaux :
> > On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
> >> Olivier Grisel wrote:
> >> > Also note: nvidia is about to release the first implementation of an
> >> > OpenCL runtime based on cuda. OpenCL is
2009/5/26 Gael Varoquaux :
> On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
>> Olivier Grisel wrote:
>
>> > Also note: nvidia is about to release the first implementation of an
>> > OpenCL runtime based on cuda. OpenCL is an open standard such as OpenGL
>> > but for numerical computin
On Tue, May 26, 2009 at 07:43:02AM -0400, Neal Becker wrote:
> Olivier Grisel wrote:
> > Also note: nvidia is about to release the first implementation of an
> > OpenCL runtime based on cuda. OpenCL is an open standard such as OpenGL
> > but for numerical computing on stream platforms (GPUs, Cell
Olivier Grisel wrote:
> Also note: nvidia is about to release the first implementation of an
> OpenCL runtime based on cuda. OpenCL is an open standard such as OpenGL
> but for numerical computing on stream platforms (GPUs, Cell BE, Larrabee,
> ...).
>
You might be interested in pycuda.
Hello,
I've come across what is probably a bug in size check for large arrays:
>>> import numpy
>>> z1 = numpy.zeros((255*256,256*256))
Traceback (most recent call last):
File "", line 1, in
ValueError: dimensions too large.
>>> z2 = numpy.zeros((256*256,256*256))
>>> z2.shape
(65536, 65536)
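A plausible reading of this pair of results (an assumption, not confirmed in the thread: on a 32-bit build the total element count is computed in a signed 32-bit integer): 65280*65536 overflows to a negative number, which trips the "dimensions too large" check, while 65536*65536 is exactly 2**32 and wraps to zero, which slips past it. A sketch of that arithmetic:

```python
def as_int32(n):
    """Wrap an integer into a signed 32-bit value, as C int arithmetic would."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

print(as_int32(255 * 256 * 256 * 256))  # -16777216: negative, so rejected
print(as_int32(256 * 256 * 256 * 256))  # 0: wraps around, passes the check
```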
Also note: nvidia is about to release the first implementation of an OpenCL
runtime based on cuda. OpenCL is an open standard such as OpenGL but for
numerical computing on stream platforms (GPUs, Cell BE, Larrabee, ...).
--
Olivier
On May 26, 2009 8:54 AM, "David Cournapeau"
wrote:
Brennan Wil
Francesc Alted wrote:
>
> Well, it is Andrew who should demonstrate that his measurement is correct,
> but
> in principle, 4 cycles/item *should* be feasible when using 8 cores in
> parallel.
But the 100x speed increase is for one core only unless I misread the
table. And I should have mentione