On Wednesday 09 April 2008, Stéfan van der Walt wrote:
> On 09/04/2008, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > Well, I agree that Greg Ewing (the Pyrex creator) has possibly not
> > been very speedy in adding the suggested patches (Greg has his own
> > thoughts
On Wednesday 09 April 2008, Fernando Perez wrote:
> On Wed, Apr 9, 2008 at 12:25 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > I don't expect you to have problems in that regard either.
> > However, I've been having problems compiling perfectly valid Pyrex
>
On Wednesday 09 April 2008, Francesc Altet wrote:
> On Wednesday 09 April 2008, Andrew Straw wrote:
> > This is off-topic and should be directed to the pyrex/cython list,
> > but since we're on the subject:
> >
> > I suppose the following is true, but let m
piler. I just haven't had time to locate where the problem
is and report it, but people using Pyrex and planning to migrate to
Cython must be aware that this sort of thing happens.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Da
ever be included in NumPy/SciPy.
It looks like a compiler, so having the same licensing as GCC
shouldn't bother the NumPy community, IMO.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
ve those
> kids access to simple financial calculations for learning about
> time-preference for money and the value of saving.
Yeah :) +1 for including these in NumPy.
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
On Sunday 23 March 2008, Francesc Altet wrote:
> On Sunday 23 March 2008, Anne Archibald wrote:
> > On 23/03/2008, Damian Eads <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> > > I am working on a memory-intensive experiment with very large
> >
he output will require new space. The usage should be
something like:
In [11]: y = numpy.random.normal(0, 10, 10)
In [12]: numexpr.evaluate('where(y<0, -1, y)')
Out[12]:
array([ 7.11784295, -1., 10.92876842, -1.,
0.76092629, -1., 14.07021
see a venerable Pentium 4 running this code 2x faster
than a powerful AMD Opteron for small datasets (<1), and with
speed similar to recent Core2 processors. I suppose the first-level
cache in Pentiums is pretty fast.
Cheers,
--
Francesc Al
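The where() call quoted above can be sketched with plain NumPy, whose where() has the same semantics as numexpr's (a minimal sketch; the data is hypothetical):

```python
import numpy as np

# numexpr.evaluate('where(y < 0, -1, y)') computes the same result as
# np.where, only without materializing Python-level temporaries:
y = np.random.normal(0, 10, 10)
result = np.where(y < 0, -1, y)

assert result.shape == y.shape
assert (result[y < 0] == -1).all()
assert (result[y >= 0] == y[y >= 0]).all()
```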
SSE instructions
(not sure whether SSE supports this sort of operations, though).
At any rate, this is exactly the kind of parallel optimizations that
make sense in Numexpr, in the sense that you could obtain decent
speedups with multicore processors.
Cheers,
--
>0,0< Francesc Altet
ure whether this kind of optimization for small datasets would be
very useful in practice (read: general NumPy calculations); I'm
rather sceptical about this.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
operations like 'a+b').
In a similar way, OpenMP (or whatever parallel paradigm) will only
generally be useful when you have to deal with lots of data, and your
algorithm has the opportunity to structure it so that small
portions of it can be reused
On Tuesday 11 March 2008, Charles R Harris wrote:
> On Tue, Mar 11, 2008 at 4:00 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > On Tuesday 11 March 2008, Francesc Altet wrote:
> > > The thing that makes uint64 so special is that it is the largest
> > > int
->int64 and int-->uint64), as has
been suggested by Timothy Hochberg in the NumPy list, and adopting
modular arithmetic for dealing with overflows/underflows is probably
the most sensible solution. I don't know how difficult it would be to
implem
On Tuesday 11 March 2008, Francesc Altet wrote:
> The thing that makes uint64 so special is that it is the largest
> integer (in current processors) that has a native representation
> (i.e. the processor can operate directly on them, so they can be
> processed very fast), and besid
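A minimal sketch of the modular (wrap-around) arithmetic discussed above, using plain NumPy uint64 arrays:

```python
import numpy as np

a = np.array([2**64 - 1], dtype=np.uint64)  # the largest uint64 value
b = a + np.uint64(1)                        # overflows and wraps modulo 2**64

assert b[0] == 0
assert a[0] == np.iinfo(np.uint64).max
```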
On Monday 10 March 2008, Charles R Harris wrote:
> On Mon, Mar 10, 2008 at 11:08 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > Hi,
> >
> > In order to allow in-kernel queries in PyTables (www.pytables.org)
> > to work with unsigned 64-bit integers, we would
an 'isolated' type (much like a string
type).
We are mostly inclined to implement behaviour 2), but before proceeding,
I'd like to know what other people think about this.
Thanks,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-
them, PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjo
'uint64')
Out[89]: 0.5
shouldn't this be array(0.5)?
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
I would like to be able to
> handle them just as I do any other 1-dimensional array. I don't know
> if a length of 1 would be valid, given a shape of (), but there must
> be some consistent way of handling them.
If 0-d arrays are going to be indexab
un-indexable, mainly because it would be useful to raise an
error when you are trying to index them. In fact, I thought that when
you want a kind of scalar but indexable, you should use a 0-d array.
So, my vote is -0.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cá
')
In [36]: ad[0] = a[0]
In [37]: ad[1:] = a[1:] - a[:-1]
In [38]: ad
Out[38]: array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1])
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
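The ad construction above is the first-differences trick; numpy.diff computes the same differences directly (a small sketch; the array a is hypothetical):

```python
import numpy as np

a = np.arange(10)
ad = np.empty_like(a)
ad[0] = a[0]
ad[1:] = np.diff(a)           # identical to a[1:] - a[:-1]

# The construction is invertible: a cumulative sum recovers the original.
assert (np.cumsum(ad) == a).all()
```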
On Friday 15 February 2008, Charles R Harris wrote:
> On Fri, Feb 15, 2008 at 5:09 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > Hi Chuck,
> >
> > I've given more testing to the new quicksort routines for strings
> > in the forthcoming NumPy
On Thursday 14 February 2008, Charles R Harris wrote:
> On Thu, Feb 14, 2008 at 1:03 PM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > Maybe I'd also be interested in trying insertion sort out. During
> > the optimization process of an OPSI index, there is a need
mergesort
and heapsort of the new implementation, and I'm happy to say that
everything went very smoothly, i.e. more than 1000 tests with different
input arrays have passed flawlessly. Good job!
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos
On Thursday 14 February 2008, Charles R Harris wrote:
> On Thu, Feb 14, 2008 at 11:44 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > On Thursday 14 February 2008, Charles R Harris wrote:
> > > On Thu, Feb 14, 2008 at 10:46 AM, Francesc Altet
> > >
On Thursday 14 February 2008, Charles R Harris wrote:
> On Thu, Feb 14, 2008 at 10:46 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > Looking forward to seeing the new qsort for strings in NumPy (the
> > specific version for merge sort is very welcome too!).
>
> I coul
t with Python style compare: 0.96
> NumPy newqsort: 0.53
That's excellent, Bruce. It definitely looks like the problem with the
optimizer in 4.2.1 has been fixed in 4.2.3.
And why haven't you used optimization flags with ICC? Just curious...
Cheers,
--
>0,0< Fr
On Thursday 14 February 2008, Charles R Harris wrote:
> On Thu, Feb 14, 2008 at 9:11 AM, Francesc Altet <[EMAIL PROTECTED]>
wrote:
> > From the plot (attached), the following conclusions can be drawn:
> >
> > 1) copy_string2 (the combination of manual copy and mem
On Wednesday 13 February 2008, Charles R Harris wrote:
> On Feb 13, 2008 10:56 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > Be warned, I'd like to stress that these are my figures for my
> > _own laptop_. It would be nice if you can verify all of this with
>
problem
with the optimizer introduced in 4.2.1. Very good!
By the way, it's nice to see the wide range of platforms that this list
allows one to test on :-)
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
On Wednesday 13 February 2008, Scott Ransom wrote:
> On Wednesday 13 February 2008 02:37:37 pm Francesc Altet wrote:
> > So, I'd say that the guilty is the gcc 4.2.1, 64-bit (or at very
> > least, AMD Opteron architecture) and that newqsort performs really
> > well in
On Wednesday 13 February 2008, Francesc Altet wrote:
> On Wednesday 13 February 2008, Bruce Southey wrote:
> > Hi,
> > I added gcc 4.2 from the openSUSE 10.1 repository so I now have
> > both the 4.1.2 and 4.2.1 compilers installed. But still have
> > glibc-2.4
s (your Core2 machine seems different enough). I can run
the benchmarks on Windows (installed on the same laptop) too. Tell me
if you are interested in me doing this.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
o, it is becoming more and more clear that newqsort is potentially much
faster than C's qsort: you have seen a 2x speedup, Chuck a 3x and me
up to a 3.8x. The only remaining issue is finding a good enough compiler
(or the correct flags) to take advantage of all of its potential
rformance very recently (i.e. I hope it is not only a fix in SuSe
10.3 Enterprise), and this is why most current distros are seeing
the poor performance in qsort.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
On Tuesday 12 February 2008, Charles R Harris wrote:
> On Feb 12, 2008 9:07 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > * The newqsort performs the best on all the platforms we have
> > checked (ranging from a 5% improvement on Opteron/SuSe, up to
> > 3.8
The newqsort performs the best on all the platforms we have checked
(ranging from a 5% improvement on Opteron/SuSe, up to 3.8x with some
Pentium4/Ubuntu systems).
All in all, I'd also say that newqsort would be a good candidate to be
put into NumPy.
Cheers,
--
>0,0< Francesc
On Monday 11 February 2008, Charles R Harris wrote:
> On Feb 11, 2008 1:15 PM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > Here are the results of running it in several platforms:
> >
> > 1) My laptop: Ubuntu 7.1 (gcc 4.1.3, Pentium 4 @ 2 GHz)
> > Benchmar
l behave well on your platform too (i.e.
newqsort will perform the best ;)
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
#include
#include
#include
#define NUM 100
#define LEN 15
/* This cannot be inlined by MSV
On Monday 11 February 2008, Francesc Altet wrote:
> On Monday 11 February 2008, Charles R Harris wrote:
> > That's with the current sort(kind='q') in svn, which uses the new
> > string compare function but is otherwise the old default quicksort.
> > The new
On Monday 11 February 2008, Charles R Harris wrote:
> On Feb 8, 2008 5:29 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I'm a bit confused that the sort method of a string character
> > doesn't
> >
> > allow a mergesort:
ocstrings, so
that the user wanting extreme performance can use this approach.
Still, for string sizes greater than, say, 1000, an automatic
selection of the indirect method is very tempting indeed.
Cheers,
--
>0,0< Francesc Altet http://www.cara
On Saturday 09 February 2008, Charles R Harris wrote:
> On Feb 9, 2008 2:07 PM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > On Saturday 09 February 2008, Charles R Harris wrote:
> > > > So, strncmp1 is not only faster than its C counterpart, but
> > >
earch on this, I think
it would be safer to use the system memcpy for string sorting in NumPy.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
cmp, is incorrect and needs to be fixed. It seems that
> strings with zeros are not part of the current test series ;)
Yeah, that's right. And yes, it would be advisable to have at least a
couple of tests with zeros interspersed throughout the string.
Cheers,
--
>0,0< Francesc
hon/NumPy the end is defined by a length
property (btw, the same as in Pascal, if you know it).
So, strncmp1 is not only faster than its C counterpart, but also the one
doing the correct job with NumPy (unicode) strings.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V
On Friday 08 February 2008, Charles R Harris wrote:
> On Feb 8, 2008 10:31 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > On Friday 08 February 2008, Francesc Altet wrote:
> > > On Friday 08 February 2008, Charles R Harris wrote:
> > > > > Also, in
On Friday 08 February 2008, Charles R Harris wrote:
> On Feb 8, 2008 8:58 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
> > On Friday 08 February 2008, Charles R Harris wrote:
> > > > Also, in the context of my work in indexing, and because of the
> > > >
On Friday 08 February 2008, Francesc Altet wrote:
> On Friday 08 February 2008, Charles R Harris wrote:
> > > Also, in the context of my work in indexing, and because of the
> > > slowness of the current implementation in NumPy, I've ended with
> > > an imp
> NumPy developers, tell me and I will provide the code.
>
> I have some code for this too and was going to merge it. Send yours
> along and I'll get to it this weekend.
Ok, great. I'm attaching it. Tell me if you need some clarification on
the code.
Cheers,
--
>0,0<
ing and copy. If this is of interest for NumPy developers,
tell me and I will provide the code.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
mples in a range, but without using an
intermediate array to create it (memory footprint is important).
I know that I can create this with a loop, but I'm curious whether it
can be done more compactly in a single statement.
Thanks,
--
>0,0< Francesc Altet http://www.carabos.com/
On Thursday 31 January 2008, Francesc Altet wrote:
> On Wednesday 30 January 2008, Timothy Hochberg wrote:
> > [...a fine explanation by Anne and Timothy...]
>
> Ok. As it seems that this subject has interest enough, I went ahead
> and created a small document about
hear some comments about this proposal from the community.
At any rate, I've placed the "ViewsVsCopies" entry under a new section
that I created in the cookbook, that I called "Advanced topics", but
I'm open to suggestions on a better name/place.
Chee
= R[1:3,:][:,1:3]
In [70]: S[:] = 2
In [71]: R
Out[71]:
array([[0, 1, 2],
[3, 2, 2],
[6, 2, 2]])
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
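A small sketch of the view semantics shown above: chained basic slices return a view, so writing through S modifies R, while fancy indexing returns a copy (R here follows the example's values):

```python
import numpy as np

R = np.arange(9).reshape(3, 3)
S = R[1:3, :][:, 1:3]     # basic slicing: S is a view on R
S[:] = 2                  # writing through the view modifies R
assert (R == [[0, 1, 2], [3, 2, 2], [6, 2, 2]]).all()

C = R[[1, 2]][:, [1, 2]]  # fancy indexing: C is a copy
C[:] = 99                 # R is untouched
assert R.max() == 6
```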
er can reproduce your problem.
Perhaps you are suffering from some other problem that can be exposed
by this code snippet.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
I'm not sure whether this object would be much better than a
dictionary of homogeneous arrays.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
On Monday 07 January 2008, Nils Wagner wrote:
> On Mon, 7 Jan 2008 19:42:40 +0100
>
> Francesc Altet <[EMAIL PROTECTED]> wrote:
> > On Monday 07 January 2008, Nils Wagner wrote:
> >> >>> numpy.sqrt(numpy.array([-1.0],
> >>
> >>dtype
>
> >>> numpy.__version__
>
> '1.0.5.dev4673'
It seems like you are using a 64-bit platform, and they tend to have
complex256 (quad-precision) types instead of complex192
(extended-precision) typical in 32-bit platforms.
Cheers,
--
s
seems your case) in the OS page cache. Then, the second time that your code has to
read the data, the OS only has to retrieve it from its cache (i.e. from memory)
rather than from disk.
You can do this with whatever technique you want, but if you are after reading
from a single container and memmap
around(scale*data)/scale
[1] http://www.pytables.org/docs/manual/ch05.html#compressionIssues
[2] http://www.pytables.org/docs/manual/ch05.html#ShufflingOptim
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
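The around(scale*data)/scale recipe above keeps a fixed number of decimal digits so the shuffled/compressed data has more repeated bytes; a sketch (the scale and data values are hypothetical):

```python
import numpy as np

data = np.array([0.1234567, 1.9876543, 3.1415926])
scale = 1000.0                            # keep ~3 decimal digits
truncated = np.around(scale * data) / scale

# Each value moves by at most half a quantization step.
assert (np.abs(truncated - data) <= 0.5 / scale).all()
```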
I've always felt that numexpr belongs more in
numpy itself than in scipy. However, I agree that perhaps it should be a
bit more polished (but not much; perhaps just adding some functions
like exp, log, log10... would be enough) before being integrated. At
any rate, Numexpr would be a extre
e:
http://scipy.org/scipy/scipy/ticket/529
However, I've made a couple of mistakes during ticket creation, so:
- in #529, I've uploaded the patch twice, so they are exactly the same
- please mark #530 as invalid (duplicated)
Cheers,
--
On Wednesday 31 October 2007, Timothy Hochberg wrote:
> On Oct 31, 2007 3:18 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:
>
> [SNIP]
>
> > Incidentally, all the improvements of the PyTables flavor of
> > numexpr have been reported to the original authors, bu
from:
http://www.pytables.org/trac/browser/trunk/tables/numexpr/
Or, by installing PyTables 2.x series and importing numexpr as:
"from tables import numexpr"
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
E
umber -> numpy.Number
>- numpy.inexact -> numpy.Inexact
>- numpy.floating -> numpy.Floating
>- numpy.float32 stays the same
>
> This is probably a lot less painful in terms of backwards
> compatibility.
Yeah. I second this also.
--
>0,0< Francesc Alt
e them as they are now too. In addition to what
Robert is saying, they are very heavily used in regular NumPy programs,
and changing everything (both in code and docs) would be rather messy.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V.
.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
]
print n, [min(timer.repeat(3,1000)) for timer in timers]
gives on my machine:
10 [0.03213191032409668, 0.012019872665405273, 0.0068600177764892578]
100 [0.033048152923583984, 0.06542205810546875, 0.0076580047607421875]
1000 [0.040294170379638672, 0.59892702102661133, 0.01460099220275
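The timing loop above follows the standard timeit pattern: one Timer per candidate statement, with the minimum of a few repeats as the figure of merit (the statements and sizes here are hypothetical):

```python
import timeit

for n in (10, 100, 1000):
    # Each Timer gets its own setup so the array is rebuilt per size.
    setup = 'import numpy as np; a = np.arange(%d)' % n
    timers = [timeit.Timer(stmt, setup)
              for stmt in ('a.sum()', '(a * a).sum()')]
    # min() of repeat() filters out scheduling noise.
    results = [min(t.repeat(3, 1000)) for t in timers]
    print(n, results)
```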
> > Why does the casting using tuple() not work while cut-and-paste of
> > the a[1] record into a new variable works just fine?
>
> I answered part of the question myself. In the coercion back to
> tuple from a record, the datatypes remain nump
nique columns instead of rows, do a transpose first
on the initial array.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
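The transpose trick above can be sketched like this (np.unique's axis keyword, available in modern NumPy, performs the unique-rows step; the array is hypothetical):

```python
import numpy as np

a = np.array([[1, 2, 1],
              [3, 4, 3]])
# Unique columns: transpose so columns become rows, deduplicate the
# rows, then transpose back.
uniq_cols = np.unique(a.T, axis=0).T
assert uniq_cols.shape == (2, 2)
```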
Hi,
This has been sent to the [EMAIL PROTECTED] list, but it should be of
interest to the NumPy/SciPy lists too. Remember that you can access most of
the HDF5 files from Python by using PyTables.
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjo
he
python version that Vincent was benchmarking), so this shouldn't make a big
difference compared with other relational databases.
[1] http://www.sqlite.org/datatype3.html
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V
ently not aware
of any package taking this approach.
[1] http://www.pytables.org/trac/browser/trunk/bench/postgres_backend.py
[2] http://thread.gmane.org/gmane.comp.python.numeric.general/9704
[3] http://www.carabos.com/docs/OPSI-indexes.pdf
Cheers,
--
>0,0< Francesc Al
but with real data the
compression levels are expected to be higher than this.
[1] http://www.pytables.org/docs/manual/ch05.html#expectedRowsOptim
Cheers,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
load_tables_test.py
De
On Tue 03 Jul 2007 at 10:34 +0200, Sebastian Haase wrote:
> Thanks for the reply.
> Rethinking the question ... wasn't there an attribute named something like:
> is_native()
> ??
In [3]:a.dtype.isnative
Out[3]:True
:)
--
Francesc Altet| Be carefu
> > then, even on a little endian system
> > the comparison arr.dtype.byteorder == "<" still fails !?
> > Or are the == and != operators overloaded !?
No, this will fail. The == and != are not overloaded because
dtype.byteorder is a pure python string:
In [
eys),dtype=tuple)
> >>> key_array[:] = keys[:]
> >>> key_array
> array([('a', 1), ('b', 2)], dtype=object)
Oops. You are right. I think I was fooled by the 'dtype=tuple' argument
which is in fact equivalent to 'dtype=o
; data_array[:] = data[inds]
> key_array[:] = keys[inds]
Yeah, much simpler than my first approach.
Cheers,
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth
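The pattern in the snippet above: one argsort of the keys yields a single permutation that co-sorts any number of parallel arrays (a minimal sketch with hypothetical data):

```python
import numpy as np

keys = np.array([3, 1, 2])
data = np.array(['c', 'a', 'b'])
inds = np.argsort(keys)        # one permutation sorts both arrays
assert list(keys[inds]) == [1, 2, 3]
assert list(data[inds]) == ['a', 'b', 'c']
```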
On Wed 20 Jun 2007 at 01:38 -0700, Michael McNeil Forbes wrote:
> Hi,
>
> I have a list of tuples that I am using as keys and I would like to
> sort this along with some other arrays using argsort. How can I do
> this? I would like to do something like:
>
> # These are c
not be performed by byteswap()?
No. From Travis' "Guide to NumPy":
"""
byteswap ({False})
Byteswap the elements of the array and return the byteswapped array. If
the argument is True, then byteswap in-place and return a reference to
self. Otherwise, return a cop
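The copy-vs-in-place behaviour quoted from the Guide can be sketched as (the dtype is hypothetical):

```python
import numpy as np

a = np.array([1, 2, 3], dtype='>i4')  # big-endian storage
b = a.byteswap()                      # returns a byteswapped copy...
assert a[0] == 1                      # ...leaving a untouched
c = a.byteswap(inplace=True)          # swaps in place and returns self
assert c is a
```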
em PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
**Enjoy data!**
-- The PyTables Team
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V. |
ce *(long *)reduce_ptr
#define f_reduce *(double *)reduce_ptr
Cheers,
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth
On Thursday 24 May 2007 20:33, Francesc Altet wrote:
> Hi,
>
> Some time ago I made an improvement in speed on the numexpr version of
> PyTables so as to accelerate the operations with unaligned arrays
> (objects that can appear quite commonly when dealing with columns of
> reca
reaffirms my suspicion that something is wrong with my
copy code above (but again, I can't see where).
Of course, we can get rid of this optimization, but it is a bit
depressing to have to renounce it just because it doesn't work on
Windows :(
Thanks in advance for any hint you
Yeah, that's probably the cause of the malfunction in 64-bit processors.
I've also been bitten by this:
http://projects.scipy.org/pipermail/numpy-discussion/2006-November/024428.html
So, Numeric is currently unusable for Python 2.5 and 64-bit platforms and
probably will always be that wa
ut[24]:
In [25]:type(numpy.float64(3.)) # a numpy scalar
Out[25]:<type 'numpy.float64'>
Cheers,
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth
.dtype.descr[0][1][0] == '>')
isBigEndian = (arr.dtype.str[0] == '>')
is a little bit shorter. A more elegant approach could be:
isBigEndian = arr.dtype.isnative ^ numpy.little_endian
Cheers,
--
Francesc Altet| Be careful about using the following code --
Carab
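A sketch checking the equivalence of the two tests mentioned above (the array is hypothetical):

```python
import numpy as np

arr = np.zeros(1, dtype='>i4')   # explicitly big-endian dtype
isBigEndian = arr.dtype.isnative ^ np.little_endian
# Agrees with direct inspection of the dtype string:
assert isBigEndian == (arr.dtype.str[0] == '>')
assert isBigEndian
```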
ches).
So, at least, this leads to the conclusion that numexpr's virtual machine
is still far from getting overloaded, especially with today's
processors with 512 KB of secondary cache or more.
Cheers,
On Wednesday 14 March 2007 22:05, Francesc Altet wrote:
> Hi,
>
>
792480469, 0.25554895401000977]
I've used here the numexpr instance that comes with PyTables, but you
can use also the one in the scipy's sandbox as it also supports booleans
since some months ago.
Cheers,
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V
I think that this function
> > ranks pretty highly in convenience.
>
> I'm supportive of this. But, it can't be named numpy.load.
>
> How about
>
> numpy.loadtxt
> numpy.savetxt
+1
--
>0,0< Francesc Al
can do with NumPy:
In [3]:a=numpy.array([[1,2],[3,4]])
In [4]:a
Out[4]:
array([[1, 2],
[3, 4]])
In [5]:numpy.sqrt(a)
Out[5]:
array([[ 1., 1.41421356],
[ 1.73205081, 2.]])
Cheers,
--
Francesc Altet| Be careful about using the following code --
Cara
our code has to deal with highly multidimensional objects.
Finally, don't let benchmarks fool you. If you can, it is always better
to run your own benchmarks made of your own problems. A tool that can be
killer for one application can be just mediocre for another (that's
somewhat extreme,
helped to make and distribute this
package! And last, but not least thanks a lot to the HDF5 and NumPy
(and numarray!) makers. Without them, PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos,
t sorting, lookup, data selection...) and packaging
this in a single, compact package could be a really great contribution to the
Python community in general.
Just some thoughts,
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
h.
Travis has fixed this tonight :)
> Just trying to clean stuff up ;)
Of course, good idea!
--
>0,0< Francesc Altet http://www.carabos.com/
V V Cárabos Coop. V. Enjoy Data
"-"
tring or int/float objects), and this is very inefficient
when you have to retrieve large datasets from databases. IMHO, database
developers should learn about this capability of NumPy for the sake of
achieving efficient database interfaces for Python.
Cheers,
--
>0,0< Francesc Altet
On Sun 01 Apr 2007 at 21:35 +0200, Francesc Altet wrote:
> On Sat 31 Mar 2007 at 21:54 -0600, Travis Oliphant wrote:
> > I'm going to be tagging the tree for the NumPy 1.0.2 release tomorrow
> > evening in preparation for the r
or us,
it would be nice if it could be added to NumPy 1.0.2 final.
Good night!
--
Francesc Altet| Be careful about using the following code --
Carabos Coop. V. | I've only proven that it works,
www.carabos.com | I haven't tested it. -- Donald Knuth