> It's working on the buildbots. Did you remove the build directory first?
Oops. Great, all working now!
James
> I think this is now fixed in svn, I'm trying to see if static fixes the
> problem with the old buggy version. What optimization level is numpy being
> compiled with?
Still a problem here:
In [1]: import numpy as np
In [2]: np.__version__
Out[2]: '1.3.0.dev6011'
In [3]: np.log1p(np.array([1],dt
Hmmm... So I examined an objdump of umath.so:
objdump -d /usr/lib/python2.5/site-packages/numpy/core/umath.so > umath.asm
The relevant lines are here:
---
000292c0 <...>:
   292c0:  e9 fb ff ff ff          jmpq   292c0
   292c5:  66 66 2e 0f 1f 84 00    nopw   %cs:0x0(%rax,%rax,1)
> My guess is that this is a libm/gcc problem on x86_64, perhaps depending on
> the flags libm was compiled with. What distro are you using?
Ubuntu 8.10 amd64
> Can you try plain old log/log10 also? I'll try to put together some c code
> you can use to check things also so that you can file a bug
> Can you try checking the functions log1p and exp separately for all three
> floating types? Something like
Well, log1p seems to be the culprit:
>>> import numpy as np
>>> np.log1p(np.ones(1,dtype='f')*3)
... hangs here ...
exp is fine:
>>> import numpy as np
>>> np.exp(np.ones(1,dtype='f')*3)
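A minimal sketch of that per-type check (the dtype character codes 'f', 'd' and 'g' select single, double and long double):

import numpy as np

# Exercise log1p and exp for all three floating types; if one of the
# underlying libm entry points is broken, the corresponding call is the
# one that hangs or returns garbage.
for func in (np.log1p, np.exp):
    for code in ('f', 'd', 'g'):
        x = np.ones(1, dtype=code) * 3
        print('%s %s %r' % (func.__name__, code, func(x)))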
Python 2.5.2 (r252:60911, Oct 5 2008, 19:29:17)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.3.0.dev6005'
>>> numpy.test(verbosity=2)
...
test_umath.TestLogAddExp.test_logaddexp_values ...
The test hangs at
> I also get the same on my 64-bit linux Fedora rawhide with
> ...
Thanks, I've submitted this as ticket #952.
James
Anyone?
James
On Thu, Nov 6, 2008 at 2:53 PM, James Philbin <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I might be doing something stupid so I thought I'd check here before
> filing a bug report.
> Firstly:
> In [8]: np.__version__
> Out[8]: '1.3.0.dev5883'
Hi,
I might be doing something stupid so I thought I'd check here before
filing a bug report.
Firstly:
In [8]: np.__version__
Out[8]: '1.3.0.dev5883'
Basically, pickling an element from a recarray seems to break silently:
In [1]: import numpy as np
In [2]: dtype = [('r','f4'),('g','f4'),('b','f4
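A minimal sketch of the round trip being described (the dtype above is cut off; a three-field r/g/b record is assumed here):

import pickle
import numpy as np

dtype = [('r', 'f4'), ('g', 'f4'), ('b', 'f4')]
arr = np.zeros(2, dtype=dtype)
arr[0] = (1.0, 2.0, 3.0)

elem = arr[0]                                    # a single record scalar
restored = pickle.loads(pickle.dumps(elem))
print('%r -> %r' % (elem, restored))             # the report is that these silently differ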
One operator which could be used is '%'. We could keep the current
behaviour for ARRAY%SCALAR but have ARRAY%ARRAY mean matrix
multiplication. It has the same precedence as * and /.
James
This hack for defining infix operators might be relevant:
http://code.activestate.com/recipes/384122/
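For reference, the recipe wraps a binary function so it can be spelled infix between '|' characters; a minimal sketch applied to matrix multiplication (the __array_ufunc__ = None line is an addition needed on newer numpy so that ndarray defers to the wrapper):

import numpy as np

class Infix(object):
    # Infix operator hack from the recipe above: A |dot| B
    __array_ufunc__ = None            # make ndarray return NotImplemented and call __ror__
    def __init__(self, function):
        self.function = function
    def __ror__(self, other):
        return Infix(lambda x: self.function(other, x))
    def __or__(self, other):
        return self.function(other)

dot = Infix(np.dot)
A = np.eye(2)
B = np.arange(4.0).reshape(2, 2)
print(A |dot| B)                      # same result as np.dot(A, B)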
James
> distances, indices = T.query(xs) # single nearest neighbor
I'm not sure if it's implied, but can xs be an NxD matrix here, i.e. a
query for all N points rather than just one? This would reduce the
Python call overhead for large queries.
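For what it's worth, a minimal sketch against the interface that ended up in scipy.spatial (cKDTree.query accepts an N x D array of query points):

import numpy as np
from scipy.spatial import cKDTree

data = np.random.rand(1000, 3)          # 1000 points in 3-D
T = cKDTree(data)

xs = np.random.rand(50, 3)              # a whole batch of query points, N x D
distances, indices = T.query(xs)        # one call, no per-point Python overhead
print(distances.shape, indices.shape)   # (50,) (50,)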
Also, I have some c++ code for locality sensitive hashing which
6, 2008 at 5:07 PM, Travis E. Oliphant
<[EMAIL PROTECTED]> wrote:
>
> James Philbin wrote:
> > OK, here's a patch for:
> > #718: Bug with numpy.float32.tolist
> >
> > Can someone commit it (I hope someone has committed the other patches
> > i
OK, here's a patch for:
#718: Bug with numpy.float32.tolist
Can someone commit it (I hope someone has committed the other patches
I've sent)?
James
--- arrayobject.c.old 2008-04-06 13:08:37.0 +0100
+++ arrayobject.c 2008-04-06 13:10:57.0 +0100
@@ -1870,8 +1870,11 @@
if (!Py
The matlab behaviour is to extend the first bin to include all data
down to -inf and extend the last bin to handle all data to inf. This
is probably the behaviour of least surprise.
Therefore, I would vote +1 for behaviour #1 by default, +1 for keeping
the old behaviour #2 around as an option and
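One way to emulate the matlab convention with the current np.histogram is to clip the data to the outer edges first; a minimal sketch:

import numpy as np

data = np.array([-5.0, 0.1, 0.5, 0.9, 7.0])
edges = np.linspace(0.0, 1.0, 6)            # five equal bins on [0, 1]

# Fold everything below the first edge into the first bin and everything
# above the last edge into the last bin, as matlab does.
counts, _ = np.histogram(np.clip(data, edges[0], edges[-1]), bins=edges)
print(counts)                               # the -5.0 and 7.0 outliers land in the end bins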
I've posted patches for:
#630: If float('123.45') works, so should numpy.float32('123.45')
#581: random.set_state does not reset state of random.standard_normal
James
On Fri, Apr 4, 2008 at 6:49 PM, Anne Archibald
<[EMAIL PROTECTED]> wrote:
> On 04/04/2008, Travis E. Oliphant <[EMAIL PROTECTED]>
Well that's fine for binops with the same types, but it's not so
obvious which type to cast to when mixing signed and unsigned types.
Should the type of N.int32(10)+N.uint32(10) be int32, uint32 or int64?
Given your answer, what should the type of N.int64(10)+N.uint64(10) be
(which is the case in th
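A quick way to see what the current rules actually do is to inspect the promoted dtypes (the comments below reflect the long-standing behaviour; since no integer type can hold both int64 and uint64, that pair has historically promoted to float64):

import numpy as np

print((np.int32(10) + np.uint32(10)).dtype)   # historically int64
print((np.int64(10) + np.uint64(10)).dtype)   # historically float64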
OK, I'm really impressed with the improvements in vectorization for
gcc 4.3. It really seems to be able to work with real loops, which
wasn't the case with 4.1. I think Chuck's right that we should simply
special case contiguous data and allow the auto-vectorizer to do the
rest. Something like t
Wow, a much more varied set of results than I was expecting. Could
someone who has gcc 4.3 installed compile it with:
gcc -msse -O2 -ftree-vectorize -ftree-vectorizer-verbose=5 -S
vec_bench.c -o vec_bench.s
And attach vec_bench.s and the verbose output from gcc.
James
OK, I've written a simple benchmark which implements an elementwise
multiply (A=B*C) in three different ways (standard C, intrinsics, hand
coded assembly). On the face of things the results seem to indicate
that the vectorization works best on medium sized inputs. If people
could post the results o
> However, profiling revealed that hardly anything was gained because of
> 1) non-alignment of the vectors; this _could_ be handled by
> shuffled loading of the values though
> 2) the fact that my application used relatively large vectors that
> wouldn't fit into the CPU cache, hence the me
OK, so a few questions:
1. I'm not familiar with the format of the code generators. Should I
pull the special case out of the "/** begin repeat"s or should I do a
conditional inside the repeats (how does one do this?).
2. I don't have access to Windows+VisualC, so I will need some help
testing for
> gcc keeps advancing autovectorization. Is manual vectorization worth the
> trouble?
Well, the way that the ufuncs are written at the moment,
-ftree-vectorize will never kick in due to the non-constant strides.
To get this to work, one has to special-case out unit strides. Even
with constant
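The stride issue is easy to see from Python; a small sketch (the C inner loops receive these strides at run time, so they are written as "p += stride" rather than "p++", which defeats gcc's auto-vectorizer):

import numpy as np

a = np.zeros((4, 4), dtype=np.float32)
b = a[:, ::2]                                   # strided view, not contiguous

print(a.flags['C_CONTIGUOUS'], a.strides)       # True  (16, 4)
print(b.flags['C_CONTIGUOUS'], b.strides)       # False (16, 8)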
I'm not sure that #669
(http://projects.scipy.org/scipy/numpy/ticket/669) is a bug, but
probably needs some discussion (see the last reply on that page). The
cast is made because we don't know that the LHS is non-negative.
However, it could be argued that operations involving two integers
should nev
Personally, I think that the time would be better spent optimizing
routines for single-threaded code and relying on BLAS and LAPACK
libraries to use multiple cores for more complex calculations. In
particular, doing some basic loop unrolling and SSE versions of the
ufuncs would be beneficial. I hav
Hi,
> More importantly, it is technically impossible because of the way that
> *Python* works. See the thread "Histograms via indirect index arrays"
> for a detailed explanation.
>
> http://projects.scipy.org/pipermail/numpy-discussion/2006-March/006877.html
OK, that makes things much clearer
Hi,
> This cannot work, because the inplace operation does not
> take place as a for loop.
Well, this would be fine if I were assigning the values to temporaries as
you suggest. However, the operation should be performed in place, and
this is what I don't understand - why is there no for loop? I thin
Hi,
I was surprised to see this result:
>>> import numpy as N
>>> A = N.array([0,0,0])
>>> A[[0,1,1,2]]+=1
>>> A
array([1, 1, 1])
Is this expected? Working on the principle of least surprise, I would
expect [1, 2, 1] to be output.
Thanks,
James
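For reference, the accumulating behaviour that would give [1, 2, 1] can be obtained explicitly, e.g. with bincount; a minimal sketch:

import numpy as np

A = np.array([0, 0, 0])
idx = np.array([0, 1, 1, 2])

# Fancy-indexed += collapses duplicate indices, so A[idx] += 1 increments
# each element at most once.  bincount counts every occurrence instead.
A += np.bincount(idx)
print(A)                                        # [1 2 1]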
> Try out latest SVN. It should have this problem fixed.
Thanks for this. I've realized that for my case, using object arrays
is probably best. I still think that, long term, it would be good to
allow comparison functions to take different types, so that one could
compare, say, integer arrays with flo
> True. The problem is knowing when that is the case. The subroutine in
> question is at the bottom of the heap and don't know nothin'. IIRC, it just
> sits there and does the comparison by calling through a pointer with char*
> arguments.
What does the comparison function actually look like for t
Well, I've dug around in the source code and here is a patch which
makes it work for the case I wanted:
--- multiarraymodule.c.old 2008-01-31 17:42:32.0 +
+++ multiarraymodule.c 2008-01-31 17:43:43.0 +
@@ -2967,7 +2967,10 @@
char *parr = arr->data;
char *
Hi,
> In particular:
>
> * All arrays are assumed contiguous on entry and both arr and key must be
> of<-
> * the same comparable type. <-
In which case, this seems to be an overly strict implementation of
searchsorted. Surely all that should be required is that the
comparison functi
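Concretely, the "same comparable type" restriction is what trips up the string case; casting the keys to the array's dtype is a sketch of a workaround under that restriction:

import numpy as np

A = np.array(['a', 'aa', 'b'])                  # longest string has length 2
B = np.array(['d', 'e'])                        # length 1

print(A.dtype, B.dtype)                         # different string widths, hence "different types"
print(A.searchsorted(B.astype(A.dtype)))        # [3 3] once both sides share a dtype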
Hi,
Just tried with numpy from svn and still get this problem:
>>> import numpy
>>> numpy.__version__
'1.0.5.dev4763'
>>> A = numpy.array(['a','aa','b'])
>>> B = numpy.array(['d','e'])
>>> A.searchsorted(B)
array([3, 0])
I guess this must be a platform-dependent bug. I'm running python version:
P
Hmmm. Just downloaded and installed 1.0.4 and I'm still getting this
error. Are you guys using the bleeding edge version or the official
1.0.4 tarball from the webpage?
James
Hi,
OK, I'm using:
In [6]: numpy.__version__
Out[6]: '1.0.3'
Should I try the development version? Which version of numpy would
people generally recommend?
James
Hi,
The following gives the wrong answer:
In [2]: A = array(['a','aa','b'])
In [3]: B = array(['d','e'])
In [4]: A.searchsorted(B)
Out[4]: array([3, 0])
The answer should be [3,3]. I've come across this while trying to come
up with an ismember function which works for strings (setmember1d
does
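A sketch of such an ismember for strings, written against current numpy (np.promote_types did not exist at the time; casting both sides to a common string dtype is what sidesteps the searchsorted problem above):

import numpy as np

def ismember(B, A):
    # Boolean mask saying which elements of B occur in A.
    common = np.promote_types(A.dtype, B.dtype)
    As = np.sort(A.astype(common))
    Bs = B.astype(common)
    idx = np.clip(As.searchsorted(Bs), 0, len(As) - 1)
    return As[idx] == Bs

A = np.array(['a', 'aa', 'b'])
B = np.array(['d', 'e', 'aa'])
print(ismember(B, A))                           # [False False  True]

Later numpy versions grew np.in1d, which covers this use case directly.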