Re: [Numpy-discussion] test_multiarray.test_clip fails on Solaris 8 system

2007-04-02 Thread Andrew Jaffe

> The following test fails on a Solaris 8 system:
> ======================================================================
> FAIL: check_basic (numpy.core.tests.test_multiarray.test_clip)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>  [removed]
> "/data/basil5/site-packages/lib/python/numpy/testing/utils.py", line
> 143, in assert_equal
> assert desired == actual, msg
> AssertionError:
> Items are not equal:
> ACTUAL: '<'
> DESIRED: '='
> 
> Hmm, Sun hardware is big endian, no? I wonder what happens on PPC? I 
> don't see any problems here on Athlon64.

Indeed, I get the same problem on a PPC running OSX 10.4
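(For reference, the items being compared are numpy byte-order codes: '='
native, '<' little-endian, '>' big-endian, '|' not applicable. A quick
check you can run on any platform -- just a sketch:

import numpy as np
print(np.dtype(np.int32).byteorder)                    # '=' : native
print(np.dtype(np.int32).newbyteorder('S').byteorder)  # swapped: '<' or '>'

so the test appears to be getting a swapped code where it expects the
native one.)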

Andrew

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] flatten() without copy - is this possible?

2007-06-04 Thread Andrew Jaffe
dmitrey wrote:
> hi all.
> in the numpy for matlab users I read
> 
> y = x.flatten(1)
> 
> turn array into vector (note that this forces a copy)
> 
> Is there any way to do the trick without copying?
> What are the problems here? It's just another way of indexing the array 
> elements...

One important question is whether you actually need the new vector, or 
whether you just want a flat index into the array; if the latter, you 
can always [I think] use x.flat[one_d_index]. (But note that y=x.flat 
gives an iterator, not a new array.)
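
If you really do need a flattened array, here is a small sketch of the
distinction (ravel avoids the copy whenever the memory layout allows it,
whereas flatten always copies):

import numpy as np
x = np.arange(6).reshape(2, 3)
y = x.ravel()     # a view, not a copy, since x is contiguous
y[0] = 99
print(x[0, 0])    # 99 -- x and y share memory
z = x.flatten()   # always a new copy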

Andrew

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-07-02 Thread Andrew Jaffe
This is slightly off-topic, but probably of interest to anyone reading 
this thread:

Is there any reason why we don't use --fcompiler=gfortran as an alias 
for --fcompiler=gfortran (and --fcompiler=g77 as an alias for 
--fcompiler=gnu, for that matter)?

Those seem to me to be much more mnemonic names...

Andrew



Robert Kern wrote:
> Christopher Hanley wrote:
>> I have found that setting my F77 environment variable to gfortran is 
>> also sufficient.
>>
>>  > setenv F77 gfortran
>>  > python setup.py install
> 
> That might work okay for building scipy and other packages that only actually
> have FORTRAN-77 code; however, I suspect that Matthew is trying to build
> something with Fortran 90+.
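
(For the record, the explicit way to pin the Fortran compiler with
numpy.distutils -- a sketch -- is the config_fc option:

 > python setup.py config_fc --fcompiler=gnu95 build
 > python setup.py install

which avoids relying on environment variables altogether.)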


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how do I configure with gfortran

2007-07-02 Thread Andrew Jaffe
I wrote:

> This is slightly off-topic, but probably of interest to anyone reading 
> this thread:
> 
> Is there any reason why we don't use --fcompiler=gfortran as an alias 
> for --fcompiler=gfortran (and --fcompiler=g77 as an alias for 
> --fcompiler=gnu, for that matter)?

But, sorry, of course this should be:

   Is there any reason why we don't use --fcompiler=gfortran as an alias
   for --fcompiler=gnu95?

> Those seem to me to be much more mnemonic names...

Andrew

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: NumPy 1.6.0 [Problems on OS X w/ Python 2.6 and XCode 4]

2011-05-16 Thread Andrew Jaffe
Dear all,

I have OSX 10.6.7, XCode 4, and Python.org python 2.6.6 and 2.7.1, where 
2.7 is 64-bit.

With 2.7, easy_install successfully compiles and installs the package, 
both over the web and with an explicit download.

With 2.6, there seems to be a problem with attempting to compile for the 
PPC architecture, as seen in this error:

/usr/libexec/gcc/powerpc-apple-darwin10/4.0.1/as: assembler 
(/usr/bin/../libexec/gcc/darwin/ppc/as or 
/usr/bin/../local/libexec/gcc/darwin/ppc/as) for architecture ppc not 
installed

If I explicitly set CFLAGS and LDFLAGS to have "-arch i386 -arch x86_64" 
in them as appropriate, it seems to work...
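
Concretely, that means something along these lines in a csh-style shell 
(a sketch):

 > setenv CFLAGS "-arch i386 -arch x86_64"
 > setenv LDFLAGS "-arch i386 -arch x86_64"
 > python setup.py install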

Is this a correctable bug in the packaging, or just a quirk of my setup?

Andrew


On 15/05/2011 17:39, Ralf Gommers wrote:
>
>
> On Sat, May 14, 2011 at 3:09 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
>
>
>
> On Sat, May 14, 2011 at 3:54 AM, Ralf Gommers
> <ralf.gomm...@googlemail.com> wrote:
>
> Hi,
>
> I am pleased to announce the release of NumPy 1.6.0. This
> release is the result of 9 months of work, and includes many new
> features, performance improvements and bug fixes. Some
> highlights are:
>
>- Re-introduction of datetime dtype support to deal with
> dates in arrays.
>- A new 16-bit floating point type.
>- A new iterator, which improves performance of many functions.
>
>
> The link is http://sourceforge.net/projects/numpy/files/NumPy/1.6.0/
>
> The OS X binaries are also up now.
>
> Ralf
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: NumPy 1.6.0 [Problems on OS X w/ Python 2.6 and XCode 4]

2011-05-16 Thread Andrew Jaffe
On 16/05/2011 18:45, Ralf Gommers wrote:
>
> On Mon, May 16, 2011 at 12:41 PM, Andrew Jaffe
> <a.h.ja...@gmail.com> wrote:
>
> Dear all,
>
> I have OSX 10.6.7, XCode 4, and Python.org python 2.6.6 and 2.7.1, where
> 2.7 is 64-bit.
>
> With 2.7, easy_install successfully compiles and installs the package,
> both over the web and with an explicit download.
>
> With 2.6, there seems to be a problem with attempting to compile the PPC
> architecture, seen in this error.
>
> Please just use "python setup.py install", easy_install is very unreliable.
>
>
> /usr/libexec/gcc/powerpc-apple-darwin10/4.0.1/as: assembler
> (/usr/bin/../libexec/gcc/darwin/ppc/as or
> /usr/bin/../local/libexec/gcc/darwin/ppc/as) for architecture ppc not
> installed
>
> XCode 4 does not support PPC; use version 3.2. Then it should work as
> advertised.

Aha, thanks!

But they're both installed; how do I force the older one? (Does 64-bit 
2.7 simply not care about PPC?)

Yours,

Andrew


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Getting non-normalized eigenvectors from generalized eigenvalue solution?

2011-12-21 Thread Andrew Jaffe
Just to be completely clear, there is no such thing as a 
"non-normalized" eigenvector. An eigenvector is only determined *up to a 
scalar normalization*, which is obvious from the eigenvalue equation:

A v = l v

where A is the matrix, l is the eigenvalue, and v is the eigenvector. 
Obviously v is only determined up to a constant factor. A given 
eigenvalue routine can return any scaling at all, but there is no native 
"non-normalized" version.

Traditionally, you can decide to return "normalized" eigenvectors with 
the scalar factor determined by norm(v)=1 for some suitable norm. (I 
could imagine that an algorithm could depend on that.)
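
A minimal illustration (numpy's eig returns unit-norm columns; any
rescaling of an eigenvector is still an eigenvector):

import numpy as np
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)
v = vecs[:, 0]                                  # unit Euclidean norm
print(np.allclose(np.dot(A, v), vals[0] * v))   # True
w = 5.0 * v                                     # rescaled
print(np.allclose(np.dot(A, w), vals[0] * w))   # still True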

Andrew


On 21/12/2011 07:01, Olivier Delalleau wrote:
 > Aaah, thanks a lot Lennart, I knew there had to be some logic to
 > Octave's output, but I couldn't see it...
 >
 > -=- Olivier
 >
 > 2011/12/21 Lennart Fricke:
 >
 > Dear Fahreddın,
 > I think the norm of the eigenvectors corresponds to some generic
 > amplitude. But that is something you cannot extract from the solution
 > of the eigenvalue problem; it depends on the initial deflection or
 > velocities.
 >
 > So I think you should be able to use the normalized values just as
 > well as the non-, un- or not normalized ones.
 >
 > Octave seems to normalize such that transpose(Z).B.Z = I, where Z is
 > the matrix of eigenvectors, B is matrix B of the generalized
 > eigenvalue problem, and I is the identity. It uses lapack functions.
 > But that's only true if A, B are symmetric. If not, it normalizes the
 > magnitude of the largest element of each eigenvector to 1.
 >
 > I believe you can recover the factors like this. If A is a matrix of
 > normalization factors, it is diagonal and Z.A contains the normalized
 > column vectors. Then:
 >
 >   transpose(Z.A).B.Z.A
 > = transpose(A).transpose(Z).B.Z.A
 > = A.transpose(Z).B.Z.A = I
 >
 > and thus invert(A).invert(A) = transpose(Z).B.Z.
 > As A is diagonal, invert(A) has the reciprocal elements on the
 > diagonal, so you can easily extract them:
 >
 > A = diag(1/sqrt(diag(transpose(Z).B.Z)))
 >
 > I hope that's correct.
 >
 > Best Regards
 > Lennart
 >
 > ___
 > NumPy-Discussion mailing list
 > NumPy-Discussion@scipy.org
 > http://mail.scipy.org/mailman/listinfo/numpy-discussion
 >
 >
 > ___
 > NumPy-Discussion mailing list
 > NumPy-Discussion@scipy.org
 > http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Raveling, reshape order keyword unnecessarily confuses index and memory ordering

2013-04-02 Thread Andrew Jaffe

>>
>> Proposal
>> -
>>
>> * Deprecate the use of "C" and "F" meaning backwards and forwards
>> index ordering for ravel, reshape
>> * Prefer "Z" and "N", being graphical representations of unraveling in
>> 2 dimensions, axis1 first and axis0 first respectively (excellent
>> naming idea by Paul Ivanov)
>>
>> What do y'all think?
>>
>
> Personally I think it is clear enough and that "Z" and "N" would confuse
> me just as much (though I am used to the other names). Also "Z" and "N"
> would seem more like aliases, which would also make sense in the memory
> order context.
> If anything, I would prefer renaming the arguments iteration_order and
> memory_order, but it seems overdoing it...
> Maybe the documentation could just be checked if it is always clear
> though. I.e. maybe it does not use "iteration" or "memory" order
> consistently (though I somewhat feel it is usually clear that it must be
> iteration order, since no numpy function cares about the input memory
> order as they will just do a copy if necessary).

I have been using both C and Fortran for 25 or so years. Despite that, I 
still have to sit and think every time I need to know which way the 
arrays are stored, basically by remembering that in Fortran you declare 
an assumed-size array as (I,J,*).

So I *love* the idea of 'Z' and 'N' which I understood immediately.
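
A minimal illustration of the two unravel orders:

import numpy as np
a = np.arange(6).reshape(2, 3)
print(a.ravel('C'))   # last axis fastest (the "Z" pattern):  [0 1 2 3 4 5]
print(a.ravel('F'))   # first axis fastest (the "N" pattern): [0 3 1 4 2 5]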

Andrew




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Dropping support for, Accelerate/veclib?

2013-06-13 Thread Andrew Jaffe
On 11/06/2013 22:11, Chris Barker - NOAA Federal wrote:
> On Tue, Jun 11, 2013 at 1:28 PM, Ralf Gommers  wrote:
>> The binaries will still be built against python.org Python, so there
>> shouldn't be an issue here. Same for building from source.
>
> My point was that it's nice to be able to have it build with an out of
> teh box wetup.py with accelerated LAPACK and all... If whoever is
> building binaries wants to get fancy, great.

Yes, please. The current system does seem to work for at least some of 
us. And, if I understand the thread on the scipy mailing list, it's not 
actually clear that there's a bug, as opposed to incompatible Fortran 
ABIs (which doesn't seem like a bug to me).

But I guess the most important thing would be that it can be used with 
apple or python.org Python builds (my reading of some of the suggestions 
would be requiring one of homebrew/fink/macports), preferably 
out-of-the-box -- even if that meant restricting to prebuilt binaries. 
Being able to run non-obscure installers (i.e., from the main python.org 
and scipy.org sites) for Python + numpy + scipy + matplotlib and get 
optimized versions would be sufficient.


Andrew

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [RFC] should we argue for a matrix power operator, @@?

2014-03-19 Thread Andrew Jaffe
On 16/03/2014 01:31, josef.p...@gmail.com wrote:
>
>
>
> On Sat, Mar 15, 2014 at 8:47 PM, Warren Weckesser
> <warren.weckes...@gmail.com> wrote:
>
>
> On Sat, Mar 15, 2014 at 8:38 PM, <josef.p...@gmail.com> wrote:
>
> I think I wouldn't use anything like @@ often enough to remember
> it's meaning. I'd rather see english names for anything that is
> not **very** common.
>
> I find A@@-1 pretty ugly compared to inv(A)
> A@@(-0.5)  might be nice   (do we have matrix_sqrt ?)
>
>
>
> scipy.linalg.sqrtm:
> 
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.sqrtm.html
>
>
> maybe a good example: I could never figure that one out
>
> M = sqrtm(A)
>
> A = M @ M
>
> but what we use in stats is
>
> A = R.T @ R
> (eigenvectors dot diag(sqrt of eigenvalues))
>
> which sqrt is A@@(0.5) ?
>
> Josef

Agreed. In general, "the matrix square root" isn't a uniquely defined 
quantity. For some uses, the Cholesky decomposition is what you want; 
for others it's the matrix with the same eigenvectors but the square 
root of the eigenvalues, and so on.

As an important aside, it would be good if the docs addressed this.
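
A sketch of two perfectly good "square roots" of the same matrix (both
reproduce A, but they are different matrices):

import numpy as np
from scipy.linalg import cholesky, sqrtm

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky(A, lower=True)             # lower-triangular: A = L L^T
S = sqrtm(A)                            # symmetric:        A = S S
print(np.allclose(np.dot(L, L.T), A))   # True
print(np.allclose(np.dot(S, S), A))     # True
print(np.allclose(L, S))                # False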

Yours,

Andrew


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how is toolchain selected for compiling (OS X with python.org build)?

2015-10-15 Thread Andrew Jaffe
This isn't, strictly speaking, a numpy question, but I suspect it's 
something that numpy devs and users have some insight into.


I am trying to compile an extension that requires a fairly advanced c++ 
compiler. Using the built-in apple python, it defaults to the latest 
clang from apple, and it works just fine.


Using the python.org framework build, it still selects clang, which is 
in principle a new enough compiler, but for some reason it seems to end 
up pointing to /usr/include/c++/4.2.1/ which of course is too old, and 
the build fails.


So the questions I have are:

 - *why* is it using such an old toolchain? (I am pretty sure that the 
answer is backward compatibility, and specifically that it is the way 
the framework-build python is itself compiled.)


 - *how* is it selecting those tools, and in particular, that include 
directory? It doesn't seem to explicitly show up in the logs until 
there's an error. If I just use the same clang invocation as the build 
seems to use, it is able to compile full C++11 code...


 - Is there any way to still use the apple clang, but in full C++11 
mode, to build extensions?


The solution/workaround is to install and then explicitly select a more 
advanced compiler, e.g., from homebrew, using environment variables, but 
it would be nice if it could work out of the box, and ideally with the 
same behaviour as with apple's python build.
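
Regarding the *how*: a quick way to see the compiler and flags a given
python build will hand to distutils is the stdlib sysconfig module (a
sketch):

import sysconfig
for var in ('CC', 'CXX', 'CFLAGS', 'LDFLAGS'):
    print('%s = %s' % (var, sysconfig.get_config_var(var)))

These values are baked in when python itself is built, which is
presumably where the old include path ultimately comes from.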


-Andrew

p.s. for the aficionados, this is for [healpy][1], and we're looking at 
it with [this issue][2].


[1]: https://github.com/healpy
[2]: https://github.com/healpy/healpy/issues/284#issuecomment-148354405

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-06-29 Thread Andrew Jaffe

On 28/06/2016 18:50, Ralf Gommers wrote:
>
> On Tue, Jun 28, 2016 at 5:50 PM, Chris Barker
> <chris.bar...@noaa.gov> wrote:
>
>>> This doesn't really matter too much imho, we have to support
>>> Accelerate either way.
>>
>> do we? -- so if we go OpenBLAS, and someone wants to do a simple
>> build from source, what happens? Do they get Accelerate?
>
> Indeed, unless they go through the effort of downloading a separate BLAS
> and LAPACK, and figuring out how to make that visible to
> numpy.distutils. Very few users will do that.
>
>> or would we ship OpenBLAS source itself?
>
> Definitely don't want to do that.
>
>> or would they need to install OpenBLAS some other way?
>
> Yes, or MKL, or ATLAS, or BLIS. We have support for all these, and
> that's a good thing. Making a uniform choice for our official binaries
> on various OSes doesn't reduce the need or effort for supporting those
> other options.
>
>>>> Faster to fix bugs with good support from main developer. No
>>>> multiprocessing crashes for Python 2.7.
>>
>> this seems to be the compelling one.
>>
>> How does the performance compare?
>
> For most routines performance seems to be comparable, and both are much
> better than ATLAS. When there's a significant difference, I have the
> impression that OpenBLAS is more often the slower one (example:
> https://github.com/xianyi/OpenBLAS/issues/533).


In that case:

 -1

(but this seems so obvious that I'm probably missing the point of the +1s)


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] long integers in loadtxt -- bad parsing?

2010-04-08 Thread Andrew Jaffe
Hi all,

I am trying to read some 19-digit integers using loadtxt (or genfromtxt 
-- same problem). The numbers are smaller than the max int64 (and the 
max uint64 -- same problem with either one).

Below, Out[184] shows that python has no problem with the conversion, 
but loadtxt gets the last few digits wrong in Out[185].

Am I doing something stupid?

Yours,

Andrew



In [175]: import numpy as np

In [176]: np.__version__
Out[176]: '1.4.0'

In [177]: from StringIO import StringIO

In [178]: fc = """1621174818000457763
1621209600996363377
1621258907994644735
1621296000994995765
1621374194996298305
"""

In [184]: long(fc[:19])
Out[184]: 1621174818000457763L

In [185]: np.loadtxt(StringIO(fc), dtype=np.int64)
Out[185]:
array([1621174818000457728, 1621209600996363264, 1621258907994644736,
1621296000994995712, 1621374194996298240], dtype=int64)





___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] long integers in loadtxt -- bad parsing?

2010-04-08 Thread Andrew Jaffe
Hi,

> I am trying to read some 19-digit integers using loadtxt (or genfromtxt
> -- same problem). The numbers are smaller than the max int64 (and the
> max uint64 -- same problem with either one).
>
> Below, Out[184] shows that python has no problem with the conversion,
> but loadtxt gets the last few digits wrong in Out[185].
>
> Am I doing something stupid?
>
> Yours,
>
> Andrew
>
>
>
> In [175]: import numpy as np
>
> In [176]: np.__version__
> Out[176]: '1.4.0'
>
> In [177]: from StringIO import StringIO
>
> In [178]: fc = """1621174818000457763
> 1621209600996363377
> 1621258907994644735
> 1621296000994995765
> 1621374194996298305
> """
>
> In [184]: long(fc[:19])
> Out[184]: 1621174818000457763L
>
> In [185]: np.loadtxt(StringIO(fc), dtype=np.int64)
> Out[185]:
> array([1621174818000457728, 1621209600996363264, 1621258907994644736,
>  1621296000994995712, 1621374194996298240], dtype=int64)

The slightly hack-ish solution is to explicitly use the python long() 
function as a converter:

In [215]: np.loadtxt(StringIO(fc), dtype=np.int64, converters={0:long})
Out[215]:
array([1621174818000457763, 1621209600996363377, 1621258907994644735,
1621296000994995765, 1621374194996298305], dtype=int64)
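
For what it's worth, the corruption pattern above looks exactly like a
round trip through float64, whose 53-bit mantissa cannot represent every
19-digit integer -- presumably loadtxt's default conversion goes via
float. A quick check (a sketch):

In [216]: np.int64(np.float64(long(fc[:19])))
Out[216]: 1621174818000457728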


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] read ascii file from complex fortran format() -- genfromtxt

2010-09-21 Thread Andrew Jaffe
Hi all,

I've got an ascii file with a relatively complicated structure, 
originally written by fortran with the format:

 135format(a12,1x,2(f10.5,1x),i3,1x,4(f9.3,1x),4(i2,1x),3x,
  1 16(f7.2,1x),i3,3x,f13.5,1x,f10.5,1x,f10.6,1x,i3,1x,
  2 4(f10.6,1x),
  2 i2,1x,f5.2,1x,f10.3,1x,i3,1x,f7.2,1x,f7.2,3x,4(f7.4,1x),
  3 4(f7.2,1x),3x,f7.2,1x,i4,3x,f10.3,1x,14(f6.2,1x),i3,1x,
  1  3x,2f10.5,8f11.2,2f10.5,f12.3,3x,
  4 2(a6,1x),a23,1x,a22,1x,a22)

Note, in particular, that many of the strings contain white space.

Is there a relatively straightforward way to translate this into  dtype 
(and delimiter?) arguments for use with genfromtxt or do I just have to 
do it by hand?
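
(One possible sketch, if doing it by hand: genfromtxt accepts a sequence
of field widths as its delimiter, which also copes with the
whitespace-bearing strings. Each "1x"/"3x" gap becomes its own little
column that you drop with usecols. For just the leading
"a12,1x,2(f10.5,1x),i3" piece, hypothetically:

import numpy as np
widths = [12, 1, 10, 1, 10, 1, 3]   # fields interleaved with 1-char gaps
data = np.genfromtxt('data.txt', delimiter=widths,
                     usecols=(0, 2, 4, 6),
                     dtype=['S12', float, float, int])

but transcribing all ~70 fields this way is painful.)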

Andrew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] read ascii file from complex fortran format() -- genfromtxt

2010-09-22 Thread Andrew Jaffe
On 21/09/2010 18:29, David Huard wrote:
> Have you tried
>
> http://code.google.com/p/python-fortranformat/
>
> It's not officially released yet but it's probably worth a try.


Well, I noticed it, but the website does say "This is a work in 
progress, a working version is not yet available"!

Andrew


>
> David H.
>
> On Tue, Sep 21, 2010 at 8:25 AM, Andrew Jaffe  <mailto:a.h.ja...@gmail.com>> wrote:
>
> Hi all,
>
> I've got an ascii file with a relatively complicated structure,
> originally written by fortran with the format:
>
>  135format(a12,1x,2(f10.5,1x),i3,1x,4(f9.3,1x),4(i2,1x),3x,
>   1 16(f7.2,1x),i3,3x,f13.5,1x,f10.5,1x,f10.6,1x,i3,1x,
>   2 4(f10.6,1x),
>   2 i2,1x,f5.2,1x,f10.3,1x,i3,1x,f7.2,1x,f7.2,3x,4(f7.4,1x),
>   3 4(f7.2,1x),3x,f7.2,1x,i4,3x,f10.3,1x,14(f6.2,1x),i3,1x,
>   1  3x,2f10.5,8f11.2,2f10.5,f12.3,3x,
>   4 2(a6,1x),a23,1x,a22,1x,a22)
>
> Note, in particular, that many of the strings contain white space.
>
> Is there a relatively straightforward way to translate this into  dtype
> (and delimiter?) arguments for use with genfromtxt or do I just have to
> do it by hand?
>
> Andrew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org <mailto:NumPy-Discussion@scipy.org>
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Computing the norm of an array of vectors

2011-02-08 Thread Andrew Jaffe
On 08/02/2011 16:44, Ben Gamari wrote:
> I have an array of (say, row) vectors,
>
>v = [ [ a1, a2, a3 ],
>  [ b1, b2, b3 ],
>  [ c1, c2, c3 ],
>  ...
>]
>
> What is the optimal way to compute the norm of each vector,
>norm(v)**2 = [
>[ a1**2 + a2**2 + a3**2 ],
>[ b1**2 + b2**2 + b3**2 ],
>...
>  ]
>
> It seems clear that numpy.norm does not handle the multidimensional case
> as needed. The best thing I can think of is to do something like,
>
>sum(v**2, axis=0) * ones(3)

For this shape=(N,3) array, that is not what you mean: as Robert Kern 
also noted, you want axis=1, which produces a shape=(N,) result (or, 
with [:,newaxis], shape=(N,1)).

But what is the point of the ones(3)? I think you intend to make a new 
(N,3) array where each row duplicates the norm, so that you can then 
divide out the norms. But through the magic of broadcasting, that's not 
necessary:

v / np.sqrt(np.sum(v**2, axis=1)[:, np.newaxis])

does what you want.
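
A quick numerical check (a sketch):

import numpy as np
v = np.array([[3.0, 4.0, 0.0],
              [1.0, 2.0, 2.0]])
norms = np.sqrt(np.sum(v**2, axis=1))     # -> [ 5.,  3.]
unit = v / norms[:, np.newaxis]           # each row now has unit norm
print(np.sqrt(np.sum(unit**2, axis=1)))   # -> [ 1.,  1.]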

Andrew


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PEP: named axis

2009-02-11 Thread Andrew Jaffe
Robert Kern wrote:
> On Fri, Feb 6, 2009 at 03:22, Stéfan van der Walt  wrote:
>> Hi Robert
>>
>> 2009/2/6 Robert Kern :
 This could be implemented but would require adding information to the
 NumPy array.
>>> More than that, though. Every function and method that takes an axis
>>> or reduces an axis will need to be rewritten. For that reason, I'm -1
>>> on the proposal.
>> Are you -1 on the array dictionary, or on using it to do axis mapping?
> 
> I'm -1 on rewriting every axis= argument to accept strings. 

Maybe I misunderstand the proposal, but, actually, I think this is 
completely the wrong semantics for "axis=" anyway. "axis=" in numpy 
refers to what is also called a dimension, not a column.

More generally, then, would we restrict this to labeling only the 
column dimension, or could it be used for any dimension?



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PEP: named axis

2009-02-11 Thread Andrew Jaffe
Pauli Virtanen wrote:
> Wed, 11 Feb 2009 22:21:30 +0000, Andrew Jaffe wrote:
> [clip]
>> Maybe I misunderstand the proposal, but, actually, I think this is
>> completely the wrong semantics for "axis="  anyway. "axis=" in numpy
>> refers to what is also a dimension,  not a  column.
> 
> I think the proposal was to add the ability to refer to dimensions with 
> names instead of numbers. This is separate from referring to entries in a 
> dimension. (Addressing 'columns' by name is already provided by 
> structured arrays.)
> 

My bad -- I completely misread the proposal!

Nevermind...

Andrew
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] pdf for multivariate normal function?

2009-07-23 Thread Andrew Jaffe
Hi,

Charles R Harris wrote:
> 
> On Thu, Jul 23, 2009 at 7:14 AM, per freem  > wrote:
> 
> i'm trying to find the function for the pdf of a multivariate normal
> pdf. i know that the function "multivariate_normal" can be used to
> sample from the multivariate normal distribution, but i just want to
> get the pdf for a given vector of means and a covariance matrix. is
> there a function to do this?
> 
> Well, what does a pdf mean in the multidimensional case? One way to 
> convert the density function into a Stieltjes type measure is to plot 
> the integral over a polytope with one corner at [-inf, -inf,] and 
> the diagonally opposite corner at the plotting point, but the 
> multidimensional display of the result might not be very informative. 
> What do you actually want here?

You are confusing the PDF (Probability Density Function) with the CDF 
(Cumulative Distribution Function), I think. The PDF is well-defined for 
multivariate distributions. It is defined so that P(x) dx is the 
probability to be in the infinitesimal range (x, x+dx).

For a multivariate gaussian, it's

P(x|m, C) = [1/sqrt(det(2 pi C))] exp{ -1/2 (x-m)^T C^{-1} (x-m) }

in matrix notation, where m is the mean and C is the covariance matrix.
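
A direct transcription as code -- a minimal sketch, with no attempt at
numerical robustness:

import numpy as np

def mvn_pdf(x, m, C):
    ### P(x|m,C) = exp(-1/2 (x-m)^T C^{-1} (x-m)) / sqrt(det(2 pi C))
    d = np.asarray(x, dtype=float) - np.asarray(m, dtype=float)
    norm = np.sqrt(np.linalg.det(2 * np.pi * np.asarray(C, dtype=float)))
    return np.exp(-0.5 * np.dot(d, np.linalg.solve(C, d))) / norm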

Andrew

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] searchsorted for exact matches, not preserving order

2009-09-11 Thread Andrew Jaffe
Dear all,

I've got two (integer) arrays, and I want to find the indices in the 
first one that have entries in the second. I.E. I want all idx s.t. 
there exists a j with a[idx]=b[j]. Here is my current implementation 
(with a = pixnums, b=surveypix)

import numpy as np
def matchPix(pixnums, surveypix):

    spix = np.sort(surveypix)
    ### searchsorted returns, for each pixnum, the index at which it
    ### would be inserted into spix to keep spix sorted
    ss = np.searchsorted(spix, pixnums)

    ss[ss == len(spix)] = 0  ## if any of the pixnums are > max(spix)

    ### now need to extract the actual matches
    idxs = [i for (i, s) in enumerate(ss) if pixnums[i] == spix[s]]

    return np.asarray(idxs)
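
As an aside, the final list comprehension can itself be vectorized (a
sketch that relies on the zeroing of out-of-range entries above):

    idxs = np.nonzero(pixnums == spix[ss])[0]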

This works, and is pretty efficient, but to me this actually seems like 
a more common task than searchsorted itself; is there a simpler, more 
numpyish way to do this?

Andrew

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] searchsorted for exact matches, not preserving order

2009-09-11 Thread Andrew Jaffe
On 11/09/2009 08:33, Robert Kern wrote:
> On Fri, Sep 11, 2009 at 10:24, Andrew Jaffe  wrote:
>> Dear all,
>>
>> I've got two (integer) arrays, and I want to find the indices in the
>> first one that have entries in the second. I.E. I want all idx s.t.
>> there exists a j with a[idx]=b[j]. Here is my current implementation
>> (with a = pixnums, b=surveypix)
>
> numpy.setmember1d() [or numpy.in1d() for the SVN trunk of numpy].
>
Robert,

Thanks. But in fact this fails for my (possibly corner or edge) case: 
when the first array has duplicates that are, in fact, not in the second 
array, the indices corresponding to those entries get returned anyway. 
In general, I don't think duplicates are necessarily treated correctly 
by this algorithm.

I can understand that this may be a feature, not a bug, but in fact for 
my use-case I want the algorithm to return the indices corresponding to 
all entries in ar1 with the same value, if that value appears anywhere 
in ar2.
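
A minimal illustration of the corner case (hypothetical arrays; the
desired behaviour is what in1d on the trunk is intended to give):

import numpy as np
a = np.array([4, 4, 7])   # 4 is duplicated, and absent from b
b = np.array([7, 9])
print(np.in1d(a, b))      # want [False False  True]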

Yours,

Andrew



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion