Re: [Numpy-discussion] Equality not working as expected with ndarray sub-class

2013-07-04 Thread sebastian
On 2013-07-04 15:06, Thomas Robitaille wrote:
> Hi everyone,
> 
> The following example:
> 
> import numpy as np
> 
> class SimpleArray(np.ndarray):
> 
> __array_priority__ = 1
> 
> def __new__(cls, input_array, info=None):
> return np.asarray(input_array).view(cls)
> 
> def __eq__(self, other):
> return False
> 
> a = SimpleArray(10)
> print (np.int64(10) == a)
> print (a == np.int64(10))
> 
> gives the following output
> 
> $ python2.7 eq.py
> True
> False
> 
> so that in the first case, SimpleArray.__eq__ is not called. Is this a
> bug, and if so, can anyone think of a workaround? If this is expected
> behavior, how do I ensure SimpleArray.__eq__ gets called in both
> cases?
> 

This should be working in all current development versions, i.e. NumPy > 1.7.2
(which is not released yet).

- Sebastian
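
Until that release, a possible workaround (a sketch) is to keep the
subclass on the left-hand side of the comparison, where Python's own
operator dispatch guarantees SimpleArray.__eq__ runs first:

import numpy as np

class SimpleArray(np.ndarray):
    __array_priority__ = 1
    def __new__(cls, input_array, info=None):
        return np.asarray(input_array).view(cls)
    def __eq__(self, other):
        return False

a = SimpleArray(10)
# With the subclass on the left, SimpleArray.__eq__ is called:
print(a == np.int64(10))                 # False
# Wrapping the scalar before comparing has the same effect:
print(SimpleArray(np.int64(10)) == a)    # False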

> Thanks,
> Tom
> 
> ps: cross-posting to stackoverflow


Re: [Numpy-discussion] Allowing slices as arguments for ndarray.take

2014-01-17 Thread sebastian
On 2014-01-17 00:28, Stephan Hoyer wrote:
> There was a discussion last year about slicing along specified axes in
> numpy arrays:
> http://mail.scipy.org/pipermail/numpy-discussion/2012-April/061632.html
> [1]
> 
> I'm finding that slicing along specified axes is a common task for me
> when writing code to manipulate N-D arrays.
> 
> The method ndarray.take basically does what I would like, except it
> cannot take slice objects as argument. In the mean-time, I've written
> a little helper function:
> 
> def take(a, indices, axis):
>     # Build a full-slice index for every axis, then put the requested
>     # indices (or slice) at the target axis.
>     index = [slice(None)] * a.ndim
>     index[axis] = indices
>     return a[tuple(index)]
> 
> Is there support for allowing the `indices` argument to `take` to take
> Python slice objects as well as arrays? That would alleviate the need
> for my helper function.
> 
> Cheers,
> Stephan
> 
> 
> 
> Links:
> --
> [1] 
> http://mail.scipy.org/pipermail/numpy-discussion/2012-April/061632.html
> 
> 
Hey,

Personally, I am not sure that generalizing take is the right approach.
Take is currently orthogonal to indexing implementation-wise and has some
smaller differences. Given a good idea for the API, I think a new function
may be better. Since I am not at a computer at the moment, I did not check
the old discussions, though.

- Sebastian
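
For what it's worth, the helper above already covers the slice case; a
quick sketch of it in use (the example array is made up):

import numpy as np

def take(a, indices, axis):
    index = [slice(None)] * a.ndim
    index[axis] = indices
    return a[tuple(index)]

a = np.arange(24).reshape(2, 3, 4)
np.array_equal(take(a, slice(1, 3), axis=2), a[:, :, 1:3])   # True
np.array_equal(take(a, [0, 2], axis=1), a[:, [0, 2], :])     # True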


[Numpy-discussion] Deprecation of boolean substract and negative (the - operator)

2014-01-30 Thread sebastian
Hey all,

recently we had a small discussion about deprecating some of the
operators for boolean arrays. That discussion largely ended in the
consensus that while most boolean operators are well defined and should
be kept, the `-` operator is not well defined on boolean arrays and
suffers from an inconsistency:

-np.array(False) == True             # unary minus acts as logical not
False - np.array(False) == False     # binary minus acts as xor
# leading to:
False - (-np.array(False)) != False + np.array(False)

It is therefore preferable to use one of the binary operators for this
operation.
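
The replacements are straightforward; a sketch:

import numpy as np

a = np.array([True, False])
b = np.array([True, True])
a ^ b                  # xor: the well-defined spelling of boolean subtraction
np.logical_xor(a, b)   # same result
~a                     # the replacement for the unary -a on booleans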

For now this would only be a deprecation, but both operators are
probably used out there. So if you have any serious doubts about
starting this deprecation, please note them here.

The Pull request to implement such a deprecation is: 
https://github.com/numpy/numpy/pull/4105

Regards,

Sebastian


Re: [Numpy-discussion] Request for enhancement to numpy.random.shuffle

2014-10-12 Thread Sebastian

On 2014-10-12 16:54, Warren Weckesser wrote:
>
>
> On Sun, Oct 12, 2014 at 7:57 AM, Robert Kern <robert.k...@gmail.com> wrote:
>
> On Sat, Oct 11, 2014 at 11:51 PM, Warren Weckesser
> <warren.weckes...@gmail.com> wrote:
>
> > A small wart in this API is the meaning of
> >
> >   shuffle(a, independent=False, axis=None)
> >
> > It could be argued that the correct behavior is to leave the
> > array unchanged. (The current behavior can be interpreted as
> > shuffling a 1-d sequence of monolithic blobs; the axis argument
> > specifies which axis of the array corresponds to the
> > sequence index.  Then `axis=None` means the argument is
> > a single monolithic blob, so there is nothing to shuffle.)
> > Or an error could be raised.
> >
> > What do you think?
>
> It seems to me a perfectly good reason to have two methods instead of
> one. I can't imagine when I wouldn't be using a literal True or False
> for this, so it really should be two different methods.
>
>
>
> I agree, and my first inclination was to propose a different method
> (and I had the bikeshedding conversation with myself about the name:
> "disarrange", "scramble", "disorder", "randomize", "ashuffle", some
> other variation of the word "shuffle", ...), but I figured the first
> thing folks would say is "Why not just add options to shuffle?"  So,
> choose your battles and all that.
>
> What do other folks think of making a separate method?
I'm not a fan of more methods with similar functionality in Numpy. It's
already hard to keep track of the existing functions and all their
possible applications and variants. The axis=None proposal for shuffling
all items is very intuitive.
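
The axis=None behaviour can already be emulated by shuffling a flat view;
a sketch, assuming a contiguous array so that ravel() returns a view:

import numpy as np

a = np.arange(12).reshape(3, 4)
np.random.shuffle(a)       # today: permutes only the rows (axis 0)

flat = a.ravel()           # a view into `a` for a contiguous array
np.random.shuffle(flat)    # emulated axis=None: permutes all items of `a`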

I don't think we want to take the path of Matlab: a huge number of
powerful functions, whose possibilities few people know about.

regards,
Sebastian




Re: [Numpy-discussion] #2522 numpy.diff fails on unsigned integers

2014-11-04 Thread Sebastian
On 2014-11-04 15:06, Todd wrote:
> On Tue, Nov 4, 2014 at 2:50 PM, Sebastian Wagner <se...@sebix.at> wrote:
>
> Hello,
>
> I want to bring up Issue #2522 'numpy.diff fails on unsigned integers
> (Trac #1929)' [1], as it was responsible for an error in one of our
> programs. Short explanation of the bug: np.diff performs a subtraction
> on the input array. If this is of type uint and the data contains
> decreasing values, the result is an arithmetic underflow.
>
> >>> np.diff(np.array([0,1,0], dtype=np.uint8))
> array([  1, 255], dtype=uint8)
>
> @charris proposed either
> - a note to the doc string and maybe an example to clarify things
> - or raise a warning
> but with a discussion on the list.
>
> I would like to start it now, as it is an error which is not easily
> detectable (no errors or warnings are thrown). In our case, the type of
> a data sequence containing only zeros and ones, which like every other
> one had been f8, was changed to u4. As the programs looked for values
> ==1 and ==-1, they broke silently.
> In my opinion, a note in the docs is not enough and does not help if
> the type is changed or set after the program has been written.
> I'd go for automatic upcasting of uints by default, with an option to
> turn it off if the current behavior is explicitly wanted. This wouldn't
> be correct from the point of view of a programmer, but most of the
> users have a scientific background and expect it 'to work', rather
> than something that is theoretically correct but not convenient. (I
> count myself in the first group.)
>
>
>
> When you say "automatic upcasting", that would be, for example uint8
> to int16?  What about for uint64?  There is no int128.
The upcast should go to the next bigger signed type, otherwise it would
again result in wrong values. For uint64 we can't do that (there is no
int128), so it has to stay.
> Also, when you say "by default", is this only when an overflow is
> detected, or always?
I don't know how I could detect an overflow in the diff function. In the
subtraction itself it should be possible, but that's very deep in the
numpy internals.
> How would the option to turn it off be implemented?  An argument to
> np.diff or some sort of global option?
I thought of a parameter upcast_int=True for the function.
> -- 
> gpg --keyserver keys.gnupg.net --recv-key DC9B463B
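
In the meantime the upcast is easy to do by hand; a sketch of what the
proposed default could look like (diff_signed is a made-up name):

import numpy as np

def diff_signed(a, n=1, axis=-1):
    # Upcast unsigned ints to the next bigger signed type before
    # differencing; uint64 is left alone (there is no int128).
    if a.dtype.kind == 'u' and a.dtype.itemsize < 8:
        a = a.astype(np.promote_types(a.dtype, np.int8))  # uint8 -> int16, ...
    return np.diff(a, n=n, axis=axis)

diff_signed(np.array([0, 1, 0], dtype=np.uint8))   # array([ 1, -1], dtype=int16)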



Re: [Numpy-discussion] #2522 numpy.diff fails on unsigned integers

2014-11-12 Thread Sebastian
On 2014-11-04 19:44, Charles R Harris wrote:
> On Tue, Nov 4, 2014 at 11:19 AM, Sebastian  wrote:
>
>> On 2014-11-04 15:06, Todd wrote:
>>> On Tue, Nov 4, 2014 at 2:50 PM, Sebastian Wagner
>>> <se...@sebix.at> wrote:
>>>
>>> Hello,
>>>
>>> I want to bring up Issue #2522 'numpy.diff fails on unsigned
>> integers
>>> (Trac #1929)' [1], as it was responsible for an error in one
>> of our
>>> programs. Short explanation of the bug: np.diff performs a
>> subtraction
>>> on the input array. If this is of type uint and the data
>> contains
>>> decreasing values, it results in an arithmetic underflow.
>>>
>>> >>> np.diff(np.array([0,1,0], dtype=np.uint8))
>>> array([ 1, 255], dtype=uint8)
>>>
>>> @charris proposed either
>>> - a note to the doc string and maybe an example to clarify
>> things
>>> - or raise a warning
>>> but with a discussion on the list.
>>>
>>> I would like to start it now, as it is an error which is not
>> easily
>>> detectable (no errors or warnings are thrown). In our case
>> the
>>> type of a
>>> data sequence, with only zeros and ones, had type f8 as also
>> every
>>> other
>>> one, has been changed to u4. As the programs looked for
>> values ==1 and
>>> ==-1, it broke silently.
>>> In my opinion, a note in the docs is not enough and does not
>> help
>>> if the
>>> type changed or set after the program has been written.
>>> I'd go for automatic upcasting of uints by default and an
>> option
>>> to turn
>>> it off, if this behavior is explicitly wanted. This wouldn't
>> be
>>> correct
>>> from the point of view of a programmer, but as most of the
>> users
>>> have a
>>> scientific background who expect it 'to work', instead of
>> sth is
>>> theoretically correct but not convenient. (I count myself to
>> the first
>>> group)
>>>
>>>
>>>
>>> When you say "automatic upcasting", that would be, for example
>> uint8
>>> to int16? What about for uint64? There is no int128.
>> The upcast should go to the next bigger, otherwise it would again
>> result
>> in wrong values. uint64 we can't do that, so it has to stay.
>>> Also, when you say "by default", is this only when an overflow is
>>> detected, or always?
>> I don't know how I could detect an overflow in the diff-function.
>> In
>> subtraction it should be possible, but that's very deep in the
>> numpy-internals.
>>> How would the option to turn it off be implemented? An argument
>> to
>>> np.diff or some sort of global option?
>> I thought of a parameter upcast_int=True for the function.
>
> Could check for non-decreasing sequence in the unsigned case. Note
> that differences of signed integers can also overflow. One way to
> check in general is to determine the expected sign using comparisons.

I think you mean a decreasing/non-increasing instead of a non-decreasing
sequence? It's also the same check as checking for a sorted sequence.
But I currently don't know how I could do that efficiently without
np.diff in Python; in Cython it should be easily possible.
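
A comparison-based check along the lines Charles suggests is possible
without np.diff, though; a sketch:

import numpy as np

a = np.array([0, 1, 0], dtype=np.uint8)
# True wherever an unsigned difference would wrap around:
underflow = a[1:] < a[:-1]
if underflow.any():
    print("diff would underflow at steps", np.nonzero(underflow)[0])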


np.gradient has the same problem:
>>> np.random.seed(89)
>>> d = np.random.randint(0,2,size=10).astype(np.uint8); d
array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0], dtype=uint8)
>>> np.diff(d)
array([255,   0,   1, 255,   1,   0, 255,   0,   0], dtype=uint8)
>>> np.gradient(d)
array([ 255. ,  127.5,    0.5,    0. ,    0. ,    0.5,  127.5,  127.5,
          0. ,    0. ])

---
gpg --keyserver keys.gnupg.net --recv-key DC9B463B


Re: [Numpy-discussion] Question about dtype

2014-12-13 Thread Sebastian
Hi,

I'll just comment on the creation of your dtype:

[most of the code in this message was lost in the archive: everything
between a "<" and the following ">" was stripped, taking the dtype
strings with it; the surviving fragments are:]

>>> dty = np.dtype(dt)
>>> dty.names
('f0', 'f1')
>>> dty.descr
[('f0', '<...


Re: [Numpy-discussion] numpy pickling problem - python 2 vs. python 3

2015-03-06 Thread Sebastian


Hi all,

As this also affects .npy files, which use pickle internally, why can't
this be handled by Numpy itself? This breaks backwards compatibility in
a very bad way, in my opinion.

The company I worked for uses Numpy and friends a lot and also has a lot
of data in .npy and pickle files. They currently work with 2.7, but I
also tried to develop my programs to be compatible with Py 3. This was
not possible when it came to the point of dumping and loading npy files.
I think this will be a major reason why people won't take the step
forward to Py3, and Numpy will not be considered compatible with Python 3.

just my 5 cents,
Sebastian

On 03/06/2015 04:37 PM, Ryan Nelson wrote:
> Arnd,
>
> I can see where this is an issue. If you are trying to update your
> code for Py3, I still think that it would really help to add a version
> attribute of some sort to your new HDF files. You can then write a
> little check in your access code that looks for this variable. If it is
> not present, you know that it is an old file, and you can use the trick
> that I gave you. Otherwise, it will process the file as normal. It could
> even throw a little error saying that the file is outdated. You could
> write a small conversion script that could run through old files and
> reprocess them into the new format. Fortunately, Python is pretty good
> at automating tasks, even for hundreds of files :)
> It might be informative to ask on the PyTables list to see what
> they've done. The Pandas folks also do a lot with HDF files, and they
> have certainly worked their way through the Py2-3 transition. Also,
> because this is an issue with Python pickle, a quick note on SO might
> get some hits. I tried your script using a list of lists, rather than
> a list of arrays, and the same problem still persists. So, as Pauli
> notes, this is going to be a problem regardless of the type of
> attributes you set; I think you're just going to have to hard-code
> some kind of check in your code to switch behavior. I recently
> switched to using Py3 exclusively, and although it was painful at
> first, I'm quite happy with Py3 overall. I also use the Anaconda
> Python distribution, which makes it very easy to have Py2 and Py3
> environments if you need to switch back and forth.
> Sorry if that doesn't help much. Just some thoughts from my recent
> conversion experiences.
>
> Ryan
>
>
>
> On Fri, Mar 6, 2015 at 9:48 AM, Arnd Baecker <arnd.baec...@web.de> wrote:
>
> On Fri, 6 Mar 2015, Pauli Virtanen wrote:
>
> > Arnd Baecker <arnd.baec...@web.de> writes:
> > [clip]
> >> Still I would have thought that this should be working
> >> out-of-the-box, i.e. without the pickle.loads trick?
> >
> > Pickle files should be considered incompatible between Python 2
> > and Python 3.
> >
> > Python 3 interprets all bytes objects saved by Python 2 as str
> > and attempts to decode them under some unicode locale. The default
> > locale is ASCII, so it will simply just fail in most cases if the
> > files contain any binary data.
> >
> > Failing by default is also the right thing to do, since the saved
> > bytes objects might actually represent strings in some locale, and
> > ASCII is the safest guess.
> >
> > This behavior is that of Python's pickle module, and does not
> > depend on Numpy.
>
> Thank's a lot for the explanation!
>
> So what is then the recommended way to save data under python 2 so that
> they can still be loaded under python 3?
>
> For example using np.save with a list of arrays works fine
> either on python 2 or on python 3.
> However it does not work if one tries to open under python 3
> a file generated before on python 2.
> (Again, because pickle is involved internally
>"python3.4/site-packages/numpy/lib/npyio.py",
>line 393, in load  return format.read_array(fid)
>File "python34/lib/python3.4/site-packages/numpy/lib/format.py",
>line 602, in read_array  array = pickle.load(fp)
>UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0  ...
>
> Just to be clear: I don't want to beat a dead horse here - for my
> usage via pytables I was able to solve the loading of old files
> following Ryan's solutions. Personally I don't use .npy files.
> Maybe saving a list containing arrays is an unusual example ...
>
> Still, I am a little bit worried about backwards-compatibility:
> being able to load old data files is an important issue
> as by this it is possible to check whether current code still
> reproduces previously obtained (maybe also published) results.
>
>
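
For reference: later NumPy releases (1.10 and up) added keyword arguments
to np.load for exactly this situation; a minimal sketch, with made-up
file names:

import numpy as np
import pickle

# .npy file written by Python 2, loaded under Python 3:
data = np.load('old_py2_file.npy', allow_pickle=True, encoding='latin1')

# plain pickle file written by Python 2:
with open('old_py2_file.pkl', 'rb') as f:
    obj = pickle.load(f, encoding='latin1')

# 'latin1' maps bytes 0-255 one-to-one onto code points, so binary
# payloads survive the decoding step that ASCII rejects.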

Re: [Numpy-discussion] Installation on Windows

2015-03-20 Thread Sebastian


Hi,

as you ask how to install Numpy and not how to compile it, I guess you
are looking for a so-called distribution. A distribution bundles
pre-compiled packages of Numpy and others together for simple usage.
Otherwise you have to compile Numpy yourself with various dependencies;
the distribution route is much easier to accomplish. Have a look at

https://winpython.github.io/
https://code.google.com/p/pythonxy/
http://docs.continuum.io/anaconda/

regards,
Sebastian

On 03/20/2015 09:45 AM, Per Tunedal wrote:
> Hi,
> how do I install Numpy on Windows? I've tried the setup.py file, but get
> an error message:
>
> setup.py install
>
> gives:
> No module named msvccompiler in numpy.distutils; trying from distutils
> error: Unable to find vcvarsall.bat
>
> Yours,
> Per Tunedal
>
--
python programming - mail server - photo - video - https://sebix.at
To verify my cryptographic signature or send me encrypted mails, get my
key at https://sebix.at/DC9B463B.asc and on public keyservers.




Re: [Numpy-discussion] Removal of Deprecated Keywords/functionality

2015-06-21 Thread Sebastian


Hi,

> Note that the "skiprows" keyword is still used in loadtxt. It should
> probably be deprecated there for consistency, but it is possible that
> some use it as a positional argument.

skiprows is the only argument of loadtxt that allows skipping a header
or other data at the beginning which does not start with #. This is
often the case with data from measurement devices and software.
Sometimes these lines are also used to give information about the
circumstances or the probe in a non-CSV and non-tab-separated style.

Sebastian

> -- 
> python programming - mail server - photo - video - https://sebix.at
> To verify my cryptographic signature or send me encrypted mails, get my
> key at https://sebix.at/DC9B463B.asc and on public keyservers.




Re: [Numpy-discussion] floats for indexing, reshape - too strict ?

2015-07-04 Thread Sebastian


Hi,


On 07/02/2015 03:37 PM, Antoine Pitrou wrote:
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> TypeError: list indices must be integers, not float
>
>
> I don't think relaxing type checking here does any good.
Python is also strongly typed, which means that types are never
converted silently. I think a library should follow the behavior of the
language.

https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic%20language%20and%20also%20a%20strongly%20typed%20language

Sebastian

--
python programming - mail server - photo - video - https://sebix.at
To verify my cryptographic signature or send me encrypted mails, get my
key at https://sebix.at/DC9B463B.asc and on public keyservers.




Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-08 Thread Sebastian
On 12/08/2015 02:17 AM, Warren Weckesser wrote:
> On Sun, Dec 6, 2015 at 6:55 PM, Allan Haldane <allanhald...@gmail.com> wrote:
>
> It has also crossed my mind that np.random.randint and
> np.random.rand could use an extra 'dtype' keyword.
>
> +1.  Not a high priority, but it would be nice.
Opened an issue for this: https://github.com/numpy/numpy/issues/6790
> Warren
Sebastian
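
For reference, np.random.randint did later grow a dtype keyword (in
NumPy 1.11); usage looks like:

import numpy as np

# ten random bytes, drawn directly as uint8:
a = np.random.randint(0, 256, size=10, dtype=np.uint8)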





Re: [Numpy-discussion] Proposal: stop providing official win32 downloads (for now)

2015-12-22 Thread Sebastian
Hi,

On 12/22/2015 08:11 PM, Chris Barker wrote:
> Any way to know how many people are running 32 bit Python on Windows
> these days??

Approximately 25% of total Winpython downloads are 32bit. Exact numbers
depend on the release and python version. Python 2.7 support has been
dropped already, last release with 2.7 was in October.

More details on download rates (but unfortunately without absolute
numbers) here:
http://sourceforge.net/projects/winpython/files/

Sebastian

-- 
python programming - mail server - photo - video - https://sebix.at
To verify my cryptographic signature or send me encrypted mails, get my
key at https://sebix.at/DC9B463B.asc and on public keyservers.






Re: [Numpy-discussion] Bump warning stacklevel

2016-01-27 Thread sebastian

On 2016-01-27 21:01, Ralf Gommers wrote:

On Wed, Jan 27, 2016 at 7:26 PM, Sebastian Berg wrote:


Hi all,

in my PR about warnings suppression, I currently also have a commit
which bumps the warning stacklevel to two (or three), i.e. use:

warnings.warn(..., stacklevel=2)

(almost) everywhere. This means that, for example (taking only the
empty-slice warning):

np.mean([])

would not print:

/usr/lib/python2.7/dist-packages/numpy/core/_methods.py:55:
RuntimeWarning: Mean of empty slice.
warnings.warn("Mean of empty slice.", RuntimeWarning)

but instead print the actual `np.mean([])` code line (the repetition of
the warning command is always a bit funny).

The advantage is nicer printing for the user.

The disadvantage would probably mostly be that existing warning filters
that use the `module` keyword argument will fail.

Any objections/thoughts about doing this change to try to better report
the offending code line?
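
For illustration, a minimal sketch of the difference (the file name is
made up):

# inside library.py
import warnings

def mean_of_empty():
    # stacklevel=2 attributes the warning to the caller of
    # mean_of_empty() rather than to the line below:
    warnings.warn("Mean of empty slice.", RuntimeWarning, stacklevel=2)

# A script calling library.mean_of_empty() now sees its own source line
# in the warning instead of library.py's warnings.warn(...) line.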


This has annoyed me for a long time, it's hard now to figure out where
warnings really come from. Especially when running something large
like scipy.test(). So +1.


Frankly, I am not sure whether there is a Python standard about this,
but I would expect that for a library such as numpy the change makes
sense. But if downstream uses warning filters with `module`, we might
want to reconsider.


There probably are usages of `module`, but I'd expect that it's used a
lot less than `category` or `message`. A quick search through the
scipy repo gave me only a single case where `module` was used, and
that's in deprecated weave code so soon the count is zero.  Also, even
for relevant usage, nothing will break in a bad way - some more noise
or a spurious test failure in numpy-using code isn't the end of the
world I'd say.

One issue will be how to keep this consistent. `stacklevel` is used so
rarely that new PRs will always omit it for new warnings. Will we just
rely on code review, or would a private wrapper around `warn` to use
inside numpy plus a test that checks that the wrapper is used
everywhere be helpful here?



Yeah, I mean you could add tests for the individual functions in
principle. I am not sure if adding an alias helps much; how are we going
to test that warnings.warn is not being used? Seems like quite a bit of
voodoo necessary for that.

- Sebastian




Ralf




[Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000, dtype='b1')

2009-09-11 Thread Sebastian
Hello,

The folks at stsci (Jim T.) are not able to reproduce this error with
1.4.0.dev7362 so I guess there is something wrong with my numpy
installation.
I also tried '1.4.0.dev7362' and numpy1.3 (stable) but alas, the same error!

My system:
[r...@siate numpy]# uname -a
Linux siate.iate.oac.uncor.edu 2.6.27.25-78.2.56.fc9.x86_64 #1 SMP Thu Jun
18 12:24:37 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

Below is the numpy error; the output of the numpy build and install is
attached:
python setup.py build > build.txt
python setup.py install > install.txt

Can't work out what is going wrong here. I don't think I'm missing some
dependency nor mixing compilers, but maybe I'm wrong, any hints?
best regards,
- Sebastian Gurovich


[r...@siate soft]# ipython
Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:56)
Type "copyright", "credits" or "license" for more information.

IPython 0.8.3 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help  -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [6]: numpy.__version__
Out[6]: '1.4.0.dev7375'

In [7]: a=numpy.zeros(0x80000000,dtype='b1')

In [8]: a.data
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/sebaguro/Desktop/soft/<ipython console> in <module>()

ValueError: size must be zero or positive
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_7375
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib64
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib64
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib64
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib64/atlas
  libraries f77blas,cblas,atlas not found in /usr/lib64/sse2
  libraries f77blas,cblas,atlas not found in /usr/lib64
  libraries f77blas,cblas,atlas not found in /usr/lib
  NOT AVAILABLE

blas_info:
  libraries blas not found in /usr/local/lib64
  libraries blas not found in /usr/local/lib
  FOUND:
libraries = ['blas']
library_dirs = ['/usr/lib64']
language = f77

  FOUND:
libraries = ['blas']
library_dirs = ['/usr/lib64']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib64
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib64
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
  libraries lapack_atlas not found in /usr/local/lib64
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas
  libraries lapack_atlas not found in /usr/lib64/atlas
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2
  libraries lapack_atlas not found in /usr/lib64/sse2
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib64
  libraries lapack_atlas not found in /usr/lib64
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib64
  libraries lapack_atlas not found in /usr/local/lib64
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib64/atlas
  libraries lapack_atlas not found in /usr/lib64/atlas
  libraries f77blas,cblas,atlas not found in /usr/lib64/sse2
  libraries lapack_atlas not found in /usr/lib64/sse2
  libraries f77blas,cblas,atlas not found in /usr/lib64
  libraries lapack_atlas not found in /usr/lib64
  libraries f77blas,cblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

lapack_info:
  libraries lapack n

Re: [Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000, dtype='b1')

2009-09-14 Thread Sebastian
Thanks for the help.
I think that deleting the old build directory before rebuilding may have
been the trick.
The output below shows i'm no longer reproducing the error.
best wishes,
- Sebastian Gurovich

In [3]: numpy.__version__
Out[3]: '1.3.0'
In [4]: a=numpy.zeros(0x80000000,dtype='b1')
In [5]: a.data
Out[5]: <read-write buffer ...>

Charles R Harris wrote:

>
>
> 2009/9/13 Nadav Horesh 
>
>>
>> Could it be a problem of python version? I get no error with python2.6.2
>> (on amd64 gentoo)
>>
>>  Nadav
>>
>> -----Original Message-----
>> From: numpy-discussion-boun...@scipy.org on behalf of David Cournapeau
>> Sent: Sun 13-September-09 09:48
>> To: Discussion of Numerical Python
>> Subject: Re: [Numpy-discussion] 64-bit Fedora 9 a=numpy.zeros(0x80000000,
>> dtype='b1')
>>
>> Charles R Harris wrote:
>> >
>> >
> > On Sat, Sep 12, 2009 at 9:03 AM, Citi, Luca <lc...@essex.ac.uk> wrote:
>> >
>> > I just realized that Sebastian posted its 'uname -a' and he has a
>> > 64bit machine.
>> > In this case it should work as mine (the 64bit one) does.
>> > Maybe during the compilation some flags prevented a full 64bit
>> > code to be compiled?
>> > __
>> >
>> >
>> > Ints are still 32 bits on 64 bit machines, but the real question is
>> > how python interprets the hex value.
>>
>>
>> That's not a python problem: the conversion of the object to a C
>> int/long happens in numpy (in PyArray_IntpFromSequence in this case). I
>> am not sure I understand exactly what the code is doing, though. I don't
>> understand the rationale for #ifdef/#endif in the one item in shape
>> tuple case (line 521 and below), as well as the call to PyNumber_Int,
>>
>>
> Possibly, I get
>
> In [1]: a=numpy.zeros(0x80000000,dtype='b1')
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
>
> /home/charris/<ipython console> in <module>()
>
> ValueError: Maximum allowed dimension exceeded
>
> This on 32 bit fedora 11 with  python 2.6. Hmm, "maximum allowed size
> exceeded" might be a better message.
>
> Chuck
>
>


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Sebastian
Did you try using the parameter range?
I do something like this.
regards

> ax = fig.add_subplot(1,1,1)
> pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
> n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
> range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
> normed=1, facecolor='y', alpha=0.5)
> ax.set_xlabel(r'\Large$ \rm{values}$')
> ax.set_ylabel(r'\Large Delatavalue/Value')
>
>
> gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
>
> gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
> pylab.plot(gausx,gaus, color='red', lw=2)
> ax.set_xlim(-1.5, 1.5)
> ax.grid(True)
>

On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
wrote:

> josef.p...@gmail.com wrote:
> > On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold 
> wrote:
>
> >>  This kind of info might be useful to other newcomers
> >> somewhere... <http://www.scipy.org/History_of_SciPy>?  Thoughts on
> >> posting this on the wiki here?
> >
> > I also agree. It will improve with the newly redesigned website for
> scipy.org
> > However, I cannot find the link right now for the development version of
> > the new website.
>
> Feel free to crib whatever you want from my post for that -- or suggest
> a place for me to put it, and I'll do it. I'm just not sure where it
> should go at this point.
>
> -Chris
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Sebastian
Hi Chris, yeah there should, try the following:
import numpy
import matplotlib.pyplot as pylab
regards

On Fri, Nov 27, 2009 at 8:47 PM, Wayne Watson
wrote:

> I tried this and it put ranges on y from 0 to 0.45 and x from 5 to 50.
>
> import numpy as np
> import pylab
>
> v = np.array([20, 15,10,30, 50, 30, 20, 25, 10])
> #Plot a normalized histogram
> print np.linspace(0,50,10)
> pylab.hist(v, normed=1, bins=np.linspace(0,9,10), range=(0,100))
> pylab.show()
>
> I  added the two imports. I got a fig error on the first line.
> import pylab
> import numpy
>
> Shouldn't there be a pylab.show() in there?
>
> ax = fig.add_subplot(1,1,1)
> pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
> n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
>
> range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
> normed=1, facecolor='y', alpha=0.5)
> ax.set_xlabel(r'\Large$ \rm{values}$')
> ax.set_ylabel(r'\Large Delatavalue/Value')
>
>
> gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
>
> gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
> pylab.plot(gausx,gaus, color='red', lw=2)
> ax.set_xlim(-1.5, 1.5)
> ax.grid(True)
>
> Sebastian wrote:
> > Did you try using the parameter range?
> > I do something like this.
> > regards
> >
> > ax = fig.add_subplot(1,1,1)
> > pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
> > n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
> >
> range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
> > normed=1, facecolor='y', alpha=0.5)
> > ax.set_xlabel(r'\Large$ \rm{values}$')
> > ax.set_ylabel(r'\Large Delatavalue/Value')
> >
> >
> gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
> >
> gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
> > pylab.plot(gausx,gaus, color='red', lw=2)
> > ax.set_xlim(-1.5, 1.5)
> > ax.grid(True)
> >
> >
> > On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
> > <chris.bar...@noaa.gov> wrote:
> >
> > josef.p...@gmail.com <mailto:josef.p...@gmail.com> wrote:
> > > On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
> > > <jsseab...@gmail.com> wrote:
> >
> > >>  This kind of info might be useful to other newcomers
> > >> somewhere...  <http://www.scipy.org/History_of_SciPy>?  Thoughts
> on
> > >> posting this on the wiki here?
> > >
> > > I also agree. It will improve with the newly redesigned website
> > for scipy.org <http://scipy.org>
> > > However, I cannot find the link right now for the development
> > version of
> > > the new website.
> >
> > Feel free to crib whatever you want from my post for that -- or
> > suggest
> > a place for me to put it, and I'll do it. I'm just not sure where it
> > should go at this point.
> >
> > -Chris
> >
> >
> > --
> > Christopher Barker, Ph.D.
> > Oceanographer
> >
> > Emergency Response Division
> > NOAA/NOS/OR&R(206) 526-6959   voice
> > 7600 Sand Point Way NE   (206) 526-6329   fax
> > Seattle, WA  98115   (206) 526-6317   main reception
> >
> > chris.bar...@noaa.gov <mailto:chris.bar...@noaa.gov>
> >
>
> --
>   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)
>
> (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
>  Obz Site:  39° 15' 7" N, 121° 2' 32" W, 2700 feet
>
>   350 350 350 350 350 350 350 350 350 350
> Make the number famous. See 350.org
>The major event has passed, but keep the number alive.
>
>Web Page: 
>


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-28 Thread Sebastian
On Sat, Nov 28, 2009 at 1:01 AM, <josef.p...@gmail.com> wrote:

> On Fri, Nov 27, 2009 at 9:44 PM, Wayne Watson
>  wrote:
> > Joseph,
> > That got it by the fig problem but there is yet another one. value is
> > not defined on the very long line:
> > range = ...
> >Wayne
>
> (values is the data array, ... no idea about
> scientificstat.standardDeviation)
>
> Sebastian's example is only part of a larger script that defines many
> of the variables and functions that are used.
>
> If you are not yet familiar with these examples, maybe you look at the
> self contained examples in the matplotlib docs. At least that's what I
> do when I only have a rough idea about what graph I want to do but
> don't know how to do it with matplotlib. I usually just copy a likely
> looking candidate and change it until it (almost)  produces what I
> want.
> For example look at histogram examples in
>
> http://matplotlib.sourceforge.net/examples/index.html
>
> Josef
>
>
> > josef.p...@gmail.com wrote:
> >> On Fri, Nov 27, 2009 at 9:05 PM, Sebastian  wrote:
> >>
> >> ...
> >> you need to create a figure, before you can use it
> >>
> >> fig = pylab.figure()
> >>
> >> Josef
> >>
> >>
> >>>> ax = fig.add_subplot(1,1,1)
> >>>> pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
> >>>> n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
> >>>>
> >>>>
> range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
> >>>> normed=1, facecolor='y', alpha=0.5)
> >>>> ax.set_xlabel(r'\Large$ \rm{values}$')
> >>>> ax.set_ylabel(r'\Large Delatavalue/Value')
> >>>>
> >>>>
> >>>>
> gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
> >>>>
> >>>>
> gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
> >>>> pylab.plot(gausx,gaus, color='red', lw=2)
> >>>> ax.set_xlim(-1.5, 1.5)
> >>>> ax.grid(True)
> >>>>
> >>>> Sebastian wrote:
> >>>>
> >>>>> Did you try using the parameter range?
> >>>>> I do something like this.
> >>>>> regards
> >>>>>
> >>>>> ax = fig.add_subplot(1,1,1)
> >>>>> pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
> >>>>> n, bins, patches = pylab.hist(values,
> bins=math.sqrt(len(values)),
> >>>>>
> >>>>>
> range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
> >>>>> normed=1, facecolor='y', alpha=0.5)
> >>>>> ax.set_xlabel(r'\Large$ \rm{values}$')
> >>>>> ax.set_ylabel(r'\Large Delatavalue/Value')
> >>>>>
> >>>>>
> >>>>>
> gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
> >>>>>
> >>>>>
> gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
> >>>>> pylab.plot(gausx,gaus, color='red', lw=2)
> >>>>> ax.set_xlim(-1.5, 1.5)
> >>>>> ax.grid(True)
> >>>>>
> >>>>>
> >>>>> On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
> >>>>> mailto:chris.bar...@noaa.gov>> wrote:
> >>>>>
> >>>>> josef.p...@gmail.com <mailto:josef.p...@gmail.com> wrote:
> >>>>> > On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
> >>>>> mailto:jsseab...@gmail.com>> wrote:
> >>>>>
> >>>>> >>  This kind of info might be useful to other newcomers
> >>>>> >> somewhere...  <http://www.scipy.org/History_of_SciPy>?
>  Thoughts
> >>>>> on
> >>>>> >> posting this on the wiki here?
> >>>>> >
> >>>>> > I also agree. It will improve with the newly redesigned website
> >>>>> for scipy.org <http://scip

Re: [Numpy-discussion] Quadratic Optimization Problem

2006-11-21 Thread Sebastian Haase
Don't know the complete answer - but try cobyla in scipy (scipy.optimize).

-Sebastian
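
A minimal sketch of what that might look like (c, Q and r are
illustrative names; COBYLA wants constraints in the g(x) >= 0 form):

import numpy as np
from scipy.optimize import fmin_cobyla

c = np.array([1.0, 2.0])   # linear objective: minimize c . x
Q = np.eye(2)              # quadratic constraint: x . Q x <= r
r = 1.0

def objective(x):
    return np.dot(c, x)

def constraint(x):
    return r - np.dot(x, np.dot(Q, x))

xopt = fmin_cobyla(objective, np.zeros(2), [constraint], rhoend=1e-7)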

On Tuesday 21 November 2006 15:44, amit soni wrote:
> Hi,
>
> I need to do a quadratic optimization problem in python
> where the constraints are quadratic and objective function is linear.
> Its a convex optimization problem.
>
> What are the possible choices to do this.
>
> Thanks
> Amit
>
>
>
>


Re: [Numpy-discussion] ScientificPython with NumPy

2006-11-24 Thread Sebastian Haase

Hi,
Just out of curiosity: Can I ask what is special about a
Geometry.Vector  ? What is the difference to a normal numpy array ?

I hope this is a good place to ask this?
Thanks,  -Sebastian Haase

On 11/24/06, Gary Ruben <[EMAIL PROTECTED]> wrote:


Hi Konrad,
I can report that 2.7.1 installs OK on WinXP with the most recent
Enthought Python. I used mingw, so replaced the build instruction with
python setup.py build --numpy --compiler=mingw32
All I tried was the Geometry.Vector class which my older code uses
heavily - this behaves well.
regards,
Gary

[EMAIL PROTECTED] wrote:
> Those who would like to test-drive ScientificPython with NumPy can do
> so now: just download version 2.7.1 from
>
>   http://sourcesup.cru.fr/
>
> and install using
>
>   python setup.py build --numpy
>   python setup.py install --numpy
>
> Note that I have relied on NumPy's automatic code converter to
> identify the places that needed to be changed. In other words, there
> could still be incompatibilities, of the kinds that the converter
> cannot spot. The changes are mine, however, as my goal was to have
> Python code compatible with both Numeric and NumPy.
>
> Bug reports are welcome, ideally through the Web site mentioned above.
>
> Konrad.


Re: [Numpy-discussion] PyArray_DIMS problem

2006-12-20 Thread Sebastian Haase

On 12/20/06, Gennan Chen <[EMAIL PROTECTED]> wrote:


 Hi!


I have problem with this function call under FC6 X86_64 for my own numpy
extension


printf("\n %d %d %d",
PyArray_DIM(imgi,0),PyArray_DIM(imgi,1),PyArray_DIM(imgi,2))


it gave me


166 256 256


if I tried:


int *dim;
dim = PyArray_DIMS(imgi)
printf("\n %d %d %d", dim[0], dim[1], dim[2]);


it gave me 166 0 256




Hi -
maybe I'm dense here -
but how is this /supposed/ to work ? Is PyArray_DIMS allocating some memory
that never gets freed !?
I thought "tuples" in C had to always be passed into a function, so that
that function could modify it, as in:

const int maxNDim = 20;
int dim[maxNDim];
PyArray_DIMS(imgi,  dim);


What am I missing ... ?
-Sebastian
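
For what it's worth, PyArray_DIMS does not allocate anything: it returns
a pointer to the npy_intp dimensions stored in the array object itself.
On a 64-bit platform npy_intp is 64 bits wide, so reading the result
through an int* interleaves zero words -- which matches the "166 0 256"
output above. A sketch of the corrected usage:

/* PyArray_DIMS returns npy_intp*, a view into the array struct;
   nothing to free, but the element type must match. */
npy_intp *dim = PyArray_DIMS(imgi);
printf("\n %ld %ld %ld", (long)dim[0], (long)dim[1], (long)dim[2]);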




[Numpy-discussion] How do I test if an array contains NaN ?

2007-01-04 Thread Sebastian Haase
Hi!

Simple question:
How do I test if an array contains NaN ?
Or others like inf ...?

Thanks,
Sebastian Haase


Re: [Numpy-discussion] How do I test if an array contains NaN ?

2007-01-04 Thread Sebastian Haase
On 1/4/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 1/4/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > How do I test if an array contains NaN ?
> > Or others like inf ...?
>
> isnan()
> ~isfinite()
> any()

Aah ! Thanks,
you mean I have to create an intermediate array that tells me for
every element whether it's a nan, and then check any(...) on this !?

That's OK for what I need -- seems excessive for large arrays though.

Thanks again,
Sebastian
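
Spelled out, the suggestion looks like this (a quick sketch):

import numpy as np

a = np.array([1.0, np.nan, np.inf])
np.isnan(a).any()          # True: at least one NaN
(~np.isfinite(a)).any()    # True: at least one NaN or +/-inf
np.isinf(a).any()          # True: at least one infinity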


[Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-04 Thread Sebastian Haase
Hi!
I'm continuing my code conversion to numpy.
Trying to track down a segmentation fault, I found:
1) ndimage crashed because I was feeding (nan,nan) as shift value.
2) but where did I get the nan from? I just found that there is code
like this (*paraphrased*):
size = N.array( [255] )
size.resize( 2 )

... trying this interactively I get an exception:
>>> a = N.array([5])
>>> a
[5]
>>> a.resize(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: cannot resize an array that has been referenced or is referencing
another array in this way.  Use the resize function
>>> N.__version__
'1.0.2.dev3487'

in any case:  inside the script it somehow generated a nan  --- is
there a bug in numpy !?


I remember that there was some discussion about resize  !?
What should I add to the Scipy Wiki   numarray page about this  ?
(   http://www.scipy.org/Converting_from_numarray  )

Thanks,
Sebastian Haase


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-04 Thread Sebastian Haase
On 1/4/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:

> >>> N.__version__
> '1.0.2.dev3487'
>
> in any case:  inside the script it somehow generated a nan  --- is
> there a bug in numpy !?

No bug here ! see below !

> I remember that there was some discussion about resize  !?
> What should I add to the Scipy Wiki   numarray page about this  ?
> (   http://www.scipy.org/Converting_from_numarray  )
>

OK - the reference problem in my interactive shell came from the
implicit '_' variable that always references the last result. But maybe
even more worrisome for converting from numarray is this:
>>> a = N.array([5])
>>> 999  # to kill '_' - reference
999
>>> a.resize(2)
>>> a
[5 0]

in numarray you would get
>>> a = na.array([5])
>>> a.resize(2)
[5 5]
>>> a
[5 5]

!! Why is numpy filling with 0s while numarray repeats (cycles is more
what it does, I think!) the last element(s) ??

How did Numeric do this ?

- Sebastian


[Numpy-discussion] compress numpy vs .numarray

2007-01-04 Thread Sebastian Haase
Hi!
when calling compress
I get this error message after moving to numpy:

ValueError: 'condition must be 1-d array'

Is the reason for this the change of the default axis from
axis=0
to
axis=None

What does axis=None mean in this case !?

Thanks,
-Sebastian


Re: [Numpy-discussion] compress numpy vs .numarray

2007-01-04 Thread Sebastian Haase
On 1/4/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> Hi!
> when calling compress
> I get this error message after moving to numpy:
>
> ValueError: 'condition must be 1-d array'
>
> Is the reason for this the change of the default axis from
> axis=0
> to
> axis=None
>
> What does axis=None mean in this case !?
>

Is N.extract()  the equivalent of numarray.compress()  ?


-Sebastian
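
For the record, np.extract is documented as equivalent to compress
applied to raveled inputs, so yes for the boolean case; a quick sketch:

import numpy as np

a = np.arange(6).reshape(2, 3)
cond = a % 2 == 0
np.compress(cond.ravel(), a.ravel())   # array([0, 2, 4])
np.extract(cond, a)                    # array([0, 2, 4])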


[Numpy-discussion] recompiling needed for binary module after numpy 1.0

2007-01-05 Thread Sebastian Haase
Hi!
After I upgraded from numpy 1.0 to 1.0.1
I get an abort in a C-module:

RuntimeError: module compiled against version 102 of C-API but this
version of numpy is 109
Fatal Python error: numpy.core.multiarray failed to import... exiting.
/opt/bin/priithonN: line 37:  1409 Aborted $PY $*

I thought that within the numpy 1.0 series no recompile
of external C-modules would be needed !?

Please explain.

Thanks,
Sebastian Haase


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-05 Thread Sebastian Haase
On 1/5/07, Russell E Owen <[EMAIL PROTECTED]> wrote:
> In article
> <[EMAIL PROTECTED]>,
>  "Sebastian Haase" <[EMAIL PROTECTED]> wrote:
>
> > On 1/4/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > 
> > > >>> N.__version__
> > > '1.0.2.dev3487'
> > >
> > > in any case:  inside the script it somehow generated a nan  --- is
> > > there a bug in numpy !?
> >
> > No bug here ! see below !
> >
> > > I remember that there was some discussion about resize  !?
> > > What should I add to the Scipy Wiki   numarray page about this  ?
> > > (   http://www.scipy.org/Converting_from_numarray  )
> > >
> >
> > OK - the reference problem in my interactive shell came from the
> > implicit '_' variable that always references the last result. But
> > maybe even more worry some for the converting from numarray is this:
> > >>> a = N.array([5])
> > >>> 999  # to kill '_' - reference
> > 999
> > >>> a.resize(2)
> > >>> a
> > [5 0]
> >
> > in numarray you would get
> > >>> a = na.array([5])
> > >>> a.resize(2)
> > [5 5]
> > >>> a
> > [5 5]
> >
> > !! why is numpy filling with 0s and numarray repeats (cycles I think
> > is more what it does !) the last element(s) ??
> >
> > How did numeric do this ?
>
> Here's what I get for Numeric 24.2:
>
> >>> import Numeric as N
> >>> a = N.array([5])
> >>> a
> array([5])
> >>> a.resize(2)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: cannot resize an array that has been referenced or is
> referencing another array in this way.  Use the resize function.
> >>> N.resize(a, 2)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/
> site-packages/Numeric/Numeric.py", line 422, in resize
> total_size = multiply.reduce(new_shape)
> ValueError: dimension not in array
> >>> N.resize(a, [2])
> array([5, 5])
>
Thanks for testing -- to be complete, I will append my tests for
numarray and numpy below.
I would consider the different result of function vs. method a bug !!
Please comment, Sebastian.

>>> import numarray as na
>>> import numpy as N
>>> a = na.array([5])
>>> a.resize(2)
[5 5]
>>> a
[5 5]
>>> a = na.array([5])
>>> na.resize(a,2)
[5 5]
>>> a
[5]
>>> a = N.array([5])
>>> a.resize(2)
>>> a
[5 0]
>>> a = N.array([5])
>>> N.resize(a, 2)
[5 5]
>>> a
[5]

### Note: [5 5]  vs. [5 0]  !!!


Re: [Numpy-discussion] recompiling needed for binary module after numpy 1.0

2007-01-05 Thread Sebastian Haase
You are right again, of course !
Sorry for the noise - I should have just checked the date of my .so
file (which is August 15).

At least I understood the "official numpy intention of version 1.0"
right then - just  checking ...

Thanks,
Sebastian.

On 1/5/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
> > Hi!
> > After I upgraded from numpy 1.0 to 1.0.1
> > I get an abort in a C-module:
> >
> > RuntimeError: module compiled against version 102 of C-API but this
> > version of numpy is 109
> > Fatal Python error: numpy.core.multiarray failed to import... exiting.
> > /opt/bin/priithonN: line 37:  1409 Aborted $PY $*
> >
> > I thought that within numpy 1.0 there was no recompile
> > for external C-modules needed !?
>
> Ummm, the number was bumped before the 1.0 release.
>
> Bump of the version number:
>   http://projects.scipy.org/scipy/numpy/changeset/3361
>
> Tagging the 1.0 release:
>   http://projects.scipy.org/scipy/numpy/changeset/3396/tags/1.0
>
> Are you sure that you weren't upgrading from a beta instead of the 1.0 
> release?
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
>  that is made terrible by our own mad attempt to interpret it as though it had
>  an underlying truth."
>   -- Umberto Eco
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] recompiling needed for binary module after numpy 1.0

2007-01-05 Thread Sebastian Haase
Hi,
All I did was recompile my (one-source-file) C extension. I made
sure that it was including the current numpy header files.

I did not use anything related to distutils ( no "python setup.py ..." ).

Does that answer your question ?

-Sebastian


On 1/5/07, belinda thom <[EMAIL PROTECTED]> wrote:
> Sebastian,
>
> I had the same problem awhile back; I'm curious---how'd you fix your
> mismatch (i.e. where'd you go to get code, did you run python
> setup?, ...). I realize these are very basic questions, but I've
> never installed anything from source (aside from using easy_install),
> so it would be nice to extend my very limited knowledge in this area.
>
> Thx,
>
> --b
>
> 
>
>
> You are right again, of course !
> Sorry for the noise - I should have just checked the date of my so
> file (which is August 15)
>
> At least I understood the "official numpy intention of version 1.0"
> right then - just  checking ...
>
> Thanks,
> Sebastian.
>
> On 1/5/07, Robert Kern  wrote:
>  > Sebastian Haase wrote:
>  > > Hi!
>  > > After I upgraded from numpy 1.0 to 1.0.1
>  > > I get an abort in a C-module:
>  > >
>  > > RuntimeError: module compiled against version 102 of C-API
> but this
>  > > version of numpy is 109
>  > > Fatal Python error: numpy.core.multiarray failed to import...
> exiting.
>  > > /opt/bin/priithonN: line 37:  1409 Aborted $PY $*
>  > >
>  > > I thought that within numpy 1.0 there was no recompile
>  > > for external C-modules needed !?
>  >
>  > Ummm, the number was bumped before the 1.0 release.
>  >
>  > Bump of the version number:
>  >   http://projects.scipy.org/scipy/numpy/changeset/3361
>  >
>  > Tagging the 1.0 release:
>  >   http://projects.scipy.org/scipy/numpy/changeset/3396/tags/1.0
>  >
>  > Are you sure that you weren't upgrading from a beta instead of the
> 1.0 release?
>  >
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] recompiling needed for binary module after numpy 1.0

2007-01-05 Thread Sebastian Haase
Package header files would usually be found in
PREFIX/include/python2.x

(with PREFIX being something like: /usr/lib/python24  or C:/python24 )


However, for obscure reasons, in numpy
the header files are in

PREFIX/lib/python2.x/site-packages/numpy/core


You have to "somehow" add this as '-I '  to your compiler
command line -- this is what setup.py would do for you.

( maybe you need to look in /usr/local/lib/... )
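
(A minimal sketch of letting numpy report that directory itself from a
setup.py -- recent numpy versions provide numpy.get_include() for exactly
this; the module name and source file below are made up:)

from distutils.core import setup, Extension
import numpy

setup(name='mymodule',
      ext_modules=[Extension('mymodule', ['mymodule.c'],
                             include_dirs=[numpy.get_include()])])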

-Sebastian


On 1/5/07, belinda thom <[EMAIL PROTECTED]> wrote:
>
> On Jan 5, 2007, at 5:32 PM, Sebastian Haase wrote:
>
> > Hi,
> > All I did is recompiling my (on source code file) C extension. I made
> > sure that it was including the current numpy header files.
>
> Where are these files located? What command did you use? (The gorey
> details would help me quite a bit, as I'm to compile-level installs,
> having usually relied on things like macports port to do the work for
> me)...
>
> >
> > I did not use anything related to distutils ( no "python
> > setup.py ..." ).
> >
> > Does that answer your question ?
>
> Almost...
>
> >
> > -Sebastian
>
>
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-08 Thread Sebastian Haase
On 1/8/07, Stefan van der Walt <[EMAIL PROTECTED]> wrote:
> On Fri, Jan 05, 2007 at 01:57:50PM -0800, Russell E Owen wrote:
> > I also checked the numpy 1.0.1 help and I confess I don't understand at
> > all what it claims to do if the new size is larger. It first says it
> > repeats a and then it says it zero-fills the output.
> >
> > >>> help(numpy.resize)
> > Help on function resize in module numpy.core.fromnumeric:
> >
> > resize(a, new_shape)
> > resize(a,new_shape) returns a new array with the specified shape.
> > The original array's total size can be any size. It
> > fills the new array with repeated copies of a.
> >
> > Note that a.resize(new_shape) will fill array with 0's
> > beyond current definition of a.
>
> The docstring refers to the difference between
>
> N.resize(x,6)
>
> and
>
> x.resize(6)
>
Hi Stéfan,

Why is there a need for this very confusing duality !?
I would almost like to file a bug report on this !

(It definitely broke "backwards compatibility" for my code coming from
numarray )

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-08 Thread Sebastian Haase
On 1/8/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
>
> >On 1/8/07, Stefan van der Walt <[EMAIL PROTECTED]> wrote:
> >
> >
> >Hi Stéfan,
> >
> >Why is there a needed for this very confusing dualty !?
> >I would almost like to file a bug report on this !
> >
> >(It definitily broke "backwards compatibility" for my code coming from
> >numarray )
> >
> >
> It's a remnant of trying to merge all the packages (done before the idea
> to have separate compatibility layers emerged).
>
> This one in particular is done because resize is used in this way at
> least once in the library code and so has a permanence it otherwise
> would not have enjoyed.
>
> I'm not sure we can change it now.  At least not until 1.1
>
> -Travis

I would suggest treating this as a real bug!
Then it could be fixed immediately.

I don't think that many people are relying on this behaviour, and if
so: it is just so confusing that my guess is that they would always be
using EITHER the function OR the method - and run into this
"bug" as soon as they accidentally use the other one.

Bug-fixes don't have to keep backwards compatibility.

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-08 Thread Sebastian Haase
On 1/8/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
>
> > I would suggest treating this as a real bug!
> > Then it could be fixed immediately.
>
> Deliberate design decisions don't turn into bugs just because you disagree 
> with
> them. Neither do those where the original decider now disagrees with them.
>
Please explain again what the original decision was based on.
I remember that there was an effort at some point to make methods and
functions more consistent.

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] functions vs. methods

2007-01-08 Thread Sebastian Haase
On 1/8/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Alan G Isaac wrote:
>
> >On Mon, 8 Jan 2007, Sebastian Haase apparently wrote:
> >
> >
> >>Please explain again what the original decision was based
> >>on.
> >>
> >>
> >
> >I think the real questions are:
> >what do the numpy developers want in the future,
> >and what is the right path from here to there?
> >
> >
> >
> >>I remember that there was an effort at some point to make
> >>methods as functions more consistent.
> >>
> >>
> >
> >Yes, but my (user's) memory is that there were a couple
> >exceptions for historical reasons.  As a user now of
> >essentially only numpy, I too would like to see those
> >exceptions go away, both for personal convenience, and for
> >the benefit of new users.

Could anyone provide a complete list  of the
"couple exceptions for historical reasons..." mentioned by Alan !?

-Sebastian.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] random permutation

2007-01-13 Thread Sebastian Haase
On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 1/11/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > Keith Goodman wrote:
> > > Why is the first element of the permutation always the same? Am I
> > > using random.permutation in the right way?
> >
> > >>> M.__version__
> > > '1.0rc1'
> >
> > This has been fixed in more recent versions.
> >
> >   http://projects.scipy.org/scipy/numpy/ticket/374
>
> I don't see any unit tests for numpy.random. I guess randomness is hard to 
> test.
>
> Would it help to seed the random number generator and at least check
> that you get the same result you got before? No existing problems
> would be found. But new ones might be caught.
>
Hi,
Is it guaranteed that a given seed produces the same sequence of
random numbers between different platforms ?
I thought this might only be guaranteed for "any given computer" to
reproduce the same numbers.
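
(A minimal sketch of the seeded self-consistency test discussed above --
it only checks reproducibility on a single machine; cross-platform
equality is the open question here:)

import numpy as np
rs1 = np.random.RandomState(1234)
rs2 = np.random.RandomState(1234)
# the same seed must yield the identical stream on any one machine
assert np.all(rs1.randn(4) == rs2.randn(4))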

-Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] random permutation

2007-01-13 Thread Sebastian Haase
On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 1/13/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > > On 1/11/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > > > Keith Goodman wrote:
> > > > > Why is the first element of the permutation always the same? Am I
> > > > > using random.permutation in the right way?
> > > >
> > > > >>> M.__version__
> > > > > '1.0rc1'
> > > >
> > > > This has been fixed in more recent versions.
> > > >
> > > >   http://projects.scipy.org/scipy/numpy/ticket/374
> > >
> > > I don't see any unit tests for numpy.random. I guess randomness is hard 
> > > to test.
> > >
> > > Would it help to seed the random number generator and at least check
> > > that you get the same result you got before? No existing problems
> > > would be found. But new ones might be caught.
> > >
> > Hi,
> > Is it guaranteed that a given seed produces the same sequence of
> > rand-number between different platforms ?
> > I thought this might only be guaranteed for "any given computer" to
> > reproduce the same numbers.
>
> I hope, and expect, that it is system independent.
>
> Here's what I get:
>
> >> rs = numpy.random.RandomState([123, 901, 789])
> >> rs.randn(4,1)
>
> array([[ 0.76072026],
>   [ 1.27712191],
>   [ 0.03497453],
>   [ 0.09056668]])
> >> rs.rand(4,1)
>
> array([[ 0.184306  ],
>   [ 0.58967936],
>   [ 0.52425903],
>   [ 0.33389408]])
> >> numpy.__version__
> '1.0.1'
>
> Linux kel 2.6.18-3-686 #1 SMP Mon Dec 4 16:41:14 UTC 2006 i686 GNU/Linux
import numpy
rs = numpy.random.RandomState([123, 901, 789])
rs.randn(4,1)
[[ 0.76072026]
[ 1.27712191]
[ 0.03497453]
[ 0.09056668]]
rs.rand(4,1)
[[ 0.184306  ]
[ 0.58967936]
[ 0.52425903]
[ 0.33389408]]
numpy.__version__
'1.0rc1'

Windows XP - Pentium 4 - (non-current numpy)

Looks promising - but how about PowerPC macs ...
-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] random permutation

2007-01-13 Thread Sebastian Haase
On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 1/13/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > > On 1/13/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > > > On 1/13/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > > > > On 1/11/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > > > > > Keith Goodman wrote:
> > > > > > > Why is the first element of the permutation always the same? Am I
> > > > > > > using random.permutation in the right way?
> > > > > >
> > > > > > >>> M.__version__
> > > > > > > '1.0rc1'
> > > > > >
> > > > > > This has been fixed in more recent versions.
> > > > > >
> > > > > >   http://projects.scipy.org/scipy/numpy/ticket/374
> > > > >
> > > > > I don't see any unit tests for numpy.random. I guess randomness is 
> > > > > hard to test.
> > > > >
> > > > > Would it help to seed the random number generator and at least check
> > > > > that you get the same result you got before? No existing problems
> > > > > would be found. But new ones might be caught.
> > > > >
> > > > Hi,
> > > > Is it guaranteed that a given seed produces the same sequence of
> > > > rand-number between different platforms ?
> > > > I thought this might only be guaranteed for "any given computer" to
> > > > reproduce the same numbers.
> > >
> > > I hope, and expect, that it is system independent.
> > >
> > > Here's what I get:
> > >
> > > >> rs = numpy.random.RandomState([123, 901, 789])
> > > >> rs.randn(4,1)
> > >
> > > array([[ 0.76072026],
> > >   [ 1.27712191],
> > >   [ 0.03497453],
> > >   [ 0.09056668]])
> > > >> rs.rand(4,1)
> > >
> > > array([[ 0.184306  ],
> > >   [ 0.58967936],
> > >   [ 0.52425903],
> > >   [ 0.33389408]])
> > > >> numpy.__version__
> > > '1.0.1'
> > >
> > > Linux kel 2.6.18-3-686 #1 SMP Mon Dec 4 16:41:14 UTC 2006 i686 GNU/Linux
> > import numpy
> > rs = numpy.random.RandomState([123, 901, 789])
> > rs.randn(4,1)
> > [[ 0.76072026]
> > [ 1.27712191]
> > [ 0.03497453]
> > [ 0.09056668]]
> > rs.rand(4,1)
> > [[ 0.184306  ]
> > [ 0.58967936]
> > [ 0.52425903]
> > [ 0.33389408]]
> > numpy.__version__
> > '1.0rc1'
> >
> > Windows XP- pentium4  - (non current numpy) -
> >
> > Looks promising - but how about PowerPC macs ...
>
> The random numbers are generated by a fixed algorithm. So they are not
> random at all. As long as everyone is using float64 I would think we'd
> all get the same (pseudo) random numbers.

I understand this - but I thought the algorithm might involve some
rounding(-error) cases that would produce different results, especially
when changing the CPU (Intel vs. PowerPC). I found this to be true
even for non-pseudo-random code: between CPU types, results were
different within a given epsilon.


-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how to recognize a numpy-scalar type at the C level and how to convert

2007-01-16 Thread Sebastian Haase
Hi!
After converting to numpy my SWIG-wrapped code runs into a bunch of errors
of this type:

  TypeError: argument number 9: a 'float' is expected,
'numpy.float32(-2.10786056519)' is received

This comes of course from the fact that numpy introduced all these new
scalar types (addressing the shortcoming of the (few) scalar types
provided by standard Python).

What would be an easy way to "recognize" such a scalar type in C-API
numpy code  and can I extract the C-scalar value from it ?
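
(For illustration, a minimal Python-side sketch -- the builtin conversions
already give back plain Python scalars; the C-API question remains:)

import numpy as np
x = np.float32(-2.10786056519)
f = float(x)    # builtin conversion yields a plain Python float
v = x.item()    # .item() likewise returns the closest native Python scalar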

Thanks for any hints,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] possible bug in recordarray assigning to a '4u4' field

2007-01-17 Thread Sebastian Haase
Hi,
ProStr = N.rec.array(None,
                     formats="u4,f4,u4,u4,4u4",
                     names=('count', 'clock', 'InitDio',
                            'nDigital', 'nAnalog'),
                     aligned=True, shape=1)
ProStr[0]['nAnalog'] = 1

I get this error message:

ValueError: shape-mismatch on array construction

ProStr['nAnalog'] = 1
works fine.

Surprisingly,
ProStr['nAnalog'] = [1,2,3,4]
works.

Could someone explain ?
Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to recognize a numpy-scalar type at the C level and how to convert

2007-01-19 Thread Sebastian Haase
Hi again,
I finally bought the numpy-book.
Browsing through the 370 pages I found this:
PyArray_IsScalar(op, cls)
Evaluates true if op is an instance of Py{cls}ArrType_Type.

Is this what I need to check for a given scalar type ?

Numpy is really much more extensive (comprehensive)  than I thought.

Thanks,
Sebastian Haase




On 1/16/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> Hi!
> After converting to numpy my SWIG-wrapped code runs a bunch of error
> of this type:
>
>   TypeError: argument number 9: a 'float' is expected,
> 'numpy.float32(-2.10786056519)' is received
>
> This comes of course from the fact that numpy introduced all these new
> scalar types (addressing the shortcoming of the (few) scalar type
> provided by standard python).
>
> What would be an easy way to "recognize" such a scalar type in C-API
> numpy code  and can I extract the C-scalar value from it ?
>
> Thanks for any hints,
> Sebastian Haase
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Build numpy without support of "long double' on OS-X

2007-01-25 Thread Sebastian Haase
Hi!
When I try running my code on
panther (10.3) with a numpy that was built on tiger (10.4)
it can't load numpy because of missing symbols
in numpy/core/umath.so
The symbols are
_acoshl$LDBL128
_acosl$LDBL128
_asinhl$LDBL128

(see my post from 5 oct 2006:
http://permalink.gmane.org/gmane.comp.python.numeric.general/8521 )

I traced the problem to the libmx system library.

Since I really don't need "long double" (128 bit) operations - I was
wondering if there is a flag to just turn them off?
Will SciPy build with this ? (Is there an equivalent flag maybe ?)

Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Build numpy without support of "long double' on OS-X

2007-01-25 Thread Sebastian Haase
On 1/25/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
> > Hi!
> > When I try running my code on
> > panther (10.3) with a numpy that was built on tiger (10.4)
> > it can't load numpy because of missing symbols
> > in numpy/core/umath.so
> > The symbols are
> > _acoshl$LDBL128
> > _acosl$LDBL128
> > _asinhl$LDBL128
> >
> > (see my post from 5 oct 2006:
> > http://permalink.gmane.org/gmane.comp.python.numeric.general/8521 )
> >
> > I traced the problem to the libmx system library.
> >
> > Since I really don't need "long double" (128 bit) operations - I was
> > wondering if there is a flag to just turn them of?
>
> Generally speaking, you need to build binaries on the lowest-versioned OS X 
> that
> you intend to run on.
>
The problem with building on 10.3 is that it generally comes only with
gcc 3.3.   I remember that some things require gcc4 - right ?

I just found this
http://developer.apple.com/documentation/DeveloperTools/Conceptual/CppRuntimeEnv/Articles/LibCPPDeployment.html
which states:
"Support for the 128-bit long double type was not introduced until Mac
OS X 10.4."

The easiest would be to be able to disable the long double functions.

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Build numpy without support of "long double' on OS-X

2007-01-25 Thread Sebastian Haase
On 1/25/07, Steve Lianoglou <[EMAIL PROTECTED]> wrote:
> >> Generally speaking, you need to build binaries on the lowest-
> >> versioned OS X that
> >> you intend to run on.
> >>
> > The problem with building on 10.3 is that it generally comes only with
> > gcc 3.3.   I remember that some things require gcc4 - right ?
>
> I think that might only bite you if you want to compile universal
> binaries, though I'm not sure if there are any other problems w/
> gcc3.3, I'm pretty sure that's the big one.
>
> Of course .. that really shouldn't matter if you're just compiling it
> for yourself for just that cpu.
>
On the contrary !
I'm trying to provide a precompiled build of numpy together with a
couple of handy
functions and classes that I made myself,
to establish Python as a development platform in
multi-dimensional image analysis.

I call it Priithon and it is built around wxPython, PyOpenGL, SWIG and
(of course) numpy [numarray until recently].
I have been working on this project for a couple years as part of my PhD,
http://www.ucsf.edu/sedat/Priithon/PriithonHandbook.html

And I'm trying to have all of
Linux, Windows, OS-X (PPC) and OS-X (intel)
supported.

Now, I'm hoping to be able to include a version of 10.4's libmx.dylib --
that might be an acceptable workaround.

Thanks,
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Build numpy without support of "long double' on OS-X

2007-01-26 Thread Sebastian Haase
On 1/26/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
>
> > The easiest would be to be able to disable the long double functions.
>
> Actually, there are a number of other configuration items that are discovered 
> by
> compiling small C programs and running them. There are a number of them that
> might give different answers on 10.3.9 if they could be compiled there. You
> would have to replace that configuration system with something that could load
> the configuration from a file or some other source.
>
> Since that's a big project, you might be content with modifying the code to
> hard-code the results for your own builds and documenting the differences from
> the result of a build from official sources.
>

I have to admit that I don't know what "configuration items" are -
but I can report that the simple approach of copying libmx.A.dylib
from a 10.4 system to a directory which I added to DYLD_LIBRARY_PATH
seems to work
on the 10.3.9 PPC machine for the first tests I ran so far.

(disclaimer: as I said I don't even use "long double" so I did not
test this of course)

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Selected and altering a submatrix

2007-01-29 Thread Sebastian Haase
How about
mat[0:3, 4:7] += 1
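
(If the rows/columns were not contiguous, a minimal sketch using numpy's
ix_ helper -- it builds the cross-product index, so no Python loop is needed:)

import numpy as N
mat = N.zeros((10, 10))
mat[N.ix_([0, 1, 2], [4, 5, 6])] += 1   # same submatrix from index lists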


-Sebastian

On 1/29/07, Steve Lianoglou <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I was just curious what the "correct" (fast) way to select and alter
> a submatrix.
>
> For example, say I have a 10x10 array and only want to add some
> number to the elements in the submatrix that consists of the [0,1,2]
> th rows, and [4,5,6]th colums.
>
> You can imagine that those rows/cols select a square in the top-
> middle of the 10x10 which I want to alter.
>
> The only way I can get this to work is if I iterate over the indices
> in one of the dimensions (say the rows) and use the column indices to
> slice out the relevant elements to add to .. is there a NumPy-thonic
> way to do this:
>
> ===
> import numpy as N
> mat = N.zeros((10,10))
> rows = [0,1,2]
> cols = [4,5,6]
>
> for row in rows:
>mat[row,cols] += 1
>
> 
>
> I found something on the lists from a few years back that was in
> reference to numeric or numarray that suggested doing some gymnastics
> with take/put, but it still seemed as if there was no way to slice
> out this view of a matrix w/o making a copy.
>
> Thanks,
> -steve
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] memmap close() and flush()

2007-01-30 Thread Sebastian Haase
Hi!
Do numpy memmap have a way of explicitly
flushing data to disk
and/or
closing the memmap.

In numarray these were methods called
memmappedArr.flush()
and
memmappedArr.close()
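
(For reference, a minimal sketch of what this became in later numpy
versions: numpy.memmap grew a flush() method -- early numpy called it
sync() -- and dropping the last reference releases the mapping:)

import numpy as N
m = N.memmap('data.bin', dtype='float32', mode='w+', shape=(100,))
m[:] = 1.0
m.flush()   # push pending changes to disk, like numarray's flush()
del m       # no public close(); deleting the last reference unmaps the file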

Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] memmap close() and flush()

2007-01-31 Thread Sebastian Haase
On 1/31/07, Fernando Perez <[EMAIL PROTECTED]> wrote:
> On 1/31/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> > Fernando Perez wrote:
>
> > I don't know.  If you have other things pointing to it, should you
> > really close it?
>
> Well, it's like a file: you can close it because you've decided it's
> time to close it, and I think it's better that other references get an
> exception if they try to write to it when they shouldn't:
>
> In [2]: file1 = open('foo','w')
>
> In [3]: file1.write('Some text')
>
> In [4]: file2 = file1
>
> In [5]: file1.close()
>
> In [6]: file2.write("I'm not the original owner but I'll try to write anyway")
> ---
> exceptions.ValueErrorTraceback (most
> recent call last)
>
> /home/fperez/
>
> ValueError: I/O operation on closed file
>
>
> This seems like the right API to me.
>
> > At any rate you have access to the mmap file through the _mmap
> > attribute.  So, you can always do
> >
> > self._mmap.close()
>
> I don't like an API that encourages access to internal semi-private
> members, and it seems to me that closing the object is a reasonably
> top-level operation whose impmlementation details (in this case the
> existence and name of the _mmap member) should be encapsulated out.
>
> Cheers,
>
> f

After asking this question rather for academic reasons, I now
realize that I indeed must have a "somehow dangling" (i.e. not
deleted) reference problem.

I create my Medical-Image-File-class object on a 400MB file repeatedly
(throwing the result away) and after 5 calls I get:
  File "/jws30/haase/PrLinN/Priithon/Mrc.py", line 55, in __init__
self.m = N.memmap(path, mode=mode)
  File "/home/haase/qqq/lib/python/numpy/core/memmap.py", line 67, in __new__
mm = mmap.mmap(fid.fileno(), bytes, access=acc)
EnvironmentError: [Errno 12] Cannot allocate memory

Calling gc.collect() seems to clean things up and I can create 4-5
times afterwards, before running out of memory space again.

Note: My code is based on code that was tested and worked using numarray.


Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.scipy.org refers to sourceforge mailing list

2007-01-31 Thread Sebastian Haase
Hi,
On the numpy page:
http://numpy.scipy.org/
there is this text:
"""
Questions?  Ask them at the numpy-discussion@lists.sourceforge.net mailing list
"""

Is sourceforge.net  still used for numpy ?

Also I think it should point to the list sign-up-page since only
members are allowed to post to the list - is this correct ?

- Sebastian.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] usage of __del__ in python classes

2007-01-31 Thread Sebastian Haase
Hi,
Before I start I want to admit that I don't understand much about
this. I just saw that the memmap class defines __del__ and that I had
problems in the past when I added a 'def __del__' to a class of mine.
So here is a quote, I would like to know if this is "standard
knowledge" on this list or not.

# I found the info originally here: http://arctrix.com/nas/python/gc/
# Circular references which are garbage are detected when the
optional cycle detector is enabled (it's on by default), but can only
be cleaned up if there are no Python-level __del__() methods involved.
Refer to the documentation for the 'gc' module for more information
about how __del__() methods are handled by the cycle detector,
particularly the description of the garbage value. Notice: [warning]
Due to the precarious circumstances under which __del__() methods are
invoked, exceptions that occur during their execution are ignored, and
a warning is printed to sys.stderr instead. Also, when __del__() is
invoked in response to a module being deleted (e.g., when execution of
the program is done), other globals referenced by the __del__() method
may already have been deleted. For this reason, __del__() methods
should do the absolute minimum needed to maintain external invariants.
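
A minimal Python 2-era sketch of the problem described in that quote
(objects in a reference cycle that define __del__ land in gc.garbage
instead of being freed):

import gc

class Node(object):
    def __del__(self):
        pass   # the mere presence of __del__ blocks cycle collection here

a = Node(); b = Node()
a.other = b; b.other = a    # build a reference cycle
del a, b
gc.collect()
print(len(gc.garbage))      # -> 2: the cycle was found but not collected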


Cheers,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Sebastian Haase
Here is a small C program that we used more than a year ago to confirm
that Tiger is really doing a 64-bit malloc (on a G5).


#include <stdio.h>
#include <stdlib.h>

int main() {
    size_t n;
    void *p;
    double gb;
    for (gb = 10; gb > .3; gb -= .5) {
        n = 1024L * 1024L * 1024L * gb;   /* bytes to request */
        p = malloc(n);                    /* NULL means the allocation failed */
        printf("%12lu %4.1lfGb %p\n", (unsigned long)n, n/1024./1024./1024., p);
        free(p);
    }
    return 0;
}


Hope this helps anyone.
Sebastian


On 2/1/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Louis Wicker wrote:
>
> > Travis:
> >
> > yes it does.  Its the Woodcrest server chip
> > <http://www.intel.com/business/xeon/?cid=cim:ggl%7Cxeon_us_woodcrest%7Ck6913%7Cs>
> >  which
> > supports 32 and 64 bit operations.  For example the new Intel Fortran
> > compiler can grab more than 2 GB of memory (its a beta10 version).  I
> > think gcc 4.x can as well.
> >
> Nice.  I didn't know this.
>
> > However, Tiger (OS X 10.4.x) is not completely 64 bit compliant -
> > Leopard is supposed to be pretty darn close.
> >
> > Is there a numpy flag I could try for compilation
>
> It's entirely compiler and system dependent.  NumPy just uses the system
> malloc.  If you can compile it so that the system malloc supports 64-bit
> then O.K. (but you will probably run into trouble unless Python is also
> compiled as a 64-bit application).   From Robert's answer, I guess it is
> impossible under Tiger to compile with 64-bit support.
>
> -Travis
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Sebastian Haase
Travis,
Could you explain what a possible downside of this would be !?
It seems that if you don't need to refer to a specific "self" object,
then a classmethod is what it should be - is this not always right !?

-Sebastian



On 2/1/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Travis Oliphant wrote:
> > What is the attitude of this group about the ndarray growing some class
> > methods?
>
> Works for me.
>
> --
> Robert Kern
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array.sum() slower than expected along some array axes?

2007-02-04 Thread Sebastian Haase
On 2/3/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Stephen Simmons wrote:
>
> > The question though is whether all of the inner loop's overhead is
> > necessary.
> > My counterexample using numpy.dot() suggests there's considerable scope
> > for improvement, at least for certain common cases.
>
> Well, yes. You most likely have an ATLAS-accelerated dot(). The ATLAS put a 
> lot
> of work into making matrix products really fast. However, they did so at a 
> cost:
> different architectures use different code. That's not really something we can
> do in the core of numpy without making numpy as difficult to build as ATLAS 
> is.
>
Maybe this argument could be inverted:
maybe numpy could check if ATLAS is installed and automatically switch to the
numpy.dot(numpy.ones(a.shape[0], a.dtype), a)
variant that Stephen suggested.

Of course -- as I see it -- the numpy.ones(...)  part requires lots of
extra memory. Maybe there are other downsides ... !?
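
(For clarity, a minimal sketch of the two variants being compared -- the
dot form delegates the reduction to BLAS at the cost of a temporary ones
array:)

import numpy as np
a = np.random.rand(1000, 1000)
s1 = a.sum(axis=0)
s2 = np.dot(np.ones(a.shape[0], a.dtype), a)   # BLAS-backed alternative
assert np.allclose(s1, s2)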
-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] python eats memory like the cookie monster eats cookies

2007-02-04 Thread Sebastian Haase
On 2/4/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Niels Provos wrote:
> > The missing imports are
> >
> > import Numeric # for zeros and ones
> > from scipy.fftpack import fft2,ifft2
> >
> > Curiously, replacing Numeric.zeros with scipy.zeros makes the problem
> > go away.  Why?
>
> Possibly a bug in Numeric.
>
> --
> Robert Kern

Is there *any* support for old Numeric on this list !?

Maybe it should be officially stated that the one way to go is
numpy
and that problems with Numeric ( or numarray ) can only be noticed but
will likely not get fixed.

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] memmap on 64bit Linux for > 2 GB files

2007-02-06 Thread Sebastian Haase
Hi,
I finally tried to do the test, to memmap a large file
filesize: 2.8G

a memmap call gives this error:

{{{
>>> N.memmap('20050622-1648-Y_DEMO-1')
Traceback (most recent call last):
  File "", line 1, in ?
  File "/jws30/haase/PrLinN64/numpy/core/memmap.py", line 67, in __new__
mm = mmap.mmap(fid.fileno(), bytes, access=acc)
OverflowError: memory mapped size is too large (limited by C int)
}}}

I'm using a recent numpy on a 64bit Linux (debian etch, kernel:
2.6.16-2-em64t-p4-smp)
{{{
>>> N.__version__
'1.0.2.dev3509'
>>> N.int0

}}}

Is this supposed to work ?

Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] memmap on 64bit Linux for > 2 GB files

2007-02-06 Thread Sebastian Haase
Of course !
Now I remember why I didn't test it yet...

Thanks,
-Sebastian

On 2/6/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
> > Hi,
> > I finally tried to do the test, to memmap a large file
> > filesize: 2.8G
> >
> > a memmap call gives this error:
> >
> > {{{
> >>>> N.memmap('20050622-1648-Y_DEMO-1')
> > Traceback (most recent call last):
> >   File "", line 1, in ?
> >   File "/jws30/haase/PrLinN64/numpy/core/memmap.py", line 67, in __new__
> > mm = mmap.mmap(fid.fileno(), bytes, access=acc)
> > OverflowError: memory mapped size is too large (limited by C int)
> > }}}
> >
> > I'm using a recent numpy on a 64bit Linux (debian etch, kernel:
> > 2.6.16-2-em64t-p4-smp)
> > {{{
> >>>> N.__version__
> > '1.0.2.dev3509'
> >>>> N.int0
> > 
> > }}}
> >
> > Is this supposed to work ?
>
> You need Python 2.5 for it to work.
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-15 Thread Sebastian Haase
On 2/15/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 2/15/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > I built a new computer: Core 2 Duo 32-bit Debian etch with numpy
> > 1.0.2.dev3546. The repeatability test still fails. In order to make my
> > calculations repeatable I'll have to remove ATLAS. That really slows
> > things down.
>
> Hey, I have no problem with atlas-base and atlas-sse! On my old debian
> box all versions of atlas fail the repeatability test.

You mean on the Core 2 Duo 32-bit only atlas-sse2 causes troubles ?
How does the speed compare, atlas-sse2 vs. atlas-sse (ignoring the
repeatability problem)?
-Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-16 Thread Sebastian Haase
On 2/16/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 2/15/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > On 2/15/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > > On 2/15/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> > > > I built a new computer: Core 2 Duo 32-bit Debian etch with numpy
> > > > 1.0.2.dev3546. The repeatability test still fails. In order to make my
> > > > calculations repeatable I'll have to remove ATLAS. That really slows
> > > > things down.
> > >
> > > Hey, I have no problem with atlas-base and atlas-sse! On my old debian
> > > box all versions of atlas fail the repeatability test.
> >
> > You mean on the Core 2 Dua 32-bit only atlas-sse2 causes troubles ?
> > How does the speed compare atlas-sse2 vs. atlas-see (ignoring the
> > repeatablity problem)?
>
> Yes. On my old computer (P4) all three (atlas-base, -sse, -sse2) cause
> problems. On my new computer only sse2 causes a problem.
>
> I only want to know about the speed difference (sse, sse2) if the
> difference is small.

I was just wondering what generally the speed improvement from sse to sse2 is ?
Any tentative number would be fine...
-S.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyCon 2007

2007-02-28 Thread Sebastian Haase
On 2/28/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
>
> --- Travis Oliphant <[EMAIL PROTECTED]> wrote:
>
> >
> > I took the opportunity to go to PyCon this year and
> > met several people
> > there.  I had a really good time although I would
> > have liked to stay
> > longer.   If you want to see the slides for my talk
> > they are here:
> >
> >
> http://us.pycon.org/common/talkdata/PyCon2007/045/PythonTalk.pdf
>
Travis,
very nice overview !

Could the file be renamed to
NumpyTalk.pdf
?

Just a thought...

-Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how to round to int (I mean 5.2 to 5 not 5.0)

2007-03-06 Thread Sebastian Haase
Hi,
why does
numpy.round(a)
return a float ?

I need something that I can use as indices for another array. Do I
have to (implicitly) create a temporary array  and use:
N.round(a).astype(N.int)  ?

Or is there a simple, clean and easy way to just round
[1.1 4.8]
into
[1 5]

Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to round to int (I mean 5.2 to 5 not 5.0)

2007-03-06 Thread Sebastian Haase
On 3/6/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:
> On Tue, 6 Mar 2007, Sebastian Haase apparently wrote:
> > why does
> > numpy.round(a)
> > return a float ?
>
> Because it works with a copy of a.
>
> >>> help(N.round)
> Help on function round_ in module numpy.core.fromnumeric:
> round_(a, decimals=0, out=None)
>Returns reference to result. Copies a and rounds to 'decimals' places.
>
>Keyword arguments:
>decimals -- number of decimal places to round to (default 0).
>out -- existing array to use for output (default copy of a).
>
>Returns:
>Reference to out, where None specifies a copy of the original array a.
> >>> a = N.random.random((10,))*100
> >>> a
> array([ 45.01971148,   8.32961759,  39.75272544,  79.76986159,
>23.66331127,  24.25584246,  38.17354106,  16.57977389,
>50.63676986,  83.15113716])
> >>> b = N.empty((10,),dtype='int')
> >>> N.round(a,out=b)
> array([45,  8, 40, 80, 24, 24, 38, 17, 51, 83])
> >>> b
> array([45,  8, 40, 80, 24, 24, 38, 17, 51, 83])
>
>
> Cheers,
> Alan Isaac
>

Would it be a useful / possible enhancement
to add an additional
dtype argument to round !?

So that if out is not given, but dtype is given (e.g. as N.int)
the "copy-array" is created with that dtype ?

Just a thought -- otherwise I might add it to my personal collection
of "useful" functions.

Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-discussion Digest, Vol 6, Issue 18

2007-03-09 Thread Sebastian Haase
On 3/9/07, James A. Bednar <[EMAIL PROTECTED]> wrote:
> |  From: Robert Kern <[EMAIL PROTECTED]>
> |  Subject: Re: [Numpy-discussion] in place random generation
> |
> |  Daniel Mahler wrote:
> |  > On 3/8/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
> |
> |  >> Robert thought this might relate to Travis' changes adding
> |  >> broadcasting to the random number generator. It does seem
> |  >> certain that generating small arrays of random numbers has a
> |  >> very high overhead.
> |  >
> |  > Does that mean someone is working on fixing this?
> |
> |  It's not on the top of my list, no.
>
> I just wanted to put in a vote saying that generating a large quantity
> of small arrays of random numbers is quite important in my field, and
> is something that is definitely slowing us down right now.
>
> We often simulate neural networks whose many, many small weight
> matrices need to be initialized with random numbers, and we are seeing
> quite slow startup times (on the order of minutes, even though
> reloading a pickled snapshot of the same simulation once it has been
> initialized takes only a few seconds).
>
> The quality of these particular random numbers doesn't matter very
> much for us, so we are looking for some cheaper way to fill a bunch of
> small matrices with at least passably random values.  But it would of
> course be better if the regular high-quality random number support in
> Numpy were speedy under these conditions...
>
> Jim
>
Hey Jim,

Could you not create all the many arrays to use "one large chunk" of
contiguous memory ?
like: 1) create a large 1D array
2) create all small arrays in a for loop using
numpy.ndarray(buffer=largeArray, offset=..., shape=..., dtype=...)  ---
you increment the offset appropriately during the loop
3) then you can reset all small arrays to new random numbers with one
call that resets the large array (they all have the same statistics
(mean, stddev, type), right ?) -- see the sketch below.
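
(A minimal sketch of that idea; all names are illustrative. Every small
matrix is a view into one shared buffer, so a single call refreshes
them all:)

import numpy
n_mats, shape = 1000, (5, 5)
block = shape[0] * shape[1]
big = numpy.random.standard_normal(n_mats * block)
mats = [numpy.ndarray(shape=shape, dtype=big.dtype, buffer=big,
                      offset=i * block * big.itemsize)
        for i in range(n_mats)]
big[:] = numpy.random.standard_normal(big.size)  # re-randomize every matrix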


Maybe this helps,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Which dtype are supported by numexpr ?

2007-03-09 Thread Sebastian Haase
Hi !
This is really only one question:

Which dtypes are supported by numexpr ?

We are very interested in numexpr !
Where is the latest / most-up-to-date documentation ?

Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] email notification from the TRAC systems

2007-03-10 Thread Sebastian Haase
Hi,
A friend of mine and I were surprised that we did not get emailed
when someone commented on bug reports we made on the numpy or scipy
TRAC bug tracker system.
(I started at some point adding an explicit CC entry for myself  :-(  )

I just googled for this, and I found that in the
trac.ini file there should be this option:

[notification]
always_notify_owner = true

Why is this (or is it !?) not used in the numpy/scipy TRAC system ?

I think - since one has to sign up using an email address - that it's
natural to expect that one would get comments copied by email.


-Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Teaching Array Languages

2007-03-10 Thread Sebastian Haase
Sorry for being so dense - what do the numbers mean ?
S.H.


On 3/10/07, Steven H. Rogers <[EMAIL PROTECTED]> wrote:
> Thanks to all who responded to my question about teaching array
> programming.  I've compiled a brief summary of the responses.
>
> NumPy
> =
> * Subject
>   - Physics/Astronomy 3
>   - Biotechnology 1
>   - Engineering 2
>   - Microeconomics 1
> * Level
>   - College/University 7
>
> J
> =
> * Subject
>   - Math 1
>   - Engineering 1
>   - Technical English
> * Level
>   - College/University 3
>   -
>
> Mathematica
> ===
> * Subject
>   - Math 1
> * Level
>   - College/University 1
>
> Matlab
> ==
> * Subject
>   - Engineering 1
>
> IDL
> ===
> * Subject
>   - Physics/Astronomy 2
> * Level
>   - College/University 2
>
> Discussion
> ==
> While this informal survey is neither precise nor comprehensive, I think
> it is interesting.  I queried the NumPy/SciPy and J mailing lists
> because those are the ones that I follow.  A more rigorous and broadly
> distributed survey may be in order.
>
> Regards,
> Steve
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with installing scipy

2007-03-14 Thread Sebastian Haase
On 3/2/07, nevin <[EMAIL PROTECTED]> wrote:
> I am trying to use scipy optimize module but I am having problem when
> I try to import it:
>
> >>> import numpy
> >>> import scipy
> >>> from scipy import optimize
> Traceback (most recent call last):
> File "", line 1, in ?
> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/
> python2.4/site-packages/scipy/optimize/__init__.py", line 7, in ?
>from optimize import *
> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/
> python2.4/site-packages/scipy/optimize/optimize.py", line 25, in ?
>import linesearch
> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/
> python2.4/site-packages/scipy/optimize/linesearch.py", line 3, in ?
>import minpack2
> ImportError: Inappropriate file type for dynamic loading
>
> How can I fix this problem? My system is Mac OSX Tiger- Pentium.
> Thanks.

Hi Nevin,

I got the same error message -- your scipy package is for non-Intel (PPC) !

You either have to recompile scipy yourself -- which requires a
working fortran compiler -- or hope that someone knows where to get a
pre-compiled version for Intel.
-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] compiling numarrray on Cygwin

2007-03-14 Thread Sebastian Haase
Hi Duhaime,

do you have a *very* good reason to compile numarray
instead of the newer and better numpy?
Essentially everyone here has switched either from numarray or from
numeric to numpy.
You won't get much help with numarray and problems might not ever get fixed.

However, people here are very helpful - once you go with the new
numpy - in converting any old code you might have. Essentially,
numarray and numpy are (mostly, 98% ?) Python-code compatible anyway !!!

Do the switch and you will get help.

Regards,
Sebastian


On 3/6/07, Duhaime Johanne <[EMAIL PROTECTED]> wrote:
>
> I have a problem installing numarray on cygwin. I have seen emails on that
> topic but I finally do not have a solution.
>
> Below is the error message for numarray-1.5.2 on python 2.4.3.  I am
> using numarray instead of numpy because I need the module to be
> compatible with a software package: MAT (Model-based Analysis of Tiling-array).
>
>
> I have seen some solutions but I do not know exactly how to apply them since I
> am not familiar with python and cygwin. Someone suggested disabling the
> package that contains feclearexcept and fetestexcept. I do not know if
> this is important for my software but I am willing to try. The problem
> is that I do not know how to disable this package...
>
> The same person mentioned that they would put a "solution" in the CVS
> repository. If I am right there is no CVS repository for numarray.
> Should I look at the Python CVS? In fact I did that, but have not seen
> any comment related to my problem.
>
> As you can see my questions are very basic. Sorry about that. But I
> would really appreciate help to solve that problem. Thank you in
> advance for any help.
>
>
> -Compile:
>
> Building many extensions, but then with 'numarray.libnumarray' I have an
> error.
>(...)
>
> building 'numarray._ufuncComplex32' extension
> gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -IInclude/numarray -I/usr/include/python2.4 -c Src/_ufuncComplex32module.c -o build/temp.cygwin-1.5.24-i686-2.4/Src/_ufuncComplex32module.o
> gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.24-i686-2.4/Src/_ufuncComplex32module.o -L/usr/lib/python2.4/config -lpython2.4 -o build/lib.cygwin-1.5.24-i686-2.4/numarray/_ufuncComplex32.dll -lm -L/lib -lm -lc -lgcc -L/lib/mingw -lmingwex
> building 'numarray._ufuncComplex64' extension
> gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -IInclude/numarray -I/usr/include/python2.4 -c Src/_ufuncComplex64module.c -o build/temp.cygwin-1.5.24-i686-2.4/Src/_ufuncComplex64module.o
> gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.24-i686-2.4/Src/_ufuncComplex64module.o -L/usr/lib/python2.4/config -lpython2.4 -o build/lib.cygwin-1.5.24-i686-2.4/numarray/_ufuncComplex64.dll -lm -L/lib -lm -lc -lgcc -L/lib/mingw -lmingwex
>
> building 'numarray.libnumarray' extension
> gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -IInclude/numarray -I/usr/include/python2.4 -c Src/libnumarraymodule.c -o build/temp.cygwin-1.5.24-i686-2.4/Src/libnumarraymodule.o
> gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.24-i686-2.4/Src/libnumarraymodule.o -L/usr/lib/python2.4/config -lpython2.4 -o build/lib.cygwin-1.5.24-i686-2.4/numarray/libnumarray.dll -lm -L/lib -lm -lc -lgcc -L/lib/mingw -lmingwex
>
> /lib/mingw/libmingwex.a(feclearexcept.o):feclearexcept.c:(.text+0x21): undefined reference to `___cpu_features'
> /lib/mingw/libmingwex.a(fetestexcept.o):fetestexcept.c:(.text+0x7): undefined reference to `___cpu_features'
> collect2: ld returned 1 exit status
> error: command 'gcc' failed with exit status 1
>
>
>
>
> [EMAIL PROTECTED]
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Putting some random back into the top-level?

2007-03-14 Thread Sebastian Haase
Hi,
Please remind me what's wrong with pylab's
rand and randn !
I just learned about their existence recently and thought
they seem quite handy and should  go directly into (the top-level of) numpy.
Functions that have the same name and do the same thing don't conflict
either ;-)
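
(For context, a minimal sketch of the two calling conventions at issue --
both live in numpy.random:)

import numpy as np
np.random.random_sample((3, 2))   # shape passed as a tuple, the numpy norm
np.random.rand(3, 2)              # MATLAB-style: dimensions as separate args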

-Sebastian



On 3/12/07, Rob Hetland <[EMAIL PROTECTED]> wrote:
>
> I, for one, would also like this.  Perhaps it should not be called
> 'rand', however, as that conflicts with the pylab rand.  (numpy load
> and pylab load also conflict -- probably the only reason I ever use
> import pylab as pl in my scripts).  'random' is already taken by the
> whole package... What does this leave that is still sensible?
>
> -r
>
> On Mar 9, 2007, at 2:01 PM, Bill Baxter wrote:
>
> > Has enough time passed with no top level random function that we can
> > now put one back in?
> > If I recall, the former top level rand() was basically removed because
> > it didn't adhere to the "shapes are always tuples" convention.
> >
> > Has enough time passed now that we can put something like it back in
> > the top level, in tuple-taking form?
> >
> > I think this is a function people use pretty frequently when writing
> > quick tests.
> > And numpy.random.random_sample seems a rather long and not so obvious
> > name for something that is used relatively frequently.
> >
> > --bb
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Nonblocking Plots with Matplotlib

2007-03-14 Thread Sebastian Haase
Hey Bill,
what are you using to communicate with the server ?
May I recommend looking at Pyro !
(Python Remote Objects)
It would allow you to get your proxy objects.
It also handles exceptions super cleanly and easily.
I have used it for many years ! It's very stable !

(If you run into problems, take a look at the "one-way" calls to
ensure that functions that would block won't wait for the function
to return.)

Just a thought  --
-Sebastian Haase


On 3/13/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
> Howdy Folks,
>
> I was missing the good ole days of using Matlab back at the Uni when I
> could debug my code, stop at breakpoints and plot various data without
> fear of blocking the interpreter process.
>
> Using "ipython -pylab" is what has been suggested to me in the past,
> but the problem is I don't do my debugging from ipython.  I have a
> very nice IDE that works very well, and it has a lovely interactive
> debugging prompt that I can use to probe my code when stopped at a
> breakpoint.  It's great except I can't really use matplotlib for
> debugging there because it causes things to freeze up.
>
> So I've come up with a decent (though not perfect) solution for
> quickie interactive plots which is to run matplotlib in a separate
> process.  I call the result 'ezplot'.  The first alpha version of
> this is now available at the Cheeseshop.  (I made an egg too, so if
> you have setuptools you can do "easy_install ezplot".)
>
> The basic usage is like so:
>
>  In [1]: import ezplot
>  In [2]: p = ezplot.Plotter()
>  In [3]: p.plot([1,2,3],[1,4,9],marker='o')
>  Connecting to server... waiting...
>  connected to plotserver 0.1.0a1 on http://localhost:8397
>  Out[3]: True
>  In [4]: from numpy import *
>  In [5]: x = linspace(-5,5,20)
>  In [13]: p.clf()
>  Out[13]: True
>  In [14]: p.plot(x, x*x*log(x*x+0.01))
>
> (Imagine lovely plots popping up on your screen as these commands are typed.)
>
> The only return values you get back are True (success...probably) or
> False (failure...for sure).  So no fancy plot object manipulation is
> possible.  But you can do basic plots no problem.
>
> The nice part is that this (unlike ipython's built-in -pylab threading
> mojo) should work just as well from wherever you're using python.
> Whether it's ipython (no -pylab) or Idle, or a plain MS-DOS console,
> or WingIDE's debug probe, or SPE, or a PyCrust shell or whatever.  It
> doesn't matter because all the client is doing is packing up data and
> shipping over a socket.  All the GUI plotting mojo happens in a
> completely separate process.
>
> There are plenty of ways this could be made better, but for me, for
> now, this probably does pretty much all I need, so it's back to Real
> Work.  But if anyone is interested in making improvements to this, let
> me know.
>
> Here's a short list of things that could be improved:
> * Right now I assume use of the wxAGG backend for matplotlib.  Don't
> know how much work it would be to support other back ends (or how to
> go about it, really).   wxAGG is what I always use.
> * Returning more error/exception info from the server would be nice
> * Returning full fledged proxy plot objects would be nice too, but I
> suspect that's a huge effort
> * SOAP may be better for this than xmlrpclib but I just couldn't get
> it to work (SOAPpy + Twisted).
> * A little more safety would be nice.  Anyone know how to make a
> Twisted xmlrpc server not accept connections from anywhere except
> localhost?
> * There's a little glitch in that the spawned plot server dies with
> the parent that created it.  Maybe there's a flag to subprocess.Popen
> to fix that?
> * Sometimes when you click on "Exit Server", if there are plot windows
> open it hangs while shutting down.
>
>
> Only tested on Win32 but there's nothing much platform specific in there.
>
> Give it a try and let me know what you think!
>
> --bb
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Putting some random back into the top-level?

2007-03-14 Thread Sebastian Haase
On 3/14/07, Timothy Hochberg <[EMAIL PROTECTED]> wrote:
>
>
> On 3/14/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> > Hi,
> > Please remind me what's wrong with pylab's
> > rand and randn !
> > I just learned about their existence recently and thought
> > they seem quite handy and should  go directly into (the top-level of)
> numpy.
> > Functions that have the same name and do the same thing don't conflict
> > either ;-)
>
> I don't know what the problem, if any, is with rand and randn, but I can
> tell you what the problem with stuffing stuff in the main namespace is: it's
> allready much too crowded, which makes it difficult to find functions when
> you need them. Have you tried dir(numpy) recently?

Hey Tim,
yes, I have done this many times -- just to scare myself ;-)
As I see it, most of them are "historical problems" -- and we will
likely be stuck with them forever -- since the 1.0 commitment
apparently doesn't even allow making numpy.resize and array.resize
fill in the same way [[ one adds 0s, the other repeats the array ]].
 (Especially I'm thinking of hanning and hamming and other things I
understand even less ...)

The only argument here was that one or two [ :-) ] random functions
[ how about rand() and randn() ? ]
would be nice to have "as a shortcut".

Yes, I have some modules myself, containing a bunch of home-made
things that I call "useful".
I understand the argument here was to get the "best possible" common ground.

I don't have (very) strong feelings about this.
-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] rant against from numpy import * / from pylab import *

2007-03-15 Thread Sebastian Haase
Hi!
I use the wxPython PyShell.
I like especially the feature that when typing a module and then the
dot "." I get a popup list of all available functions (names) inside
that module.

Secondly, I think it really makes code clearer when one can see where
a function comes from.

I have a default
import numpy as N
executed before my shell even starts.
In fact I have a bunch of my "standard" modules imported under such
short names.

This -- I think -- is a good compromise between the usual "extra
typing" and "unreadable" arguments.

a = sin(b) * arange(10,50, .1) * cos(d)
vs.
a = N.sin(b) * N.arange(10,50, .1) * N.cos(d)

I would like to hear some comments by others.


On a different note: I just started using pylab, so I did added an
automatic  "from matplotlib import pylab as P" -- but now P contains
everything that I already have in N.  It makes it really hard to
*find* (as in *see* n the popup-list) the pylab-only functions. --
what can I do about this ?


Thanks,
Sebastian
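
(One quick way to see only the pylab-specific names is a set difference
on the two namespaces -- a small sketch, assuming both packages import
cleanly:

    import numpy as N
    from matplotlib import pylab as P

    # names pylab exposes that numpy does not already provide
    pylab_only = sorted(set(dir(P)) - set(dir(N)))
    print(pylab_only)

This won't change what the popup list shows, but it at least makes the
pylab-only additions easy to inspect.)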
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-22 Thread Sebastian Haase
On 3/22/07, Stefan van der Walt <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 22, 2007 at 08:13:22PM -0400, Brian Blais wrote:
> > Hello,
> >
> > I'd like to concatenate a couple of 1D arrays to make it a 2D array, with 
> > two columns
> > (one for each of the original 1D arrays).  I thought this would work:
> >
> >
> > In [47]:a=arange(0,10,1)
> >
> > In [48]:a
> > Out[48]:array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
> >
> > In [49]:b=arange(-10,0,1)
> >
> > In [51]:b
> > Out[51]:array([-10,  -9,  -8,  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> >
> > In [54]:concatenate((a,b))
> > Out[54]:
> > array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
> >  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> >
> > In [55]:concatenate((a,b),axis=1)
> > Out[55]:
> > array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
> >  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> >
> >
> > but it never expands the dimensions.  Do I have to do this...
> >
> > In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
> > Out[65]:
> > array([[  0, -10],
> > [  1,  -9],
> > [  2,  -8],
> > [  3,  -7],
> > [  4,  -6],
> > [  5,  -5],
> > [  6,  -4],
> > [  7,  -3],
> > [  8,  -2],
> > [  9,  -1]])
> >
> >
> > ?
> >
> > I thought there would be an easier way.  Did I overlook something?
>
> How about
>
> N.vstack((a,b)).T
>
Also worth mentioning here is the use of
newaxis.
As in
a[:,newaxis]

However I never got a "good feel" for how to use it, so I can't
complete the code you would need.

-Sebastian Haase
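
For completeness, the code being referred to can be sketched like this
(equivalent in result to N.vstack((a,b)).T):

    import numpy as N

    a = N.arange(0, 10)
    b = N.arange(-10, 0)

    # newaxis turns each (10,) vector into a (10, 1) column,
    # which can then be concatenated along axis=1
    c = N.concatenate((a[:, N.newaxis], b[:, N.newaxis]), axis=1)
    print(c.shape)    # (10, 2)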
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-23 Thread Sebastian Haase
On 3/22/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
> On 3/23/07, Eric Firing <[EMAIL PROTECTED]> wrote:
> > Sebastian Haase wrote:
> > > On 3/22/07, Stefan van der Walt <[EMAIL PROTECTED]> wrote:
> > >> On Thu, Mar 22, 2007 at 08:13:22PM -0400, Brian Blais wrote:
> > >>> Hello,
> > >>>
> > >>> I'd like to concatenate a couple of 1D arrays to make it a 2D array, 
> > >>> with two columns
> > >>> (one for each of the original 1D arrays).  I thought this would work:
> > >>>
> > >>>
> > >>> In [47]:a=arange(0,10,1)
> > >>>
> > >>> In [48]:a
> > >>> Out[48]:array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
> > >>>
> > >>> In [49]:b=arange(-10,0,1)
> > >>>
> > >>> In [51]:b
> > >>> Out[51]:array([-10,  -9,  -8,  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> > >>>
> > >>> In [54]:concatenate((a,b))
> > >>> Out[54]:
> > >>> array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
> > >>>  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> > >>>
> > >>> In [55]:concatenate((a,b),axis=1)
> > >>> Out[55]:
> > >>> array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
> > >>>  -7,  -6,  -5,  -4,  -3,  -2,  -1])
> > >>>
> > >>>
> > >>> but it never expands the dimensions.  Do I have to do this...
> > >>>
> > >>> In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
> > >>> Out[65]:
> > >>> array([[  0, -10],
> > >>> [  1,  -9],
> > >>> [  2,  -8],
> > >>> [  3,  -7],
> > >>> [  4,  -6],
> > >>> [  5,  -5],
> > >>> [  6,  -4],
> > >>> [  7,  -3],
> > >>> [  8,  -2],
> > >>> [  9,  -1]])
> > >>>
> > >>>
> > >>> ?
> > >>>
> > >>> I thought there would be an easier way.  Did I overlook something?
> > >> How about
> > >>
> > >> N.vstack((a,b)).T
> > >>
> > > Also mentioned here should be the use of
> > > newaxis.
> > > As in
> > > a[:,newaxis]
> > >
> > > However I never got a "good feel" for how to use it, so I can't
> > > complete the code you would need.
> >
> > n [9]:c = N.concatenate((a[:,N.newaxis], b[:,N.newaxis]), axis=1)
> >
> > In [10]:c
> > Out[10]:
> > array([[  0, -10],
> > [  1,  -9],
> > [  2,  -8],
> > [  3,  -7],
> > [  4,  -6],
> > [  5,  -5],
> > [  6,  -4],
> > [  7,  -3],
> > [  8,  -2],
> > [  9,  -1]])
> >
>
> Then of course, there's r_ and c_:
>
> c = numpy.c_[a,b]
>
> c = numpy.r_[a[None],b[None]].T
>
> --bb
So,
None is the same as newaxis -- right?

But what is a[None] vs. a[:,N.newaxis] ?

-Sebastian
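
A small sketch of the difference (N.newaxis is literally None; what
matters is *where* it appears in the index expression):

    import numpy as N

    print(N.newaxis is None)          # True

    a = N.arange(10)                  # shape (10,)
    print(a[None].shape)              # (1, 10) -- new axis in front
    print(a[:, N.newaxis].shape)      # (10, 1) -- new axis at the end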
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] swig svn commits

2007-03-23 Thread Sebastian Haase
Hi,
this is regarding the svn commit by wfspotz.

Author: [EMAIL PROTECTED]
Date: 2007-03-23 13:04:37 -0500 (Fri, 23 Mar 2007)
New Revision: 3593

Modified:
  trunk/numpy/doc/swig/numpy.i
Log:
Added typecheck typemaps to INPLACE_ARRAY typemap suites

Hi wfspotz,
I was just wondering if you consider checking for "native byte order"
as part of your inplace-typemap.
I found that to be a problem in my SWIG type maps
that I hi-jacked / boiled down  from the older numpy swig files.
(They are mostly for 3D image data, only for a small number of types)

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array of Arrays

2007-03-23 Thread Sebastian Haase
On 3/23/07, Alexander Michael <[EMAIL PROTECTED]> wrote:
> On 3/23/07, Nadav Horesh <[EMAIL PROTECTED]> wrote:
> > How about
> >
> >  a = empty((5,7,4))
> >  c = a[...,-1]
>
> Solely because I want to use the array with code that assumes it is
> working with two-dimensional arrays but yet only performs operations
> on the "outer" two-dimensional array that would be consistent with an
> "inner" array type (i.e. scalar assignment, element-wise
> multiplication, etc.). I own all the code, so perhaps I can replace
> a[mask,-1] with a[mask,-1,...] and such. Hmm. Not bad reminder,
> thanks.

Hold on -- aren't the "..." at the *end* always implicit?
If you have
a.shape = (6,5,4,3)
then a[3,2] is the same as a[3,2,:,:], which is the same as a[3,2,...]

only if you wanted a[...,3,2]   you would have to change your code !?
Am I confused !?


-Sebastian
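
A quick check confirms the trailing-slices intuition:

    import numpy as N

    a = N.zeros((6, 5, 4, 3))
    print(a[3, 2].shape)              # (4, 3)
    print(a[3, 2, :, :].shape)        # (4, 3) -- same thing
    print(a[3, 2, ...].shape)         # (4, 3) -- same thing
    print(a[..., 3, 2].shape)         # (6, 5) -- ellipsis in front instead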
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] swig svn commits

2007-03-24 Thread Sebastian Haase
On 3/24/07, Bill Spotz <[EMAIL PROTECTED]> wrote:
> No, I don't consider native byte order.  What was your solution?
>
I think there is only one solution:
If somebody requests INPLACE handling but provides data of the
wrong byte order, I have to throw an exception -- in SWIG terms, the
typecheck returns False (if I remember right).

For me this is just a "last line of defence" -- meaning that I have
most of my functions wrapped by another level of Python "convenience"
functions, and those take care of type and byte-order issues
beforehand as needed.

-Sebastian
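
A minimal sketch of such a Python-side convenience check (a hypothetical
helper, not the actual code from the type maps mentioned above):

    import numpy as np

    def prepare_inplace(a, dtype):
        # Ensure native byte order, the requested dtype and contiguity
        # before an INPLACE call.  Note: any conversion makes a copy, so
        # the wrapped C function would then modify the copy, not the
        # caller's original array.
        return np.ascontiguousarray(a, dtype=np.dtype(dtype))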


> The typecheck typemap is so that swig can perform isolated type
> checking when it is creating dispatch code for overloaded functions.
> The typechecks I added for INPLACE arrays require that the argument
> be a numpy array and that PyArray_EquivTypenums() return true for the
> provided and requested types.
>
> On Mar 23, 2007, at 3:03 PM, Sebastian Haase wrote:
>
> > Hi,
> > this is regarding the svn commit by wfspotz.
> >
> > Author: [EMAIL PROTECTED]
> > Date: 2007-03-23 13:04:37 -0500 (Fri, 23 Mar 2007)
> > New Revision: 3593
> >
> > Modified:
> >   trunk/numpy/doc/swig/numpy.i
> > Log:
> > Added typecheck typemaps to INPLACE_ARRAY typemap suites
> >
> > Hi wfspotz,
> > I was just wondering if you consider checking for "native byte order"
> > as part of your inplace-typemap.
> > I found that to be a problem in my SWIG type maps
> > that I hi-jacked / boiled down  from the older numpy swig files.
> > (They are mostly for 3D image data, only for a small number of types)
> >
> > -Sebastian
> > ___
> > Numpy-discussion mailing list
> > Numpy-discussion@scipy.org
> > http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
> ** Bill Spotz  **
> ** Sandia National Laboratories  Voice: (505)845-0170  **
> ** P.O. Box 5800 Fax:   (505)284-5451  **
> ** Albuquerque, NM 87185-0370Email: [EMAIL PROTECTED] **
>
>
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New Operators in Python

2007-03-24 Thread Sebastian Haase
On 3/24/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
>
> On 3/24/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> > Every so often the idea of new operators comes up because of the need to
> > do both "matrix-multiplication" and element-by-element multiplication.
> >
> > I think this is one area where the current Python approach is not as
> > nice because we have a limited set of operators to work with.
> >
> > One thing I wonder is if we are being vocal enough with the Python 3000
> > crowd to try and get additional operators into the language itself.
> >
> > What if we could get a few new operators into the language to help us.
> > If we don't ask for it, it certainly won't happen.
> > My experience is that the difficulty of using the '*' operator for both
> > matrix multiplication and element-by-element multiplication depending on
> > the class of the object is not especially robust.  It makes it harder to
> > write generic code, and we still haven't gotten everything completely
> > right.
> >
> > It is somewhat workable as it stands, but I think it would be nicer if
> > we could have some "meta" operator that allowed an alternative
> > definition of major operators.   Something like @*  for example (just
> > picking a character that is already used for decorators).
>
> Yes indeed, this is an old complaint. Just having an infix operator would be
> an improvement:
>
> A dot B dot C
>
> Not that I am suggesting dot in this regard ;) In particular, it wouldn't
> parse without spaces. What about division? Matlab has both / and \ for left
> and right matrix division and something like that could call solve instead
> of inverse, leading to some efficiencies. We also have both dot and
> tensordot, which raises the problem of interpretation when ndim > 2.
>
> Chuck
>
I understand the convenience of more infix operators. And I sometimes
think one should just be able to define new ones at will ...

On the other hand, I'm now playing the devil's advocate:
A "math-specific" language like Matlab obviously has an overwhelming
need for a second set of matrix/array operators.  However, a language
as broadly used as Python might just be better off having a
simple, concise and limited set of infix operators.  I assume that
this is the official argument.

I got especially "worried" when being reminded of the "\"
right-to-left division operator. (As I said, it is very useful to have
in Matlab, and I sometimes wish we could add things like this.)
It is just important to keep the "con" argument clearly in mind.
I hope this helps the discussion.

- Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] matrix indexing question

2007-03-26 Thread Sebastian Haase
On 3/26/07, Alan G Isaac <[EMAIL PROTECTED]> wrote:
> > Alan G Isaac schrieb:
> >> What feels wrong: iterating over a container does not give
> >> access to the contained objects.  This is not Pythonic.
>
> On Mon, 26 Mar 2007, Sven Schreiber apparently wrote:
> > If you iterate over the rows of the matrix, it feels
> > natural to me to get the row vectors
>
> Natural in what way?
> Again, I am raising the question of what
> would be expected from someone familiar with Python.
> Abstractly, what do you expect to get when you iterate
> over a container?  Seriously.
>
>
> > But I admit I'm a 2d fan so much so that I didn't even
> > know that using a single index is possible with a matrix.
>
> Exactly.  When you want submatrices, you expect to index for
> them.  EXACTLY!!
>
If I may chime in...
I think Sven's argument is on the side saying:
a "matrix" is an object that you expect a certain (mathematical!)
behavior from.
If some object behaves intuitively right -- that's ultimately pythonic!
The clash is NOT to see a matrix "just as another container".
Instead, a matrix is a mathematical object that has rows and columns.
It is used in a field (lin-alg) where every vector is either a row or
a column vector -- apparently that's a big thing ;-)
The whole reason to add a special matrix class to numpy in the first
place is to provide a better degree of convenience to lin-alg-related
applications.  I would argue that it was just not consistently
considered that this should also come with "a column of a matrix is
something other than a row" -- (1,n) vs. (n,1), and not (n,).

more notes/points:
a) I have never heard of m.A1 -- what is it ?
b) I don't think that if m[1] returned a (rank-2) matrix,
m[1].A could return a (rank-1) array ...
c) I'm curious whether there is a unique way to extend the matrix class
into 3D or ND.

-Sebastian
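
On point a): m.A1 is the matrix flattened to a rank-1 ndarray, while m.A
is the plain rank-2 ndarray -- a short sketch:

    import numpy as N

    m = N.matrix([[1, 2], [3, 4]])
    print(m.A.shape)      # (2, 2) -- plain ndarray, still rank 2
    print(m.A1.shape)     # (4,)   -- flattened to rank 1
    print(m[0].shape)     # (1, 2) -- a matrix row stays a matrix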
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Best way to run python parallel

2007-03-29 Thread Sebastian Haase
Hi,
What is the general feeling towards BSP on this list !?
I remember Konrad Hinsen advertising it at the SciPy workshop '03.
It is supposed to be much simpler to use than MPI, yet still powerful
and flexible enough for almost all applications.
It is part of Konrad's ScientificPython ( != SciPy )

Some links are here:
http://www.bsp-worldwide.org/
http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel

Evaluating Scientific Python/BSP on selected parallel computers
http://ove.nipen.no/diplom/

http://dirac.cnrs-orleans.fr/plone/software/scientificpython/

- Sebastian Haase



On 3/29/07, Peter Skomoroch <[EMAIL PROTECTED]> wrote:
>
>
> If you want to use PyMPI or PyPar, I'm writing a series of tutorials on how to
> get them running on Amazon EC2,
>
> http://www.datawrangling.com/on-demand-mpi-cluster-with-python-and-ec2-part-1-of-3.html
>
>
> I'm using PyMPI on a 20 node EC2 cluster and everything seems groovy, but I'm
> relatively new to MPI, so I have probably overlooked some easier solutions.
>
> Any feedback on the writeups from Python gurus would be appreciated.
>
> -Pete
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ticket #450

2007-03-31 Thread Sebastian Haase
Hi!
Could someone comment on ticket #450
[Numpy-tickets] [NumPy] #450: Make a.min() not copy data

It seems odd that calculating the min of an array causes a memory error.
The demonstrated data array is a 3D cube of uint8, about 1GB in size.
Even the creation of an extra (2d) array when reducing the 1st dimension
would only need about 1MB -- that cannot really explain the memory
error.

Thanks,
Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] comments to ticket #454 !? : Importing numpy prevents decrementing references for local variables

2007-03-31 Thread Sebastian Haase
Another rather old ticket is this one:

http://projects.scipy.org/scipy/numpy/ticket/454>

Any comments on this !?

Thanks,
-Sebastian


On 2/16/07, NumPy <[EMAIL PROTECTED]> wrote:
> #454: Importing numpy prevents decrementing references for local variables
> +---
>  Reporter:  goddard |   Owner:  somebody
> Type:  defect  |  Status:  new
>  Priority:  normal  |   Milestone:
> Component:  numpy.core  | Version:  1.0.1
>  Severity:  major   |Keywords:
> +---
>  If the first import of numpy is within a function it prevents that
>  function's
>  local variables and all locals in higher call frames that led to the
>  import
>  from being deleted.  This can prevent freeing up large numpy arrays.
>
>  The problem is that numpy __init__.py imports numpy _import_tools.py
>  and creates a PackageLoader object that saves the parent frame
>
>self.parent_frame = frame = sys._getframe(1)
>
>  Saving the parent frame results in saving the entire call stack even after
>  all those functions have returned.
>
>  I've attached Python code illustrating the problem.
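
The effect is easy to reproduce -- a tiny sketch of how a saved frame
keeps locals alive (independent of numpy itself):

    import sys

    def work():
        big = list(range(1000000))     # a large local variable
        return sys._getframe(0)        # saving the frame object ...

    f = work()
    # ... keeps every local of that call reachable:
    print('big' in f.f_locals)         # True -- `big` cannot be freed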
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about standalone small software and teaching

2007-04-04 Thread Sebastian Haase
Is enthought now defaulting to numpy ?

-Sebastian


On 4/4/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > --- Discussion of Numerical Python  > [EMAIL PROTECTED]
> > wrote:
>
> >>> If I get the latest
> > SVN of the enthought tool suite, go to enthought/src/lib/enthought/traits,
> >
> >>> and build with
> >>>
> >>> python setup.py build_src build_clib build_ext
> > --inplace
> >>>
> >>> as suggested in the enthought wiki, and then add
> > enthought/src/lib to my
> >>> PYTHONPATH, then your snippet fails with
> >>>
> >
> >>> --- begin error message ---
> >>>
> >>> Traceback (most recent call last):
> >
> >>>   File "prova.py", line 5, in ?
> >>>
> >>> class Camera(HasTraits):
> >
> >>> NameError: name 'HasTraits' is not defined
> >> Hmm, it works for me.
> > Are you sure that your build is being correctly picked up?
> >> Import enthought,
> > then print enthought.__file__.
> >
> > Yes, it is picking up the right one. I assume
> > I can run the setup.py in enthought/src/lib/enthought/traits to get only 
> > traits,
> > right? I'm not installing scipy, or anything else.
>
> Ah, sorry, I missed the bit where you said you only built inside
> enthought/traits/. I'd build the whole suite. It'll take a bit, building the
> extension modules for Kiva, but nothing too bad. I don't know why you'd get 
> the
> error, though. enthought.traits.api should have HasTraits.
>
> --
> Robert Kern
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about standalone small software and teaching

2007-04-04 Thread Sebastian Haase
Hello Gael,

Short question regarding your tutorial -- I'm very intrigued by traits
and would like to use them too ...
Why do you define e.g. the Point class like this:
class Point(object):
""" 3D Point objects """
x = 0.
y = 0.
z = 0.

and not like this:
class Point(object):
""" 3D Point objects """
def __init__(self):
   self.x = 0.
   self.y = 0.
   self.z = 0.

I thought in the first case, if one did "a = Point(); a.x = 6", that
from then on ANY new point ( "b = Point()" ) would be created with b.x
being 6 -- because 'x' is a class attribute and not an instance
attribute !?

This is obviously a beginners question - and I'm hopefully missing something.

Thanks,
Sebastian Haase




On 4/3/07, Gael Varoquaux <[EMAIL PROTECTED]> wrote:
> You can do a script with a GUI front end, as described in the first
> chapter of my tutorial
> http://gael-varoquaux.info/computers/traits_tutorial/traits_tutorial.html
> . You can also build a complete interactive application, as described in
> the rest of the tutorial, but this is more work.
>
> If you have more questions about this approach feal free to ask.
>
> Gaël
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about standalone small software and teaching

2007-04-04 Thread Sebastian Haase
On 4/4/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
> > Hello Gael,
> >
> > Short question regarding your tutorial -- I'm very intrigued by traits
> > and would like to use them too 
> > Why do you define e.g. the Point class like this:
> > class Point(object):
> > """ 3D Point objects """
> > x = 0.
> > y = 0.
> > z = 0.
> >
> > and not like this:
> > class Point(object):
> > """ 3D Point objects """
> > def __init__(self):
> >self.x = 0.
> >self.y = 0.
> >self.z = 0.
> >
> > I thought in the first case, if one did "a = Point(); a.x = 6", that
> > from then on ANY new point ( "b = Point()" ) would be created with b.x
> > being 6 -- because 'x' is a class attribute and not an instance
> > attribute !?
>
> No, setting "a.x = 6" will set it on the instance, not the class.

OK, but what is "wrong" with the first way !?  I mean, it somehow
seems not like how "it's usually done" in Python.  Normally there is
always an __init__(self) that sets up everything referring to self --
why is this tutorial doing it differently ?

-Sebastian
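
A small sketch of the semantics behind that answer (the subtlety only
really bites with *mutable* class attributes):

    class Point(object):
        x = 0.                  # class attribute

    a = Point()
    a.x = 6                     # creates an *instance* attribute on `a`
    b = Point()
    print(b.x)                  # 0.0 -- the class attribute is untouched

    class Bag(object):
        items = []              # mutable class attribute: shared!

    c, d = Bag(), Bag()
    c.items.append(1)           # mutates the one shared list in place
    print(d.items)              # [1] -- visible through every instance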
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Big list of Numpy & Scipy users

2007-04-04 Thread Sebastian Haase
Hi,
Why do you call it
Scipy_Projects
if it also lists people/projects who use (only) numpy?

I wish I could suggest a better name ...
I just checked the swig.org web site; they call it just
"projects"   ( http://www.swig.org/projects.html )
[ Open source projects using SWIG ]
so maybe just leave out the "Scipy_" part.

BTW, do peer-reviewed papers count !?  I have two of them, using numpy
(originally numarray, but now it's numpy).

Maybe the projects should be in categories:
- open source
- commercial   (?)
- papers
- ??

-Sebastian




On 4/4/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
> On 4/4/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > Bill Baxter wrote:
> > > Is there any place on the Wiki that lists all the known software that
> > > uses Numpy in some way?
> > >
> >> > It would be nice to start collecting such a list if there isn't one
> > > already.  Screenshots would be nice too.
> >
> > There is no such list that I know of, but you may start one on the wiki if 
> > you like.
>
> Ok, I made a start:  http://www.scipy.org/Scipy_Projects
> Anyone who has a project that depends on Numpy or Scipy, please go add
> your info there!
>
> I haven't linked it from anywhere, because it looks pretty pathetic
> right now with only three or four entries.  But hopefully everyone
> will jumps in and add their project to the list.
>
> Part of the idea is that this should be a good place to point
> nay-sayers to when they say "meh - numpy... that's a niche project for
> a handful of scientists."
>
> So ... hopefully a good portion of the links will be things other than
> science projects.  There will hopefully be a lot of things that
> "ordinary users" would care about.  :-)
>
> I couldn't figure out how to add an image, but if someone knows how to
> do that, please do.
>
> --bb
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] basic python questions

2007-04-04 Thread Sebastian Haase
On 4/4/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
> On 4/5/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > Bill Baxter wrote:
> > > Ok, I got another hopefully easy question:
> > >
> > > Why this:
> > > class Point(object):
> > >   ...
> > >
> > > Instead of the style that's used in the Python tutorial in the
> > > 'classes' chapter:
> > > class Point:
> > > ...
> >
> > Because the former make new-style classes and the latter make old-style 
> > classes.
> > It's not an issue of personal preference: they are somewhat different object
> > models and there are things that old-style classes can't do. As HasTraits is
> > also a new-style class, there's no point in using old-style classes in this
> > tutorial.
>
> What's the difference in the object models?  I'm surprised that the
> Python tutorial seems to be completely silent on this issue.
> (http://docs.python.org/tut/node11.html)
>
Not really answering your question -- but I have complained about that
tutorial before, with regard to new language features ... it does not
mention
from __future__ import division
In my mind this should be put at the very front -- because it's going
to be a very big thing once Python 3000 comes around.
The Python-list people did not like my arguing because apparently the
tutorial is supposed to "look nice"   [[ don't get me wrong, I
really recommend the tutorial, I like it, I think it's good ]]
But some things, even if ugly, should be said up front if they clear
the way.
Python 3000 will also default to new-style classes -- so that
"(object)" thing would go away again, if I'm not mistaken.

-Sebastian

PS:
Maybe this list could officially endorse the
from __future__ import division
I would be very interested in this !
Math becomes clearer, and things like arr[5/2] won't just suddenly
fail in the future; they should be written NOW as arr[5//2] (if you
need integer division).
Thanks. [[ please start a new thread, and put up a page on the wiki ;-) ]]
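
For the record, a two-line illustration of why the distinction matters
for indexing:

    from __future__ import division

    print(5 / 2)     # 2.5 -- true division under the future import
    print(5 // 2)    # 2   -- floor division; safe for indexing: arr[5 // 2]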
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

2007-04-17 Thread Sebastian Haase
Hi Anne,
I'm just starting to look into your code (sounds very interesting --
it should probably be put onto the wiki)
 -- quick note:
you are mixing tabs and spaces  :-(
what editor are you using !?

-Sebastian



On 4/17/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> > I get what you are saying, but I'm not even at the
> > Stupidly Easy Parallel level, yet.  Eventually.
>
> Well, it's hardly wonderful, but I wrote a little package to make idioms like:
>
> d = {}
> def work(f):
>d[f] = sum(exp(2.j*pi*f*times))
> foreach(work,freqs,threads=3)
>
> work fine.
>
> Of course you need to make sure your threads don't accidentally
> trample all over each other, but otherwise it's an easy way to get a
> factor-of-two speedup.
>
> Anne
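
A minimal sketch of what such a foreach helper can look like, using only
the standard library (a simplified stand-in, not Anne's actual package):

    import threading

    def foreach(work, items, threads=2):
        # Apply `work` to every item, spread across a few threads.
        # (Assumes the items themselves are never None.)
        lock = threading.Lock()
        it = iter(items)

        def worker():
            while True:
                with lock:              # hand out items one at a time
                    item = next(it, None)
                if item is None:
                    return
                work(item)

        pool = [threading.Thread(target=worker) for _ in range(threads)]
        for t in pool:
            t.start()
        for t in pool:
            t.join()

As noted above, the work function must still take care of its own
locking if the threads write to shared state.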
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
>
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

2007-04-17 Thread Sebastian Haase
On 4/17/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Matthieu Brucher wrote:
> > I would say that if the underlying atlas library is multithreaded, numpy
> > operations will be as well. Then, at the Python level, even if the
> > operations take a lot of time, the interpreter will be able to process
> > threads, as the lock is freed during the numpy operations - as I
> > understood for the last mails, only one thread can access the
> > interpreter at a specific time -
>
> ATLAS doesn't *underlie* much of numpy at all. Just dot() and the functions in
> linalg, nothing else.
>
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could*  or *should*  be implemented using ATLAS !?
Any ?

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

2007-04-18 Thread Sebastian Haase
On 4/17/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 18/04/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> > Sebastian Haase wrote:
> >
> > > Hi,
> > > I don't know much about ATLAS -- would there be other numpy functions
> > > that *could*  or *should*  be implemented using ATLAS !?
> > > Any ?
> >
> > Not really, no.
>
> ATLAS is a library designed to implement linear algebra functions
> efficiently on many machines. It does things like reorder the
> multiplications and additions in matrix multiplication to make the
> best possible use of your cache, as measured by empirical testing.

So, this means that 'matrixmultiply' could / should be using ATLAS
for the same reason 'dot' does -- right ?

-Seb.

> (FFTW does something similar for the FFT.) But ATLAS is only designed
> for linear algebra. If what you want to do is linear algebra, look at
> scipy for a full selection of linear algebra routines that make fairly
> good use of ATLAS where applicable.
>
> It would be perfectly possible, in principle, to implement an
> ATLAS-like library that handled a variety (perhaps all) of numpy's
> basic operations in platform-optimized fashion. But implementing ATLAS
> is not a simple process! And it's not clear how much gain would be
> available - it would almost certainly be noticeably faster only for
> very large numpy objects (where the python overhead is unimportant),
> and those objects can be very inefficient because of excessive
> copying. And the scope of improvement would be very limited; an
> expression like A*B+C*D would be much more efficient, probably, if the
> whole expression were evaluated at once for each element (due to
> memory locality and temporary allocation). But it is impossible for
> numpy, sitting inside python as it does, to do that.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

2007-04-18 Thread Sebastian Haase
On 4/18/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
> > On 4/17/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
> >> On 18/04/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> >>> Sebastian Haase wrote:
> >>>
> >>>> Hi,
> >>>> I don't know much about ATLAS -- would there be other numpy functions
> >>>> that *could*  or *should*  be implemented using ATLAS !?
> >>>> Any ?
> >>> Not really, no.
> >> ATLAS is a library designed to implement linear algebra functions
> >> efficiently on many machines. It does things like reorder the
> >> multiplications and additions in matrix multiplication to make the
> >> best possible use of your cache, as measured by empirical testing.
> >
> > So, this means that 'matrixmultiply'  could / should be using ATLAS
> > for the same reason as 'dot' does - right ?
>
> matrixmultiply() is just a long-deprecated alias to dot().
Of course -- I should have turned my brain on before hitting 'send'.
Does ATLAS/BLAS do anything special for element-wise multiplication
and the like -- if, for example, the data is not aligned or not contiguous?

-Seb.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C-API creating new copy of C data

2007-04-21 Thread Sebastian Haase
On 4/19/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
> What's the right way to make a new numpy array that's a copy of some C data?
>
> There doesn't seem to be any API like PyArray_NewFromDescr that
> /copies/ the void*data pointer for you.  Do I have to write my own
> loops for this?  I can do that, it just seems like it should be a
> library function already, so I'm guessing I'm just overlooking it.
> There seem to be lots of APIs that will wrap pre-existing memory, but
> the ones that allocate for you do not seem to copy.
>
> A related question -- I'm only trying to copy in order to save myself
> a little hassle regarding how to clean up the allocated chunks.  If
> there's some simple way to trigger a particular deallocation function
> to be called at the right time, then that would be the ideal, really.
> Does that exist?

This is a situation I have been waiting to address for a long time !
In our case the data size is generally considered to be too large to
accept the copy solution.
A cookbook-wiki entry would be wonderful!
Generally we would have data memory that was allocated via malloc() and
data that was allocated via new[] -- so two different deallocation
functions (free() and delete[], respectively) would be required to be
triggered once the reference counter goes back to zero.

Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C-API creating new copy of C data

2007-04-21 Thread Sebastian Haase
On 4/21/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Bill Baxter wrote:
> > What's the right way to make a new numpy array that's a copy of some C data?
> >
> > There doesn't seem to be any API like PyArray_NewFromDescr that
> > /copies/ the void*data pointer for you.  Do I have to write my own
> > loops for this?  I can do that, it just seems like it should be a
> > library function already, so I'm guessing I'm just overlooking it.
> > There seem to be lots of APIs that will wrap pre-existing memory, but
> > the ones that allocate for you do not seem to copy.
> >
>
> What do you mean by /copies/ the void * data pointer for you?   Do you
> mean the API would
>
> 1) Create new memory for the array
> 2) Copy the data from another void * pointer to the memory just created
> for the new array?
>
> If that is what you mean, then you are right there is no such API.   I'm
> not sure that there needs to be one.  It is a two-liner using memcpy.

Yes, I was thinking about memcpy() -- but what about non-contiguous
data? Or other non-well-behaved ndarrays (non-aligned, byte-swapped,
...) ?

>
> > A related question -- I'm only trying to copy in order to save myself
> > a little hassle regarding how to clean up the allocated chunks.  If
> > there's some simple way to trigger a particular deallocation function
> > to be called at the right time, then that would be the ideal, really.
> >
> No, there is no place to store that information in NumPy.  Either the
> ndarray dealloc function frees the memory it created or it doesn't free
> any memory.I think the best thing to do in this case would be to
> create a memory object wrapping the pointer and then point the ndarray
> to it as the source of memory.
>
Yes, I think one would probably want to create a custom-made Python
class that provides the memory as a buffer or so. This is what you meant
-- right ?  And that class could then define /any/ function to be
called once the ref count goes to zero -- right?
Could someone with C-API knowledge put a sample together !?   This
would also be quite useful to be used with a SWIG output typemap.

-Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] arr.dtype.byteorder == '=' --- is this "good code"

2007-06-25 Thread Sebastian Haase
Hi,
Suppose I'm on a little-endian system.
Could I have a little-endian numpy array arr, where
arr.dtype.byteorder
would actually be "<"
instead of "=" !?

There are two kinds of systems: little endian and big endian.
But there are three possible byteorder values: "<", ">" and "=".

I assume that if arr.dtype.byteorder is "="
then, even on a little-endian system,
the comparison arr.dtype.byteorder == "<"  still fails !?
Or are the == and != operators overloaded !?

Thanks,
Sebastian Haase
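
A short sketch of the behavior in question (dtype *objects* compare by
equivalence, but the byteorder *character* does not):

    import sys
    import numpy as np

    a = np.arange(3, dtype=np.int32)
    print(sys.byteorder)               # e.g. 'little'
    print(a.dtype.byteorder)           # '=' for a natively created dtype
    print(a.dtype.byteorder == '<')    # False, even on little-endian
    print(a.dtype == np.dtype('<i4'))  # True on little-endian: overloaded
    print(a.dtype.isnative)            # True -- the robust way to check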
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] arr.dtype.byteorder == '=' --- is this "good code"

2007-07-02 Thread Sebastian Haase
any comments !?

On 6/25/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> Hi,
> Suppose I'm on a little-endian system.
> Could I have a little-endian numpy array arr, where
> arr.dtype.byteorder
> would actually be "<"
> instead of "=" !?
>
> There are two kinds of systems: little endian and big endian.
> But there are three possible byteorder values: "<", ">" and "=".
>
> I assume that if arr.dtype.byteorder is "="
> then, even on a little-endian system,
> the comparison arr.dtype.byteorder == "<"  still fails !?
> Or are the == and != operators overloaded !?
>
> Thanks,
> Sebastian Haase
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

