[Numpy-discussion] Ticket closing policy

2011-03-24 Thread David
Hi Mark, hi all,

I noticed you did a lot of cleaning in the bug trackers, thank you for 
helping there, this is sorely needed.

However, I noticed quite a few tickets were closed as wontfix even 
though they are valid. I understand the concern about accumulating 
languishing tickets, but valid tickets should not be closed either. 
Also, in my experience, issues specific to win32 should not be closed 
just because things work on Linux - I remember the fastputmask issue 
was still there not so long ago (but I could not understand what was 
going on, unfortunately).

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-Dev] ANN: Numpy 1.6.0 beta 1

2011-03-31 Thread David
On 03/31/2011 06:37 PM, Pearu Peterson wrote:
>
>
> On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau
> <courn...@gmail.com> wrote:
>
> On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen <ro...@uw.edu> wrote:
>  > In article
>  > <eeg8kl7639imrtl-ihg1ncqyolddsid5tf...@mail.gmail.com>,
>  >  Ralf Gommers <ralf.gomm...@googlemail.com> wrote:
>  >
>  >> Hi,
>  >>
>  >> I am pleased to announce the availability of the first beta of NumPy
>  >> 1.6.0. Due to the extensive changes in the Numpy core for this
>  >> release, the beta testing phase will last at least one month. Please
>  >> test this beta and report any problems on the Numpy mailing list.
>  >>
>  >> Sources and binaries can be found at:
>  >> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
>  >> For (preliminary) release notes see below.
>
> I see a segfault on Ubuntu 64 bits for the test
> TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
> Am I the only one seeing it ?
>
> The tests work ok here on Ubuntu 64 with numpy master. Could you try the
> maintenance/1.6.x branch, where the related bugs are fixed?

I did test that as well, and got the same issue, but could not reproduce 
it on another machine. I do get the error every time on my main work 
machine, though. I will look more into it, but it is most likely 
something specific to my machine,

cheers,

David


Re: [Numpy-discussion] New functions.

2011-05-31 Thread David
On 06/01/2011 10:08 AM, Charles R Harris wrote:
> Hi All,
>
> I've been contemplating new functions that could be added to numpy and
> thought I'd run them by folks to see if there is any interest.
>
> 1) Modified sort/argsort functions that return the maximum k values.
>  This is easy to do with heapsort and almost as easy with mergesort.
>
> 2) Ufunc fadd (nanadd?) Treats nan as zero in addition. Should make a
> faster version of nansum possible.
>
> 3) Fast medians.

+1 for fast median as well, and more generally fast "linear" (O(kN)) 
order statistics would be nice.
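
Item 1 can already be sketched in pure Python with a bounded heap; 
`top_k` below is just an illustrative helper, not an existing numpy 
function:

```python
import heapq

import numpy as np

def top_k(a, k):
    # Keep a heap of size k while scanning once: O(n log k),
    # versus O(n log n) for a full sort followed by slicing.
    # top_k is an illustrative name, not part of numpy.
    return np.array(heapq.nlargest(k, a))

a = np.array([3, 1, 4, 1, 5, 9, 2, 6])
print(top_k(a, 3))            # -> [9 6 5]
print(np.sort(a)[-3:][::-1])  # full-sort equivalent
```

A heapsort- or mergesort-based variant inside numpy could do the same 
without leaving C.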

cheers,

David


Re: [Numpy-discussion] New functions.

2011-05-31 Thread David
On 06/01/2011 10:34 AM, Charles R Harris wrote:
>
>
> On Tue, May 31, 2011 at 7:33 PM, David <da...@silveregg.co.jp> wrote:
>
> On 06/01/2011 10:08 AM, Charles R Harris wrote:
>  > Hi All,
>  >
>  > I've been contemplating new functions that could be added to
> numpy and
>  > thought I'd run them by folks to see if there is any interest.
>  >
>  > 1) Modified sort/argsort functions that return the maximum k values.
>  >  This is easy to do with heapsort and almost as easy with
> mergesort.
>  >
>  > 2) Ufunc fadd (nanadd?) Treats nan as zero in addition. Should make a
>  > faster version of nansum possible.
>  >
>  > 3) Fast medians.
>
> +1 for fast median as well, and more generally fast "linear" (O(kN))
> order statistics would be nice.
>
>
> OK, noob question. What are order statistics?

In statistics, order statistics are statistics based on the sorted 
sample; the median, min, and max are the most common:

http://en.wikipedia.org/wiki/Order_statistic

Concretely, here I meant a fast way to compute any rank of a given data 
set, e.g. with the select algorithm. I have wanted to do that for some 
time, but never took the time for it,
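
A rough pure-Python sketch of the select idea (expected O(n) per 
query; `select` is a hypothetical helper, not numpy API):

```python
import random

def select(data, k):
    # Return the k-th smallest element (0-based) without fully
    # sorting: recurse into the partition that contains rank k.
    a = list(data)
    while True:
        pivot = random.choice(a)
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        if k < len(lows):
            a = lows
        elif k < len(lows) + len(pivots):
            return pivot
        else:
            k -= len(lows) + len(pivots)
            a = [x for x in a if x > pivot]

data = [7, 2, 9, 4, 4, 8, 1]
print(select(data, 3))  # 4th smallest of [1, 2, 4, 4, 7, 8, 9] -> 4
```

A C implementation in numpy could partition in place instead of 
copying, which is where the real speedup over sorting would come from.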

David


Re: [Numpy-discussion] Added atleast_nd, request for clarification/cleanup of atleast_3d

2016-07-06 Thread David
Joseph Fox-Rabinovitz  gmail.com> writes:

> 
> On Wed, Jul 6, 2016 at 2:57 PM, Eric Firing  hawaii.edu> wrote:
> > On 2016/07/06 8:25 AM, Benjamin Root wrote:
> >>
> >> I wouldn't have the keyword be "where", as that collides with the
> >> notion of "where" elsewhere in numpy.
> >
> >
> > Agreed.  Maybe "side"?
> 
> I have tentatively changed it to "pos". The reason that I don't like
> "side" is that it implies only a subset of the possible ways that the
> position of the new dimensions can be specified. The current
> implementation only puts things on one side or the other, but I have
> considered also allowing an array of indices at which to place new
> dimensions, and/or a dictionary keyed by the starting ndims. I do not
> think "side" would be appropriate for these extended cases, even if
> they are very unlikely to ever materialize.
> 
> -Joe
> 
> > (I find atleast_1d and atleast_2d to be very helpful for handling
> > inputs, as Ben noted; I'm skeptical as to the value of atleast_3d
> > and atleast_nd.)
> >
> > Eric

What about `order='C'` or `order='F'` for the argument name?





Re: [Numpy-discussion] numpy build system questions for use in another project (fwrap)

2010-04-09 Thread David
On 04/07/2010 11:52 AM, Kurt Smith wrote:
> Briefly, I'm encountering difficulties getting things working in numpy
> distutils for fwrap's build system.
>
> Here are the steps I want the build system to accomplish:
>
> 1) Compile a directory of Fortran 90 source code -- this works.
>  - The .mod files generated by this compilation step are put in the
> build directory.

This is difficult - Fortran modules are a PITA from a build perspective. 
Many compilers don't seem to offer a way to control exactly where the 
generated .mod files are put, so the only way I am aware of to control 
this is to chdir the process into the build directory...

This was also a problem when I worked on fortran support for waf (see 
http://groups.google.com/group/waf-users/browse_thread/thread/889e2a5e5256e420/84ee939e93c9e30f?lnk=gst&q=fortran+modules#84ee939e93c9e30f
 
)

>
> My problem is in instantiating numpy.distutils.config such that it is
> appropriately configured with command line flags.
>
> I've tried the following with no success:
>
> ('self' is a build_ext instance)
> cfg = self.distribution.get_command_obj('config')
> cfg.initialize_options()
> cfg.finalize_options()  # doesn't do what I hoped it would do.
>
> This creates a config object, but it doesn't use the command line
> flags (e.g. --fcompiler=gfortran doesn't affect the fortran compiler
> used).

Why don't you do the testing in config? That's how things are done 
normally, unless you have a reason to do otherwise. Concerning the 
--fcompiler option, how do you pass it (what is the exact list of 
commands you used to call distutils here)? Generally, given the 
distutils command/option mess, the only real solution is to pass the 
options to each command and hope each one is the same.

> Any pointers?  More generally -- seeing the above, any ideas on how to
> go about doing what I'm trying to do better?

Not really, that's how you are supposed to do things with distutils,

David


Re: [Numpy-discussion] numpy build system questions for use in another project (fwrap)

2010-04-11 Thread David
On 04/10/2010 03:02 AM, Kurt Smith wrote:
> On Fri, Apr 9, 2010 at 2:25 AM, David  wrote:
>> On 04/07/2010 11:52 AM, Kurt Smith wrote:
>>> Briefly, I'm encountering difficulties getting things working in numpy
>>> distutils for fwrap's build system.
>>>
>>> Here are the steps I want the build system to accomplish:
>>>
>>> 1) Compile a directory of Fortran 90 source code -- this works.
>>>   - The .mod files generated by this compilation step are put in the
>>> build directory.
>>
>> This is difficult - fortran modules are a PITA from a build perspective.
>> Many compilers don't seem to have a way to control exactly where to put
>> the generated .mod, so the only way I am aware of to control this is to
>> cwd the process into the build directory...
>
> From what I can tell, numpy's distutils already takes care of putting
> the .mod files in the build directory (at least for gfortran and
> ifort) by manually moving them there -- see
> numpy.distutils.command.build_ext.build_ext.build_extension, after the
> line "if fmodule_source".  This is fine for my purposes.  All I need
> is the project's .mod files put in one place before the next step.

Moving files is not good for various reasons - I think it contributes to 
making the build more fragile, and it may cause race conditions for 
parallel builds (I am not sure about that last point; I think it is 
also OS dependent). I agree that it is not the main worry at this 
point, though.

>
> I could make it a requirement that the user supply the .mod files to
> fwrap before calling 'python setup.py build_ext', but that might be a
> big mess when you take into account compiler flags that modify default
> type sizes, etc.  Maybe it would work fine; I'm not sure whether
> different compiler flags used by distutils would change the .mod files
> and break the type info, though. And the source code would be compiled
> twice -- once to get the .mod files, and again when distutils compiles
> everything with the right flags which would be suboptimal for large
> projects.
>
> (Note: I've looked into this a bit, and it seems like it might work
> alright -- you can use '-fsyntax-only' in gfortran and g95, and
> '-syntax-only' in ifort, for example, and that generates the .mod
> files.  I'll look into it some more...)
>
> It would be a PITA to test for every kind-type-parameter that someone
> might use in the config stage.  As you probably know, Fortran 9x
> allows types like this:
>
> real(kind=selected_real_kind(10,100)) :: ffed_real
>
> Which means I'd have to do exhaustive testing for every combination of
> arguments to selected_real_kind() (up to some limit).  This would be
> *extremely* slow.

I did not know about this feature, but I think it can be solved 
relatively easily with type checking, and it would not be that slow if 
you allow "expected sizes". Also, the code for this is already there in 
numpy/distutils/command/config.py, and is relatively independent of the 
rest of distutils (you only need to give it _compile).

>
>> normally, unless you have a reason to do otherwise. Concerning the
>> --fcompiler option, how do you pass --fcompiler (what is the exact list
>> of commands you used to call distutils here) ? Generally, given the
>> general command/option screw up, the only real solution really is to
>> pass the options to each command, and hope each one is the same.
>>
>
> The commandline is 'python setup.py build_ext --inplace
> --fcompiler=gfortran' (or --fcompiler=intelem).  When I pass
> --fcompiler=gfortran, the try_compile() command ignores it, searches
> for a fortran compiler and finds ifort instead and uses it, which
> breaks everything.

Yes, only build_ext knows about --fcompiler, and try_compile comes from 
the config command. That's one aspect of the utterly broken command 
design of distutils; that's why you need to say something like python 
setup.py config --fcompiler=gfortran build_ext --fcompiler=gfortran.

>
> Is there an example in numpy.distutils that shows how to pass the
> commandline options to each command?

It is not possible AFAIK - you only know about --fcompiler in 
build_ext.finalize_options call, and at that point, config may already 
have run.

>
> I'm thinking of rolling my own 'try_compile()' function as part of a
> custom build_ext command class that suits my purposes.  We'll see how
> far that gets me.

That's another solution.

cheers,

David


Re: [Numpy-discussion] rc2 for NumPy 1.4.1 and Scipy 0.7.2

2010-04-12 Thread David
On 04/12/2010 06:03 PM, Nadav Horesh wrote:
>
> Tried of install numy-1.4.1-rc2 on python-2.7b1 and got an error:
>
> (64 bit linux on core2, gcc4.4.3)
>
>
> compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core 
> -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath 
> -Inumpy/core/include -I/usr/local/include/python2.7 -c'
> gcc: _configtest.c
> _configtest.c:1: warning: conflicting types for built-in function ‘exp’
> gcc -pthread _configtest.o -o _configtest
> _configtest.o: In function `main':
> /dev/shm/numpy-1.4.1rc2/_configtest.c:6: undefined reference to `exp'
> collect2: ld returned 1 exit status
> _configtest.o: In function `main':
> /dev/shm/numpy-1.4.1rc2/_configtest.c:6: undefined reference to `exp'
> collect2: ld returned 1 exit status
> Traceback (most recent call last):
>File "setup.py", line 187, in
>  setup_package()
>File "setup.py", line 180, in setup_package
>  configuration=configuration )
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/core.py", line 186, in setup
>  return old_setup(**new_attr)
>File "/usr/local/lib/python2.7/distutils/core.py", line 152, in setup
>  dist.run_commands()
>File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in 
> run_commands
>  self.run_command(cmd)
>File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
>  cmd_obj.run()
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build.py", line 37, 
> in run
>  old_build.run(self)
>File "/usr/local/lib/python2.7/distutils/command/build.py", line 127, in 
> run
>  self.run_command(cmd_name)
>File "/usr/local/lib/python2.7/distutils/cmd.py", line 326, in run_command
>  self.distribution.run_command(command)
>File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
>  cmd_obj.run()
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 
> 152, in run
>  self.build_sources()
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 
> 163, in build_sources
>  self.build_library_sources(*libname_info)
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 
> 298, in build_library_sources
>  sources = self.generate_sources(sources, (lib_name, build_info))
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 
> 385, in generate_sources
>  source = func(extension, build_dir)
>File "numpy/core/setup.py", line 658, in get_mathlib_info
>  mlibs = check_mathlib(config_cmd)
>File "numpy/core/setup.py", line 328, in check_mathlib
>  if config_cmd.check_func("exp", libraries=libs, decl=True, call=True):
>File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/config.py", line 
> 310, in check_func
>  libraries, library_dirs)
>File "/usr/local/lib/python2.7/distutils/command/config.py", line 251, in 
> try_link
>  libraries, library_dirs, lang)

Looks like another distutils regression in 2.7 to me - try_link should 
never raise an error when linking fails; that's the whole point of the 
function.

cheers,

David


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-19 Thread David
On 04/19/2010 04:45 PM, Matthieu Brucher wrote:
> Hi,
>
> I'm trying to compile scipy with ICC (numpy got through correctly),
> but I have issue with infinites in cephes:
>
> icc: scipy/special/cephes/const.c
> scipy/special/cephes/const.c(94): error: floating-point operation
> result is out of range
>double INFINITY = 1.0/0.0;  /* 99e999; */
> ^
>
> scipy/special/cephes/const.c(99): error: floating-point operation
> result is out of range
>double NAN = 1.0/0.0 - 1.0/0.0;
>^
>
> scipy/special/cephes/const.c(99): error: floating-point operation
> result is out of range
>double NAN = 1.0/0.0 - 1.0/0.0;
>  ^
>
> compilation aborted for scipy/special/cephes/const.c (code 2)
> scipy/special/cephes/const.c(94): error: floating-point operation
> result is out of range
>double INFINITY = 1.0/0.0;  /* 99e999; */
> ^
>
> scipy/special/cephes/const.c(99): error: floating-point operation
> result is out of range
>double NAN = 1.0/0.0 - 1.0/0.0;
>^
>
> scipy/special/cephes/const.c(99): error: floating-point operation
> result is out of range
>double NAN = 1.0/0.0 - 1.0/0.0;
>  ^
>
> compilation aborted for scipy/special/cephes/const.c (code 2)

All those have been fixed in scipy 0.8, and cannot be backported to 
scipy 0.7.x (because the fix requires the new math library available 
from numpy 1.4.0).

I know for sure that scipy trunk + numpy 1.4.x works with ifort + MSVC 
on Windows 64 (more exactly, it worked in December 2009),

cheers,

David


Re: [Numpy-discussion] scipy error undefined symbol: lsame_

2010-04-20 Thread David
On 04/20/2010 02:32 AM, gerardob wrote:
>
> I installed scipy (and all the required libraries) and the following error
> appears when i tried run a simple example which uses the optimize package of
> scipy. I tried also numpy alone and it works ( at least for printing
> numpy.array([10,20,10]))
>
> error:
>
> Traceback (most recent call last):
>File "main_test.py", line 2, in
>  from scipy import optimize
>File
> "/home/gberbeglia/python/Python-2.6.5/lib/python2.6/site-packages/scipy/optimize/__init__.py",
> line 11, in
>  from lbfgsb import fmin_l_bfgs_b
>File
> "/home/gberbeglia/python/Python-2.6.5/lib/python2.6/site-packages/scipy/optimize/lbfgsb.py",
> line 30, in
>  import _lbfgsb
> ImportError:
> /home/gberbeglia/python/Python-2.6.5/lib/python2.6/site-packages/scipy/optimize/_lbfgsb.so:
> undefined symbol: lsame_
> gberbeg...@actarus:~/python/mycodes>

You did not build scipy properly: you need to make sure that everything 
is built with exactly the same Fortran compiler. One way to check is to 
run ldd on the .so files that fail to import: if you see g2c as a 
dependency, it was built with g77; if you see libgfortran, it was built 
with gfortran.

cheers,

David


Re: [Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-23 Thread David
On 04/21/2010 11:47 PM, Adrien Guillon wrote:
> Hello all,
>
> I've recently started to use NumPy to prototype some numerical
> algorithms, which will eventually find their way to a GPU (where I
> want to limit myself to single-precision operations for performance
> reasons).  I have recently switched to the use of the "single" type in
> NumPy to ensure I use single-precision floating point operations.
>
> My understanding, however, is that Intel processors may use extended
> precision for some operations anyways unless this is explicitly
> disabled, which is done with gcc via the -ffloat-store operation.
> Since I am prototyping algorithms for a different processor
> architecture, where the extended precision registers simply do not
> exist, I would really like to force NumPy to limit itself to using
> single-precision operations throughout the calculation (no extended
> precision in registers).

I don't think it is a good idea - even if you compile numpy itself with 
-ffloat-store, most of the runtime is built without it, so you will 
have differences depending on whether the computation is done in the C 
library, in numpy, in the Fortran runtime, or by the compiler (when 
computing constants). This sounds worse than what you can get from 
numpy by default,
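
What you can control from the numpy side is the dtype of every operand: 
as long as everything stays float32, results are rounded back to single 
precision at each step (even if x87 registers carry extra bits inside a 
single operation). A small illustration of that dtype discipline:

```python
import numpy as np

a = np.float32(1.0)
tiny = np.float32(1e-8)   # well below float32 eps (~1.19e-7)
print(a + tiny == a)      # True: the update is lost at single precision
print(1.0 + 1e-8 == 1.0)  # False: double precision keeps it

# Ufuncs on float32 operands keep the float32 dtype:
print((np.ones(3, np.float32) + np.float32(2)).dtype)  # float32
```

This only pins the stored precision, not the in-register behaviour the 
original question was about.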

cheers,

David


Re: [Numpy-discussion] Incomplete uninstall of 1.4.0 superpack

2010-04-28 Thread David
On 04/27/2010 01:08 AM, threexk threexk wrote:
> David Cournapeau wrote:
>  > On Mon, Apr 26, 2010 at 2:42 AM, threexk threexk
>  wrote:
>  > > Hello,
>  > >
>  > > I recently uninstalled the NumPy 1.4.0 superpack for Python 2.6 on
> Windows
>  > > 7, and afterward a dialog popped up that said 1 file or directory
> could not
>  > > be removed. Does anyone have any idea which file/directory this is? The
>  > > dialog gave no indication. Is an uninstall log with details generated
>  > > anywhere?
>  >
>  > There should be one in C:\Python*, something like numpy-*-wininst.log
>
> Looks like that log gets deleted after uninstallation (as it probably
> should), so I still could not figure out which file/directory was not
> deleted. I found that \Python26\Lib\site-packages\numpy and many
> files/directories under it have remained after uninstall. So, I tried
> reinstalling 1.4.0 and uninstalling again. This time, the uninstaller
> did not report not being able to remove files/directories, but it still
> did not delete the aforementioned numpy directory. I believe this is a
> bug with the uninstaller?

Could you maybe post the log (before uninstalling) and list the 
remaining files?

Note though that we most likely won't be able to do much - we do not 
have much control over the generated installers,

cheers,

David


Re: [Numpy-discussion] PY_ARRAY_UNIQUE_SYMBOL is too far reaching?

2010-05-05 Thread David
On 05/04/2010 04:38 PM, Austin Bingham wrote:

>
> I admit I'm having trouble formulating questions to address my
> problems, so please bear with me.
>
> Say I've got a shared library of utilities for working with numpy
> arrays. It's intended to be used in multiple extension modules and in
> some places that are not modules at all (e.g. C++ programs that embed
> python and want to manipulate arrays directly.)
>
> One of the headers in this library (call it 'util.h') includes
> arrayobject.h because, for example, it needs NPY_TYPES in some
> template definitions. Should this 'util.h' define
> PY_ARRAY_UNIQUE_SYMBOL? Or NO_IMPORT? It seems like the correct
> answers are 'no' and 'yes', but that means that any user of this
> header needs to be very aware of header inclusion order. For example,
> if they want to include 'arrayobject.h' for their own reasons *and*
> they want NO_IMPORT undefined, then they need to be sure to include
> 'util.h' after 'arrayobject.h'.

I still don't understand why you cannot just include the header file as 
is (without defining either NO_IMPORT or PY_ARRAY_UNIQUE_SYMBOL).

> From what I can see, the problem seems to be a conflation of two sets
> of symbols: those influenced by the PY_ARRAY_UNIQUE_SYMBOL and
> NO_IMPORT macros (broadly, the API functions), and those that aren't
> (types, enums, and so forth.)

numpy headers are really messy - way too many macros, etc... Fixing it 
without breaking API compatibility is a lot of work, though,

cheers,

David


Re: [Numpy-discussion] curious about how people would feel about moving to github

2010-05-26 Thread David
On 05/27/2010 02:16 PM, Charles R Harris wrote:
>
>
> On Wed, May 26, 2010 at 11:06 PM, Anne Archibald
> <aarch...@physics.mcgill.ca> wrote:
>
> On 27 May 2010 01:55, Matthew Brett <matthew.br...@gmail.com> wrote:
>  > Hi,
>  >
>  >> Linux has Linus, ipython has Fernando, nipy has... well, I'm
> sure it is
>  >> somebody. Numpy and Scipy no longer have a central figure and I
> like it that
>  >> way. There is no reason that DVCS has to inevitably lead to a
> central
>  >> authority.
>  >
>  > I think I was trying to say that the way it looks as if it will be -
>  > before you try it - is very different from the way it actually is
> when
>  > you get there.   Anne put the idea very well - but I still think
> it is
>  > very hard to understand, without trying it, just how liberating the
>  > workflow is from anxieties about central authorities and so on.
>   You
>  > can just get on with what you want to do, talk with or merge from
>  > whoever you want, and the whole development process becomes much more
>  > fluid and productive.   And I know that sounds chaotic but - it just
>  > works.  Really really well.
>
> One way to think of it is that there is no "main line" of development.
> The only time the central repository needs to pull from the others is
> when a release is being prepared. As it stands we do have a single
> release manager, though it's not necessarily the same for each
> version. So if we wanted, they could just go and pull and merge the
> repositories of everyone who's made a useful change, then release the
> results. Of course, this will be vastly easier if all those other
> people have already merged each other's results (into different
> branches if appropriate). But just like now, it's the release
> manager's decision which changes end up in the next version.
>
>
> No, at this point we don't have a release manager, we haven't since 1.2.
> We have people who do the builds and put them up on sourceforge, but
> they aren't release managers, they don't decide what is in the release
> or organise the effort. We haven't had a central figure since Travis got
> a real job ;) And now David has a real job too. I'm just pointing out
> that that projects like Linux and IPython have central figures because
> the originators are still active in the development. Let me put it this
> way, right now, who would you choose to pull the changes and release the
> official version?

Ralf is the release manager, and for deciding what goes into the 
release, we would do just as we do now. Small changes which do not 
warrant discussion would be handled through pull requests on github at 
first, but we can improve on that later (for example, an automatic 
gatekeeper which only pulls changes that at least compile and pass the 
tests on a linux machine).

David


Re: [Numpy-discussion] curious about how people would feel about moving to github

2010-05-26 Thread David
On 05/27/2010 02:34 PM, Charles R Harris wrote:

> An automatic gatekeeper is pretty much a
> central repository, as I was suggesting.

I don't understand how a central repository comes into this discussion - 
nobody has been arguing against one. The question is whether we would 
continue to push individual commits to it directly (the push model), or 
present branches to a gatekeeper (the pull model).

I would suggest that you look at how people do it in projects using git; 
there are countless resources on how to do it, and it has worked very 
well for pretty much every project. I can't see how numpy would be so 
different that it would require something else, especially without 
having tried it first. If the pull model really fails, then we can 
always change.

cheers,

David


Re: [Numpy-discussion] Technicalities of the SVN -> GIT transition

2010-06-01 Thread David
On 06/01/2010 06:03 PM, Pauli Virtanen wrote:

>
> Personally, I don't see problems in leaving them out.
>
>> (in maintenance/***)
>
> Why not release/** or releases/**?

Right, release is a better word.

>
> Does having a prefix here imply something to clones?

Not that I am aware of: it is just that / is allowed in branch names, 
which gives some nesting, and I think we should have naming conventions 
for branches.

>>   - Tag conversion: svn has no notion of tags, so translating them into
>>   git tags cannot be done automatically in a safely manner (and we do
>> have some rewritten tags in the svn repo). I was thinking about creating
>> a small script to create them manually afterwards for the releases, in
>> the svntags/***.
>
> Sounds OK.
>
>>   - Author conversion: according to git, there are around 50 committers
>> in numpy. Several of them are double and should be be merged I think
>> (kern vs rkern, Travis' accounts as well), but there is also the option
>> to set up real emails. Since email are "private", I don't want to just
>> "scrape" them without asking permission first. I don't know how we
>> should proceed here.
>
> I don't think correcting the email addresses in the SVN history is very
> useful. Best probably just use some dummy form, maybe

That's what svn2git already does, so that would be less work for me :) 
It may not matter much, but I think there is at least one argument for 
having real emails: avoiding duplicate committers (i.e. pvirtanen is 
the same committer before and after the git transition). But this is 
only significant for current committers.

David


Re: [Numpy-discussion] Technicalities of the SVN -> GIT transition

2010-06-02 Thread David
On 06/03/2010 10:24 AM, Jarrod Millman wrote:
> On Wed, Jun 2, 2010 at 10:08 AM, Matthew Brett  
> wrote:
>> Do y'all think opt-in?  Or opt-out?If it's opt-in I guess you'll
>> catch most of the current committers, and most of the others you'll
>> lose, but maybe that's good enough.
>
> I think it should be opt-in.  How would opt-out work?  Would someone
> create new accounts for all the contributors and then give them
> access?

Just to be clear, this has nothing to do with accounts on github, or any 
registered thing. This is *only* about username/email as recognized by 
git itself (as recorded in the commit objects).

Several people already answered me privately to give me their email, I 
think that's the way to go,

cheers,

David


Re: [Numpy-discussion] 2 bugs related to isinf and isfinite generate crazy warnings

2010-06-02 Thread David
On 06/03/2010 10:11 AM, Eric Firing wrote:
> http://www.mail-archive.com/numpy-discussion@scipy.org/msg23912.html
>
> On some systems--but evidently not for most numpy users, or there would
> have been a steady stream of screams--the appearance of np.inf in any
> call to np.isfinite or np.isinf yields this:
>
> In [1]:import numpy as np
>
> In [2]:np.isinf(np.inf)
> Warning: invalid value encountered in isinf
> Out[2]:True
>
>
> This generates streams of warnings if np.isinf or np.isfinite is applied
> to an array with many inf values.
>
> The problem is a combination of two bugs:
>
> 1) When building with setup.py, but perhaps not with scons (which I
> haven't tried yet), NPY_HAVE_DECL_ISFINITE and friends are never
> defined, even though they should be--this is all on ubuntu 10.4, in my
> case, and isfinite and isinf most definitely are in math.h.  It looks to
> me like the only mechanism for defining these is in SConstruct.

Actually, there was a bug in setup.py in the detection of those for 
python >= 2.6; I have fixed it.

>
> 2) So, with no access to the built-in isinf etc., npy_math.h falls back
> on the compact but slow macros that replace the much nicer ones that
> were in numpy a year or so ago.

Which ones are you thinking about: the ones using fpclassify? Could you 
show the code where the current version is slower?

cheers,

David


Re: [Numpy-discussion] 2 bugs related to isinf and isfinite generate crazy warnings

2010-06-02 Thread David
On 06/03/2010 01:02 PM, Eric Firing wrote:

>
> In any case, in MPL_isnan.h, isfinite is fast, because it involves only
> a single bitwise-and plus comparison; isnan and isinf are slower,
> because they involve about twice as much work.

Concerning speed, the problem is that it depends a lot on the 
compiler/architecture. When I timed some implementations of isnan, just 
changing from Pentium 4 to Pentium M with the exact same 
compiler/options gave mixed results. It also depends on the input - some 
methods are faster for finite numbers but slower for nan/inf, etc...
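For readers without MPL_isnan.h at hand, the single mask-and-compare test mentioned above can be sketched in pure Python - an illustration of the bit-level idea only, not the actual macro:

```python
import struct

def isfinite_bits(x):
    # Reinterpret the double as its 64-bit IEEE-754 bit pattern; an
    # all-ones exponent field (0x7ff) means the value is inf or nan.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return (bits >> 52) & 0x7FF != 0x7FF
```

An isnan or isinf variant additionally has to inspect the mantissa bits, which is roughly why those are about twice the work.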

>
> Given that the present numpy isfinite macro is broken--it raises an fp
> error--there is not much point in talking about its performance.

Indeed :)

>
> What I don't know is whether there is some reason not to use the
> MPL_isnan.h macros as the backups in numpy, for platforms that do not
> have their own isnan, isfinite, and isinf.

The problem is that it is actually quite some work. Float and double are 
not so difficult (two cases to deal with for endianness), but long 
double is a huge PITA - we already have 5 different implementations to 
deal with, and we are missing some (linux PPC).

> Presumably, at that time my numpy
> build was using the system versions, and maybe they are implemented in a
> similar way.  Looking in /usr/include/bits/*.h, I gave up trying to
> figure out what the system macros really are doing.

You have to look into the sysdeps/ directory inside the glibc sources (on 
Linux). They use clever tricks to avoid branching, but when I 
benchmarked those, things like x != x were much faster on the computer 
I tested this on. Most likely very CPU dependent.
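For reference, the self-comparison trick is a one-liner (a sketch; as said above, relative performance is very CPU dependent):

```python
import numpy as np

def isnan_selfcmp(x):
    # IEEE-754: nan is the only value that compares unequal to itself,
    # so x != x is a branch-free nan test (works elementwise too).
    return x != x

a = np.array([1.0, np.inf, np.nan])
```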

cheers,

David


Re: [Numpy-discussion] "Dynamic convolution" in Numpy

2010-06-06 Thread David
On 06/07/2010 12:08 AM, Anne Archibald wrote:

>
> I think the kicker is here: what is the right way to interpolate
> between filters?

There is no right way that I know of; it really depends on what you are 
doing. The big issue here is that a filter whose coefficients change is 
obviously not time-invariant anymore. For IIR, this has the unfortunate 
consequence that even if your filter is stable at every interpolation 
point, transitioning from one to the other can still blow up.

For filters used in music processing, it is common to have filters 
changing really fast, for example in synthesizers (up to a few hundred 
times / sec) - and interesting effects are obtained for filters with 
very high and concentrated resonance. The solution is usually a mix of 
upsampling to avoid big transitions and using filter representations 
which are more stable (state-based representation instead of direct 
filters coefficients, for example).
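To illustrate the output-crossfading part of that recipe, here is a toy sketch with a hypothetical one-pole filter and made-up coefficients (not production audio code):

```python
import numpy as np

def one_pole(x, a):
    # y[n] = (1 - a) * x[n] + a * y[n - 1]; stable for |a| < 1
    y = np.empty_like(x)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = (1.0 - a) * xn + a * acc
        y[n] = acc
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

# Run both fixed (stable) filters and crossfade their outputs instead
# of interpolating the coefficients: a weighted mix of two bounded
# signals stays bounded, while an interpolated coefficient path can
# leave the stability region.
y0, y1 = one_pole(x, 0.5), one_pole(x, 0.99)
w = np.linspace(0.0, 1.0, x.size)
y = (1.0 - w) * y0 + w * y1
```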

I don't think it is directly applicable to your problem, but the series 
of papers by Dattorro ten years ago is a goldmine:

"Effect Design Part 1|2|3, Jon Dattorro, J. Audio Eng. Soc., Vol 45, No. 
9, 1997 September"

> As far as convolution, as David says, take a look at existing
> algorithms and maybe even music software - there's a trade-off between
> the n^2 computation of a brute-force FIR filter and the delay
> introduced by an FFT approach, but on-the-fly convolution is a
> well-studied problem.

It may be pretty cool to have an implementation in scipy, now that I 
think of it :) One issue is that there are several patents on those 
techniques, but I was told there are ways around them in terms of 
implementation. Not sure how to proceed here for scipy,

David


Re: [Numpy-discussion] Technicalities of the SVN -> GIT transition

2010-06-06 Thread David
On 06/05/2010 11:43 PM, Pauli Virtanen wrote:
> Fri, 04 Jun 2010 15:28:52 -0700, Matthew Brett wrote:
>>>> I think it should be opt-in.  How would opt-out work?  Would someone
>>>> create new accounts for all the contributors and then give them
>>>> access?
>>>
>>> Just to be clear, this has nothing to do with accounts on github, or
>>> any registered thing. This is *only* about username/email as recognized
>>> by git itself (as recorded in the commit objects).
>>
>> Actually - isn't it better if people do give you their github username /
>> email combo - assuming that's the easiest combo to work with later?
>
> I think the Github user name is not really needed here, as what goes into
> the history is the Git ID: name + email address.

Indeed. IOW, the output of

git config user.name
git config user.email

if you already use git is all that I need,

David


Re: [Numpy-discussion] [Numpy-svn] r8457 - trunk

2010-06-06 Thread David
On 06/07/2010 02:36 PM, Stéfan van der Walt wrote:
> I guess this changeset is up for discussion, but I'd be very glad if
> we could track the .gitignore.

I don't think we should. It is easy to set up by yourself, and it may 
hide things that some people may want to see - different people may want 
to hide different things.

cheers,

David


Re: [Numpy-discussion] numscons and Python 2.7 problems

2010-06-07 Thread David
On 06/08/2010 11:37 AM, Ralf Gommers wrote:

>
> Is it really that much worse than for earlier versions? The support
> burden is probably more because of having too many Python versions at
> the same time. It's now 2.4-2.6, soon it may be 2.4-2.7 + 3.1-3.2.

I don't think scons issues should affect our policy here - it is not 
officially part of numpy. I would not be surprised if the issue were 
solved already, and if it isn't, it should be easy to fix.

2.7 is also likely to be the long-term supported version from the 2.x 
branch, so supporting it is important IMHO.

David


Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-08 Thread David
On 06/09/2010 08:04 AM, Vincent Davis wrote:
> I do have limits.h in 10.4 sdk
>
> So what next. Ay Ideas?
> I had tried to build py 3.1.2 from source but that did not work either.

I had the same issue when I tried the python 3 branch on mac os x. I 
have not found the issue yet, but I am afraid it is quite subtle, and 
possibly a bug in python 3 distutils (plus some issues in numpy.distutils).

cheers,

David


Re: [Numpy-discussion] NumPy re-factoring project

2010-06-10 Thread David
On 06/11/2010 10:02 AM, Charles R Harris wrote:

>
>
> But for an initial refactoring it probably falls in the category of
> premature optimization. Another thing to avoid on the first go around is
> micro-optimization, as it tends to complicate the code and often doesn't
> do much for performance.

I agree it may be difficult to add this in the initial refactoring, 
but I don't think it falls into the micro-optimization category.

The whole idea of converting strided buffers into temporary contiguous 
areas is already in the ufunc machinery, though. Maybe a core API to do 
just that would be useful. I am not familiar with the ufunc API at all, 
so I am not sure how it would go.

cheers,

David


Re: [Numpy-discussion] NumPy re-factoring project

2010-06-10 Thread David
On 06/11/2010 09:27 AM, Sturla Molden wrote:

>
> Strided memory access is slow. So it often helps to make a temporary
> copy that are contiguous.

Ah, ok, I did not know this was called copy-in/copy-out; thanks for the 
explanation. I agree this would be a good direction to pursue, but it is 
maybe out of scope for the first refactoring,
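In miniature, copy-in/copy-out is just this (a toy illustration, not the actual ufunc internals):

```python
import numpy as np

a = np.arange(20.0).reshape(4, 5)
col = a[:, 1]                    # strided view (step of 5 doubles)
tmp = np.ascontiguousarray(col)  # "copy-in" to a contiguous temporary
tmp *= 2.0                       # work on the fast, contiguous buffer
a[:, 1] = tmp                    # "copy-out" back through the view
```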

cheers,

David



Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-13 Thread David
On 06/14/2010 08:10 AM, Vincent Davis wrote:

>
> I kinda get that, I posted on the nose list ask what source/version to
> install. I installed the most recent.

They have a special branch for py3k support, I think you have to use that,

cheers,

David


Re: [Numpy-discussion] numpy for jython

2010-06-13 Thread David
On 06/14/2010 10:23 AM, Jarl Haggerty wrote:
> Does anyone have any interest in a port of numpy to jython?

I am interested, and I think interest is growing thanks to the cloud 
computing fad, since a lot of this infrastructure is based on Java 
(Hadoop, HBase, Cassandra, to name a few). Being able to use numpy on top 
of those technologies would be extremely useful.

Now, I think reimplementing numpy from scratch is the wrong way to go if 
you want to be able to reuse existing code (scipy, etc.). There is a 
starting effort to refactor numpy to make it easier to port to new VMs, 
and the JVM would be an obvious candidate for testing those things.

cheers,

David

P.S.: Nitpick: please avoid sending rar archives; only a few people can 
read them easily. Use zip or tarballs.


Re: [Numpy-discussion] Technicalities of the SVN -> GIT transition

2010-06-14 Thread David
On 06/15/2010 12:15 PM, Vincent Davis wrote:
> Is this http://github.com/cournape/numpy going to become the git repo?

No, this is just a mirror of the svn repo, and used by various 
developers to do their own stuff.

> I guess I am wondering what the progress is of the transition?

It is progressing, and we will keep people posted when there is 
something to show,

David


Re: [Numpy-discussion] f2py with 4.1 under snow leopard

2010-06-16 Thread David
On 06/17/2010 03:05 AM, Charles سمير Doutriaux wrote:
> Hi Robert,
>
> You're right. I finally figured out which flag was missing I'm posting it 
> here as a reference for others:
> I needed to add:
> -undefined dynamic_lookup
> I also removed -fPIC
>
> Hope this helps somebody else.
>
> Do you think it would be possible to have LDFLAGS added to the default ones? 
> Or is there a way to obtain this by setting something else (EXTRA_LDFLAGS) or 
> something like that?

You can use numscons, which has this behavior and makes it easier to 
tweak flags,

cheers,

David


Re: [Numpy-discussion] Building Numpy: could not read symbols Bad value error

2010-06-20 Thread David
On 06/21/2010 06:30 AM, Michael Green wrote:
> On Sun, Jun 20, 2010 at 1:19 AM, Charles R Harris
>   wrote:
>>
>> Have you checked the actual configuration that was used to compile lapack,
>> *not* the command line flags. I never had with success with the automatic
>> builds using the compressed files. Also check the actual installed libraries
>>
>> 
>
> Yeah, the make.inc does have the -fPIC option:
>
> FORTRAN  = gfortran
> OPTS = -O2 -fPIC
> DRVOPTS  = $(OPTS)
> NOOPT= -O0 -fPIC
> LOADER   = gfortran
> LOADOPTS =
>
> And of course I see that -fPIC is actually being used:
> ...
> gfortran  -O2 -fPIC -c zsysvx.f -o zsysvx.o
> gfortran  -O2 -fPIC -c zsytf2.f -o zsytf2.o
> gfortran  -O2 -fPIC -c zsytrf.f -o zsytrf.o

Can you check that it is actually used for the file which causes the link 
failure (in atlas)?

David


Re: [Numpy-discussion] bool indices segv

2010-06-22 Thread David
On 06/23/2010 09:38 AM, Geoffrey Ely wrote:
>
> Not sure if Python itself is available through PyPI/distribute. I installed 
> Python 2.6.5 from source.
>
> As I understand it, setuptools does not work well for Numpy install, but 
> distribute is a bit better. Is that true?

No, it is even worse.

> Is it better to avoid setuptools/distribute/PyPI altogether?

Yes, unless you need their features (which in the case of numpy is 
mostly egg, since installing from pypi rarely works anyway).

David


[Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-01 Thread David
Hi,

I am pleased to announce the release 0.0.3 for Bento, the pythonic 
packaging solution.

Whereas the 0.0.2 release was mostly about getting the 
simplest-still-useful subset of distutils features, this new release 
adds quite a few significant features:

 - Add hooks to customize arbitrary stages in bento (there is a 
hackish example which shows how to use waf to build a simple C 
extension). The API for this is still in flux, though
 - Parallel and reliable build of C extensions through yaku build
   library.
 - One file distribution: no need for your users to install any new
   packages, just include one single file into your package to
   build with bento
 - Improved documentation
 - 2.4 -> 2.7 support, tested on linux/windows/mac os x

You can download bento on github: http://github.com/cournape/Bento

cheers,

David


Re: [Numpy-discussion] [ANN] Bento (ex-toydist) 0.0.3

2010-07-02 Thread David
On 07/02/2010 05:05 PM, Dag Sverre Seljebotn wrote:
> David Cournapeau wrote:
>> On Fri, Jul 2, 2010 at 1:56 PM, Robert Pyle  wrote:
>>
>>> Hi,
>>>
>>> While I agree that toydist needs a new name, Bento might not be a good
>>> choice.  It's already the name of a database system for Macintosh from
>>> Filemaker, an Apple subsidiary.  I'd be *very* surprised if the name
>>> Bento is not copyrighted.
>>>
>>
>> Can you copyright a word ? I thought this was the trademark part of
>> the law. For example, "linux" is a trademark owned by Linus Torvald.
>> Also, well known packages use words which are at least as common as
>> bento in English (sphinx, twisted, etc...), and as likely to be
>> trademarked. But IANAL...
>>
> There's been lots of discussions about this on the Sage list, since
> there's lots of software called Sage. It seems that the consensus of
> IANAL advice on that list is that as long as they're not competing in
> the same market they're OK. For instance, there's been some talk about
> whether it's OK to include economics utilities in Sage since there's an
> accounting software (?) called Sage -- that sort of thing.

Thanks. that's useful to know.

> Thanks for your work David, I'll make sure to check it out soon!

Note that cython's setup.py can be converted automatically - there is a 
small issue with the setup docstring, which contains rest syntax 
incompatible with the bento.info format (when empty lines have a different 
amount of space than the current indentation). But once you manually 
edit those, you can build an egg or a windows installer and install cython. 
In particular, the cython script is made into an executable like setuptools 
does, so cython is a bit more practical to use on windows.

cheers,

David


Re: [Numpy-discussion] First shot at svn->git conversion

2010-07-26 Thread David
On 07/26/2010 05:44 PM, Pauli Virtanen wrote:
> Mon, 26 Jul 2010 13:57:36 +0900, David Cournapeau wrote:
>> I have finally prepared and uploaded a test repository containing numpy
>> code:
>>
>> http://github.com/numpy/numpy_svn
>
> Some observations based on a quick look:
>
> 1)
>
> $ git branch -r
>origin/maintenance/1.1.x_5227
>origin/maintenance/1.5.x/doc/numpybook/comparison/pyrex
>origin/maintenance/1.5.x/doc/pyrex
>origin/svntags/0.9.6_2236
>
> These don't look right?

The 1.5.* ones are weird, especially the doc/* ones; I am not sure what 
the issue is. The weird one in svntags is because the tag has been 
"recreated". We could just remove them a posteriori, but it bothers me a 
bit not to know where they come from.

>
> 2)
>
> Would it be possible to get real Git tags for the tags in SVN?

Yes, but the converter manual advised against it, because svn has no 
real notion of tags: the git tag and the svn tag may not correspond 
to the same tree, and it seems there are a lot of funky corner cases 
(unlikely to happen in numpy, though).

One could just create the release tags, with a manual check?

> How much do we lose if we just drop the
> svntags/* stuff (it clutters up the branch -r output)?

We don't lose much if we put the tags as git tags.

> 3)
>
> Some people have their SVN commit account in format forename.surname --
> it might be OK to translate this to "Forename Surname" even if we do not
> hear back from them?
>
>   alan.mcintyre
>   chris.barker
>   chris.burns
>   christoph.weidemann
>   darren.dale
>   paul.ivanov
>   tim_hochberg

Sure


> Also, Ralf has the Git ID in format "rgommers<...email...>", but I guess
> that's correct?
>
> For other people (eg Pierre GM, Michael Droettboom) who might still be
> active contributors in the future, it could be nice to know what Git ID
> they are going to use.

Indeed. I would confess that's one of the main reasons to publish the 
temporary repository: to make the transition that much more real :)

cheers,

David


Re: [Numpy-discussion] First shot at svn->git conversion

2010-07-28 Thread David
On 07/28/2010 05:45 PM, Pauli Virtanen wrote:
> Wed, 28 Jul 2010 12:17:27 +0900, David Cournapeau wrote:
> [clip]
>>>>> http://github.com/numpy/numpy_svn
>>
>> I put a new repostory (same location)
>
> Some more notes:
>
> - 1.1.x branch is missing.
>
>This is maybe because in SVN something ugly was done with this branch?

No, that's because I made a mistake in the rule file...

>
> - Something is still funny with some of the tags
>
>$ (for tag in `git tag`; do V=`git log --oneline $tag \
>   |head -n200|wc -l`; echo "$V $tag"; done)| sort -n
>2 v1.1.0
>2 v1.1.1
>2 v1.1.1rc1
>2 v1.1.1rc2
>3 v1.0b1
>200 v0_2_0
>200 v0_2_2
>...

That's exactly the problem with converting svn "tags" to real tags - 
with the way svn works, there is no reason the tag corresponds to 
something in a branch. But here the issue is a mistake on my end. We 
have a small enough number of tags that we can check them by hand here.

I am updating the repo as we speak,

David


Re: [Numpy-discussion] Numpy 1.4.1 fails to build on (Debian) alpha and powepc

2010-07-29 Thread David
On 07/30/2010 06:47 AM, Sandro Tosi wrote:

> For the build logs it's easy:
>
> alpha: 
> https://buildd.debian.org/fetch.cgi?pkg=python-numpy&arch=alpha&ver=1%3A1.4.1-4&stamp=1280296333&file=log&as=raw
> powerpc: 
> https://buildd.debian.org/fetch.cgi?pkg=python-numpy&arch=powerpc&ver=1%3A1.4.1-4&stamp=1280297029&file=log&as=raw
>
> for powerpc "import numpy; numpy.test()" I've already sent you the
> output, want me to re-run them? for alpha, attached you can find the
> log for both 2.5 and 2.6; there are some errors/warnings but nothing
> too dramatic?

Wow, I am genuinely surprised that the alpha test suite has no errors 
(the 2.5 error has nothing to do with the fixes).

>
>> Also, if there is another issue preventing numpy 1.4.x integration on
>> debian and ubuntu, please speak up. Ideally, I would like to remove
>
> I don't think there is anything else (for now :) ) from the numpy
> side: Thanks a lot for the support!! Now on Debian we have to fix some
> packages to avoid breakages when we upgrade numpy in the future (the
> biggest issue was that dtype was extended with new fields at the end,
> but some packages were checking the size of dtype with the one the
> packge was compiled with (1.3.*) and failed).

Yes, we have improved our support here quite a bit in 1.4.x - we hope 
that those issues won't arise in the 1.x series anymore. Note also that 
if those dtype errors appear with pyrex/cython-generated code, using a 
more recent cython will prevent the error from happening (warnings are 
raised instead).

> As usual, Ubuntu will
> just sit and wait for us to do the work and then just sync it (sigh).
>
>> the need for downstream patches (none of them was necessary IIRC),
>
> Here is the list of the patches we currently have in the Debian
> package (you can look at them at [2]):
>
> 02_build_dotblas.patch
> - Patch to build _dotblas.c when ATLAS is not installed.
> -- dunno exactly what it does, it seems to infer _dotblas is compiled
> is ATLAS is missing

This is caused by not having atlas as a build dependency, I guess. 
Strictly speaking, dotblas only requires cblas, but we don't have the 
check in place for that. Since numscons already does this, and it has 
worked pretty well, maybe I will take the time to add it to 
numpy.distutils as well. But this has relatively little consequence, I think.

>
> 03_force_f2py_version.patch
> - force generation f2py postfixed with interpreter version
> -- Debian specific: we ship f2py2.5 and f2py2.6 and we make f2py a
> symlink towards f2py2.6

ok.


> 05_fix_endianness_detection.patch
> - Fix endianness detection: endian.h should be present on all Debian
> machines. This patch forces the use of endian.h, this reventing
> several reverse dependencies os Numpy from failing to build.
> -- Debian specific: we want to enforce the usage of endian.h file
> available on all of our architectures

This one has been fixed in the 1.4.x branch (and trunk of course)

>
> 07_bts585309_string_exceptions.diff
> - Remove string exceptions
> -- patch from trunk, we can remove it once a new release is out

This one as well

> In case of any doubts or so, don't hesitate to contact me: I'd be more
> than happy to give all the help I can.

great, thanks,

David


Re: [Numpy-discussion] floating point arithmetic issue

2010-07-30 Thread David
Hi Guillaume,

On 07/30/2010 07:45 PM, Guillaume Chérel wrote:
>Hello,
>
> I ran into a difficulty with floating point arithmetic in python.

This is not python specific, but true in any language which uses the 
floating point unit of your hardware, assuming you have "conventional" 
hardware.

> Namely
> that:
>
>   >>>  0.001 + 1 - 1
> 0.00099999999999988987
>
> And, as a consequence, in python:
>
>   >>>  0.001 + 1 - 1 == 0.001
> False
>
> In more details, my problem is that I have a fonction which needs to
> compute (a + b - c) % a. And for b == c, you would expect the result to
> be 0 whatever the value of a. But it isn't...

Indeed, it is not, and that's expected. There are various pitfalls in 
using floating point. Rationale and explanations:

http://docs.sun.com/source/806-3568/ncg_goldberg.html

> Is there any way to solve this?

Using % with floats is generally a bad idea IMO. You have not said what 
you are trying to do. The solution may be to use integers, or to do it 
differently, but we need more info,
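For what it's worth, two common ways out, depending on what is actually needed - exact rationals, or a tolerance on the residue (the `tol` value below is an arbitrary choice that has to fit the application):

```python
from fractions import Fraction

# Exact rational arithmetic sidesteps the rounding entirely:
a = Fraction(1, 1000)
assert (a + 1 - 1) % a == 0

# With floats, treat a residue close to 0 or close to a as zero:
def mod_near_zero(x, a, tol=1e-9):
    r = x % a
    return min(r, a - r) < tol

assert mod_near_zero(0.001 + 1 - 1, 0.001)
```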

cheers,

David


Re: [Numpy-discussion] distutils issue - python 3.1 on windows

2010-08-02 Thread David
On 08/03/2010 02:57 AM, Pauli Virtanen wrote:
> Mon, 02 Aug 2010 12:52:12 -0500, Robert Kern wrote:
> [clip]
>> I believe we avoided the inspect module because it is quite expensive to
>> import. It may not matter inside numpy.distutils, but be wary of
>> "fixing" things to use inspect elsewhere. It would be worth extracting
>> the commonly-used pieces of inspect (and hacks like this) into an
>> internal utility module that is fast to import.
>
> We actually have `numpy.compat._inspect` and
>
>   from numpy.compat import getargspec
>
> that could be used here.
>

Yep, it was added precisely to avoid the slow import of the upstream 
inspect,

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bento question: Fortran 90 & Cython support

2010-08-12 Thread David
Hi Kurt,

On 08/13/2010 03:40 AM, Kurt Smith wrote:
> Hi,
>
> I'm very interested in Bento and think it will be a killer project.
>
> My question: do you anticipate supporting Fortran 90 within Bento, or
> will that be delegated to an external build tool?

Bento does not do any building itself. The idea is that you will be 
able to plug in the build tool of your choice if desired, and bento will 
just give you a very high level description of your extensions.

Right now, I am working on yaku, which is a simple build tool designed 
as a library, and would be used by bento by default. If I can build 
scipy with it and keep it simple, I will integrate yaku and bento. This 
means at least basic fortran support. Otherwise, I may just give up on 
yaku - if it ends up being as big as, say, waf, there is really no point 
in it.

I was not using waf directly because prior to 1.6, using waf as a 
library was not as easy as expected (much better than scons, though), 
but this is changing in 1.6. This means I will also need to add fortran 
support to waf. I really like waf but was a bit concerned by the lack 
of usage - now that samba is working with it, the prospects of long term 
support for waf look much better 
(http://wiki.samba.org/index.php/Waf).

>  There are some
> intricacies with Fortran 90 that make it difficult to use with the
> usual configure-then-build ordering, specifically when the configure
> step depends on .mod files that don't exist until after compilation.

Could you expand on this? Nothing prevents the configure step from 
building some mod files necessary for configuration a priori.

>
> Also, what about projects with Pyrex/Cython sources: will Bento
> internally support .pyx files?

You can already build extensions with cython with bento+yaku. You just 
add .pyx files as sources in the Extension section of the bento.info, 
and it is then up to the build tool to deal with .pyx.

You can see an example in the port of nipy build to bento, which has 
cython files:

http://github.com/cournape/nipy/tree/bento_build

Note that even though the bento.info format itself is pretty stable, 
anything that goes into bscript files (to customize the build) keeps 
changing and is highly unstable. I still have no clear idea about the API 
(scipy's build idiosyncrasies keep breaking the assumptions I have made 
so far :) ).

To deal somewhat with the API instability, you can include a copy 
of bento+yaku in your project, as I have done in nipy. It is a 
self-contained file of about 350 kB (down to 80 kB if you 
don't care about supporting the building of windows installers).

cheers,

David


Re: [Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-16 Thread David
On 08/17/2010 01:58 PM, ashf...@whisperpc.com wrote:
> I'm having problems getting the GotoBLAS library (Nehalem optimized BLAS -
> http://www.tacc.utexas.edu/tacc-projects/gotoblas2/) working properly under
> the Python NumPy package (http://numpy.scipy.org/) on a quad-core Nehalem
> under FC10.
>
> The command used to build the library is:
>  make BINARY=64 USE_THREAD=1 MAX_CPU_NUMBER=4
>
> I'm limiting this to four cores, as I believe HyperThreading will slow it down
> (I've seen this happen with other scientific code).  I'll benchmark later to
> see whether or not HyperThreading helps.
>
> I built the library (it uses -fPIC), then installed it under /usr/local/lib64,
> and created the appropriate links:
>  # cp libgoto2_nehalemp-r1.13.a /usr/local/lib64
>  # cp libgoto2_nehalemp-r1.13.so /usr/local/lib64
>  # cd /usr/local/lib64
>  # ln -s libgoto2_nehalemp-r1.13.a libgoto2.a
>  # ln -s libgoto2_nehalemp-r1.13.so libgoto2.so
>  # ln -s libgoto2_nehalemp-r1.13.a libblas.a
>  # ln -s libgoto2_nehalemp-r1.13.so libblas.so

The .so files are only used when linking; at runtime the fully versioned 
files (e.g. .so.1.2.3) are generally the ones used. Which version exactly 
depends on your installation, but I actually advise you against creating 
those softlinks. You should instead specifically link numpy against the 
Goto library, by customizing site.cfg,

cheers,

David


Re: [Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-17 Thread David
On 08/17/2010 08:43 PM, Eloi Gaudry wrote:
> Peter,
>
> please below a script that will build numpy using a relevant site.cfg for 
> your configuration (you need to update GOTODIR and LAPACKDIR and PYTHONDIR):
>
> #!/bin/sh
>
> #BLAS/LAPACK configuration file
> echo "[blas]">   ./site.cfg
> echo "library_dirs = GOTODIR">>  ./site.cfg
> echo "blas_libs = goto2_nehalemp-r1.13">>  ./site.cfg
> echo "[lapack]">>  ./site.cfg
> echo "library_dirs = LAPACKDIR">>  ./site.cfg
> echo "lapack_libs = lapack">>  ./site.cfg
>
> #compilation variables
> export CC=gcc
> export F77=gfortran
> export F90=gfortran
> export F95=gfortran
> export LDFLAGS="-shared -Wl,-rpath=\'\$ORIGIN/../../../..\'"
> export CFLAGS="-O1 -pthread"
> export FFLAGS="-O2"
>
> #build
> python setup.py config
> python setup.py build
> python setup.py install
>
> #copy site.cfg
> cp ./site.cfg PYTHONDIR/lib/python2.6/site-packages/numpy/distutils/.

This should be PYTHONPATH, not PYTHONDIR. Also, on 64 bits, you need 
-fPIC in every *FLAGS variable.

cheers,

David


Re: [Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-17 Thread David
On 08/18/2010 07:39 AM, ashf...@whisperpc.com wrote:
> Eloi,
>
>> please below a script that will build numpy using a relevant site.cfg for 
>> your
>> configuration (you need to update GOTODIR and LAPACKDIR and PYTHONDIR):
>
>> #copy site.cfg
>> cp ./site.cfg PYTHONDIR/lib/python2.6/site-packages/numpy/distutils/.
>
> I believe this needs a $ prior to PYTHONDIR.
>
> I tried this, and the benchmark still came in at 8.5S.
>
> Any ideas?  How long should the following benchmark take on a Core-i7 930,
> with Atlas or Goto?
>
> time python -c "import numpy as N; a=N.random.randn(1000, 1000); N.dot(a, a)"

Do you have a _dotblas.so file? We only support _dotblas linked against 
atlas AFAIK, which means goto won't be used in that case. To check which 
libraries are used by your extensions, you should run ldd on the .so 
files (for example ldd .../numpy/linalg/lapack_lite.so).
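A quick Python-side probe can complement the ldd check - on numpy of that era, _dotblas only exists when numpy was built against a CBLAS such as ATLAS, and on builds (or modern versions) without it the import simply fails:

```python
# Probe for the accelerated dot: ImportError means numpy falls back to
# the unoptimized implementation (or, on modern numpy, the module is gone).
try:
    from numpy.core import _dotblas  # noqa: F401
    accelerated = True
except ImportError:
    accelerated = False
print("accelerated dot:", accelerated)
```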

cheers,

David


Re: [Numpy-discussion] ANN: NumPy 1.5.0 beta 2

2010-08-17 Thread David
On 08/18/2010 01:56 PM, Charles R Harris wrote:
>
>
> On Tue, Aug 17, 2010 at 9:11 PM, Christoph Gohlke wrote:
>
>
>
> On 8/17/2010 1:02 PM, Charles R Harris wrote:
>  >
>  >
>  > On Tue, Aug 17, 2010 at 1:38 PM, Christoph Gohlke wrote:
>  >
>  >
>  >
>  > On 8/17/2010 8:23 AM, Ralf Gommers wrote:
>  >
>  > I am pleased to announce the availability of the second
> beta of
>  > NumPy
>  > 1.5.0. This will be the first NumPy release to include
> support for
>  > Python 3, as well as for Python 2.7.
>  >
>  > Please try this beta and report any problems on the NumPy
>  > mailing list.
>  > Especially with Python 3 testing will be very useful. On
> Linux
>  > and OS X
>  > building from source should be straightforward, for
> Windows a binary
>  > installer is provided. There is one important known issue
> on Windows
>  > left, in fromfile and tofile (ticket 1583).
>  >
>  > Binaries, sources and release notes can be found at
>  > https://sourceforge.net/projects/numpy/files/
>  > <https://sourceforge.net/projects/numpy/files/>
>  >
>  > Enjoy,
>  > Ralf
>  >
>  >
>  > NumPy 1.5.0 beta 2 built with msvc9/mkl for Python 2.7 and
> 3.1 (32
>  > and 64 bit) still reports many (> 200) warnings and three
> known test
>  > failures/errors. Nothing serious, but it would be nice to
> clean up
>  > before the final release.
>  >
>  > The warnings are of the type "Warning: invalid value
> encountered in"
>  > for the functions reduce, fmax, fmin, logaddexp, maximum,
> greater,
>  > less_equal, greater_equal, absolute, and others. I do not see
> any of
>  > these warnings in the msvc9 builds of numpy 1.4.1.
>  >
>  >
>  > The warnings were accidentally turned off for earlier versions of
> Numpy.
>  > I expect these warnings are related to nans and probably due to
> problems
>  > with isnan or some such. Can you take a closer look? The fmax
> function
>  > should be easy to check out.
>  >
>  > 
>  >
>  > Chuck
>  >
>
>
> Thanks for the hint. Warnings are issued in the test_umath test_*nan*
> functions. The problem can be condensed to this statement:
>
>  >>> numpy.array([numpy.nan]) > 0
> Warning: invalid value encountered in greater
> array([False], dtype=bool)
>
>
> When using msvc, ordered comparisons involving NaN raise an exception
> [1], i.e. set the 'invalid' x87 status bit, which leads to the warning
> being printed. I don't know if this violates IEEE 754 or C99 standards
> but it does not happen with the gcc builds. Maybe
> seterr(invalid='ignore') could be added to the test_*nan* functions?
>
> [1] http://msdn.microsoft.com/en-us/library/e7s85ffb%28v=VS.90%29.aspx
>
>
> OK, this does seem to be the standard. For instance
>
> The isless macro determines whether its first argument is less than its
> second
> argument. The value of isless(x, y) is always equal to (x) < (y); however,
> unlike (x) < (y), isless(x, y) does not raise the ‘‘invalid’’ floating-point
> exception when x and y are unordered.

Yes, it is - but I cannot reproduce the INVALID FPU exception on Linux 
when using e.g. int a = (NaN > 1). I don't know what's up with that, as 
the glibc documentation says it should raise one,

cheers,

David


Re: [Numpy-discussion] numpy installation problems

2010-08-22 Thread David
On 08/23/2010 07:59 AM, martin djokovic wrote:
> Do you think I should reinstall gcc?

You most likely already have gcc, there is no need to reinstall it. The 
missing symbol refers to the math library from icc, so if you can find 
the missing libraries, you should be set:

1: get the name of the missing library by running ldd on the .so files 
which have missing symbols (fftpack_lite.so here). You should get a list 
of libraries, with some missing.
2: locate the missing libraries on your system.
3: if found, add the path of the libraries into LD_LIBRARY_PATH, e.g. 
export LD_LIBRARY_PATH=/opt/lib:$LD_LIBRARY_PATH inside your shell
4: if not found, then you may need to reinstall numpy from sources.
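Steps 1-3 can be sketched as a short shell session; here /bin/sh stands in for the broken extension module, and /opt/lib is a purely hypothetical location for the missing library:

```shell
#!/bin/sh
# Step 1: list the shared-library dependencies; on a broken install you
# would point ldd at fftpack_lite.so instead of /bin/sh.  Any line that
# ends in "not found" names a missing library.
ldd /bin/sh

# Steps 2-3: after locating the missing library, prepend its directory
# (hypothetical here) to the dynamic linker search path:
# export LD_LIBRARY_PATH=/opt/lib:$LD_LIBRARY_PATH
```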

cheers,

David


Re: [Numpy-discussion] inversion of large matrices

2010-08-30 Thread David
On 08/31/2010 11:19 AM, Dan Elliott wrote:
> Thanks for the reply.
>
> David Warde-Farley  cs.toronto.edu>  writes:
>> On 2010-08-30, at 11:28 AM, Daniel Elliott wrote:
>>> Large matrices (e.g. 10K x 10K)
>>
>>> Is there a function for performing the inverse or even the pdf of a
>>> multinomial normal in these situations as well?
>>
>> There's a function for the inverse, but you almost never want to use it,
> especially if your goal is the
>> multivariate normal density. A basic explanation of why is available here:
>> http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
>>
>> In the case of the multivariate normal density the covariance is assumed to 
>> be
> positive definite, and thus a
>> Cholesky decomposition is appropriate. scipy.linalg.solve() (NOT
> numpy.linalg.solve()) with the
>> sym_pos=True argument will do this for you.
>
> You don't think this will choke on a large (e.g. 10K x 10K) covariance matrix?

It will work if you have enough memory. I have worked with slightly 
bigger matrices, but I have 12 GB on my machine. You need some patience, 
though :)

cheers,

David


Re: [Numpy-discussion] Can't install numpy

2010-09-12 Thread David
On 09/13/2010 01:45 AM, Felix Stödter wrote:
> Hi numpy-team,
>
> unfortunately I can't install numpy. When I try to execute the exe-file I 
> get a message like "Python 2.6 required". Python is installed within another 
> program (Marc/Mentat). Can't it find it because of that? I know that it is 
> possible to install numpy with this Python in Marc/Mentat. But how? Do you 
> know what I can do about it?

What are you trying to do?

If Marc/Mentat uses numpy internally (I don't know anything about 
that product) and you want to update numpy there, I strongly advise you 
*not* to install anything there unless they explicitly support it.

If you just want to use numpy on its own, you should install it on top 
of a Python installed outside Marc/Mentat,

cheers,

David


Re: [Numpy-discussion] Can we freeze the subversion repository and move to github this week?

2010-09-15 Thread David
On 09/15/2010 04:21 PM, Gael Varoquaux wrote:
> On Tue, Sep 14, 2010 at 04:51:39PM -0700, Fernando Perez wrote:
>>> I've heard several people say that once they used git, they can't imagine 
>>> going back to SVN.
>
>> As you were writing this, Min RK and I were discussing on IRC:
>
>>   there are so many people who provide patches on github, since
>> the process is so easy
>>   I couldn't think of going back from git to svn now.
>
> While I find that git has its limitations (I have made more user
> mistakes with data-loss consequences using git than with any other VCS)

It is very difficult to actually lose data with git thanks to the 
reflog: 
http://www.gitready.com/intermediate/2009/02/09/reflog-your-safety-net.html
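A throwaway demonstration of the safety net (scratch repository, file contents invented):

```shell
#!/bin/sh
set -e
# Scratch repository with two commits.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo first > file.txt
git add file.txt
git commit -qm "first"
echo second > file.txt
git commit -qam "second"

# "Lose" the second commit with a hard reset...
git reset -q --hard HEAD~1

# ...then recover it: the reflog still records where HEAD used to be.
git reset -q --hard 'HEAD@{1}'
cat file.txt
```

After the second reset, file.txt again contains "second" — nothing was lost.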

cheers,

David


Re: [Numpy-discussion] Can we freeze the subversion repository and move to github this week?

2010-09-15 Thread David
On 09/15/2010 05:58 PM, Gael Varoquaux wrote:
> On Wed, Sep 15, 2010 at 05:10:53PM +0900, David wrote:
>> It is very difficult to actually lose data with git thanks to the
>> reflog:
>> http://www.gitready.com/intermediate/2009/02/09/reflog-your-safety-net.html
>
> Unless you use the 'force' switches. I am trying very hard not to
> use them, as I have been advised by several good git users.

Well, yes and no. For example, you cannot lose commits with any 
command that I know of, thanks to the reflog (and this is true even if 
you use really "hard" commands like git reset --hard somecommit).

>
> But I keep getting in situations where people tell me that I need to use
> one of these switches. It's always a bit hard to explain how I get there,
> but I'll try and do so, so that knowledgeable people can advice me on the
> right solutions.
>
> Here is an example (in chronological order):
>
>   1) Branch out a feature branch 'feature' from master.
>
>   2) Develop in feature branch (cycle of code/commit...)
>
>   3) Hot bug on master, checkout master, attempt to fix bug. Bug fix
>  induces other bugs, cycle of code/commit to fix them.
>
>   4) Decide that bug fix is not mature enough to push, but feature branch
>  got reviewed and is.
>
>   5) Discover that I can't push from feature to origin/master. Conclude
>  that I must merge back in master locally.
>
> Now I have a problem: at step 1 I should have created a branch. I did
> not. I need to go back and create a branch. This was happening at a
> sprint, and people that know git better than me helped me out. But the
> only way we found to sort this out was to create a branch at step 1,
> merge the branch with master, and 'reset -a' master at step 1. I thought
> it over quite a few times, and did not lose any data. However, I was
> very uncomfortable with the process (the 'reset -a').

I am not sure I understand your issue exactly. Do you mean you put some 
commits in the wrong branch ? I don't see how reset is related to that - 
I mean, I have used git for two years, and I don't even know what reset 
-a does, much less used it :)

cheers,

David


Re: [Numpy-discussion] Numpy SVN frozen; move to Git

2010-09-16 Thread David
On 09/16/2010 10:30 PM, Pauli Virtanen wrote:
> Thu, 16 Sep 2010 08:58:46 +, Pauli Virtanen wrote:
>> The next things on the TODO list:
>>
>>- Update any links that point to http://svn.scipy.org/svn/numpy
>>  or talk about SVN.
>>
>>  E.g. numpy.org needs updating.
>>
>>- Put up documentation on how to contribute to Numpy via Git.
>>  Gitwash-generated stuff could be added to the Numpy docs.
>>
>>- Decide if the `numpy-svn` email list is still needed.
>>  Github has RSS feeds for the repositories (but it can also send
>>  email to the list, if we want to keep the list alive).
>>
>>- Core devs: create accounts on github.com and ask for push
>>permissions
>>  to the numpy repository.
>>
>>  Or, just push your changes to your personal forks, and send pull
>>  requests -- I'm sure we have enough people to handle it also this
>>  way.
>>
>>- Trac integration -- our bug DB will still stay at projects.scipy.org
>>  but other Trac functionality can maybe be integrated.
>
> And a few more:
>
>- Buildbot and buildslaves. Maybe easiest by using Github's SVN
>  interface?
>
>  http://github.com/blog/626-announcing-svn-support

That's what I was thinking too - but it seems that the svn support is 
somewhat flaky. I keep getting "svn: REPORT of 
'/cournape/numpy.git/!svn/vcc/default': 200 OK (http://svn.github.com)". 
It may be due to some network configuration, I don't know. It would be 
good if someone else could check,

cheers,

David


Re: [Numpy-discussion] numpy.distutils

2010-10-11 Thread David
On 10/12/2010 03:39 AM, Charles Doutriaux wrote:
>   Hi David,
>
> The behaviour is there in regular distutils, it is apparently a known
> bug, I'm copy/pasting their answer in there for information.

I saw the discussion, thanks for the update.

cheers,

David


Re: [Numpy-discussion] Commit rights on github

2010-10-11 Thread David
On 10/12/2010 08:48 AM, Charles R Harris wrote:
>
>
> On Mon, Oct 11, 2010 at 5:09 PM, Pierre GM  <mailto:pgmdevl...@gmail.com>> wrote:
>
>
> On Oct 12, 2010, at 1:06 AM, Pauli Virtanen wrote:
>
>  > Mon, 11 Oct 2010 23:30:31 +0200, Pierre GM wrote:
>  >> Would any of you mind giving me commit rights on github? My
> handle is
>  >> pierregm. Thanks a million in advance.
>  >
>  > Granted.
>
> Got it, thanks again!
>
>
> Umm, I think your first commit changed a lot more than you intended.

Indeed. Pierre, please revert this commit, and then commit what you 
intended:

git revert a14dd542532d383610c1b01c5698b137dd058fea -m 2 # will revert 
all your changes
git cherry-pick -n 61d945bdb5c9b2b3329e1b8468b5c7d0596dd9fc # apply the 
changes introduced by 61d945..., but do not commit

Then, check that you don't add unnecessary files (eclipse files) before 
committing again. A good way to check what you are about to commit is to 
do git diff --stat --cached,
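That last check, in a scratch repository (file names invented — .project plays the part of an unwanted eclipse file):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo "real fix" > fix.py
echo "IDE noise" > .project
git add fix.py .project

# Review exactly what is staged before committing:
git diff --stat --cached

# Unstage the file that slipped in, then commit only the fix.
git rm -q --cached .project
git commit -qm "fix only"
git ls-tree --name-only HEAD
```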

cheers,

David


Re: [Numpy-discussion] Commit rights on github

2010-10-11 Thread David
On 10/12/2010 11:12 AM, Charles R Harris wrote:
>
>
> On Mon, Oct 11, 2010 at 7:31 PM, David  <mailto:da...@silveregg.co.jp>> wrote:
>
> On 10/12/2010 08:48 AM, Charles R Harris wrote:
>  >
>  >
>  > On Mon, Oct 11, 2010 at 5:09 PM, Pierre GM  <mailto:pgmdevl...@gmail.com>
>  > <mailto:pgmdevl...@gmail.com <mailto:pgmdevl...@gmail.com>>> wrote:
>  >
>  >
>  > On Oct 12, 2010, at 1:06 AM, Pauli Virtanen wrote:
>  >
>  > > Mon, 11 Oct 2010 23:30:31 +0200, Pierre GM wrote:
>  > >> Would any of you mind giving me commit rights on github? My
>  > handle is
>  > >> pierregm. Thanks a million in advance.
>  > >
>  > > Granted.
>  >
>  > Got it, thanks again!
>  >
>  >
>  > Umm, I think your first commit changed a lot more than you intended.
>
> Indeed. Pierre, please revert this commit, and then commit what you
> intended:
>
> git revert a14dd542532d383610c1b01c5698b137dd058fea -m 2  # will revert
> all your changes
> git cherry-pick -n 61d945bdb5c9b2b3329e1b8468b5c7d0596dd9fc # apply the
> changes introduced by 61d945..., but do not commit
>
> Then, check that you don't add unnecessary files (eclipse files) before
> committing again. A good way to check what you are about to commit is to
> do git diff --stat --cached,
>
>
> I can't quite figure out what commit a14dd542532d383610c1
> <http://github.com/numpy/numpy/commit/a14dd542532d383610c1b01c5698b137dd058fea>
> did or where it applied. Any hints?

It is a merge commit:

"""
commit a14dd542532d383610c1b01c5698b137dd058fea
Merge: 61d945b 11ee694
Author: pierregm 
Date:   Mon Oct 11 23:02:10 2010 +0200

 merging refs/remotes/origin/master into HEAD
"""

You can see that it has two parents (61d945b and 11ee694). 61d945b is 
another commit by PGM, whereas 11ee694 refers to the last "clean" commit 
by Pauli. For example, you can see this using the --graph option of git log:

*   commit a14dd542532d383610c1b01c5698b137dd058fea
|\  Merge: 61d945b 11ee694
| | Author: pierregm 
| | Date:   Mon Oct 11 23:02:10 2010 +0200
| |
| | merging refs/remotes/origin/master into HEAD
| |
| * commit 11ee694744f2552d77652ed929fdc2b4ccca6843
| | Author: Pauli Virtanen 
| | Date:   Mon Oct 11 00:40:13 2010 +0200
| |

...
| * commit 4510c4a81185eed7e144f75ec5121f80bc924a6e
| | Author: Pauli Virtanen 
| | Date:   Fri Oct 1 11:15:38 2010 +0200
| |
| | sphinxext: fix Other Parameters section parsing in docscrape
| |
* | commit 61d945bdb5c9b2b3329e1b8468b5c7d0596dd9fc
|/  Author: pierregm 
|   Date:   Mon Oct 11 22:24:26 2010 +0200
|
|   Add more tests to test_eq_w_None (bug #1493)


So by doing:

git revert a14dd542532d383610c1b01c5698b137dd058fea -m 2

You are telling git to revert the merge relative to its second parent 
(here 11ee694), effectively going back to Pauli's last clean commit. I 
don't know exactly what PGM did; maybe he mistakenly "unmerged" all the 
changes made from 61d945 and committed this.

In that case, a way to avoid this mistake is to do the following:

   * make changes on your own clone of the git repository
   * once you have a set of commits, first update your branch from 
upstream (a bit like svn up) with git pull: at that point, git will try 
to merge your changes and the upstream changes if they have diverged. 
You could also use the --rebase option of git pull, with the usual 
rebase caveats w.r.t. already-published changes

See here for a relatively clear explanation: 
http://yehudakatz.com/2010/05/13/common-git-workflows/, "Updating from 
the remote" section,
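The update-from-upstream step can be exercised locally, simulating the remote with a plain clone (all names invented):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)

# Simulated upstream with one commit.
git init -q "$work/upstream"
cd "$work/upstream"
git config user.email you@example.com
git config user.name you
echo base > base.txt
git add base.txt
git commit -qm "base"

# Clone it, then let upstream move ahead by one commit.
git clone -q "$work/upstream" "$work/clone"
echo more >> base.txt
git commit -qam "upstream change"

# Commit your own work in the clone...
cd "$work/clone"
git config user.email you@example.com
git config user.name you
echo mine > mine.txt
git add mine.txt
git commit -qm "my change"

# ...then replay it on top of upstream rather than creating a merge commit.
git pull -q --rebase
git log --oneline
```

The resulting history is linear: base, upstream change, my change — no merge commit.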

cheers,

David


Re: [Numpy-discussion] Development workflow

2010-10-12 Thread David
On 10/12/2010 06:22 PM, Matthew Brett wrote:
> Hi,
>
>> I think there are two issues here:
>>
>> (A) How to be sensible and presentable
>>
>> (B) How and when your stuff gets into master
>
> A very useful distinction - thanks for making it.
>
>> For (A) I'm following the same workflow I had with the git mirror:
>>
>> 1. For *every* change, create a separate topic branch.
>>
>> 2. Work on it until the feature/bugfix is ready.
>>
>> 3. Push it to my own github clone for review/backup purposes if necessary.
>>
>> 4. If necessary, rebase (not merge!) on master when developing
>>to keep in stride.
>>
>> 5. When ready, (i) rebase on master, (ii) check that the result is
>>sensible, and (iii) push from the topic branch as new master.
>>
>> In this case, since all recent changes are just unrelated stand-alone
>> bugfixes, this produces something that looks very much like SVN log :)
>>
>> I think of the above, 1-4 are okay in all cases. 5 is then perhaps not so
>> absolute, as one could also do a merge if there are several commits. I
>> 100% endorse Fernando's recommendations:
>>
>> http://mail.scipy.org/pipermail/ipython-dev/2010-October/006746.html
>>
>> This really sounds like best-practice to me, and it's even empirically
>> tested!
>
> OK - so it seems to me that you agree with Fernando's recommendations,
> and that's basically the same as what Stefan was proposing (give or
> take a rebase), and David agreed with Stefan.   So - really - everyone
> agrees on the following - work on topic branches - don't merge from
> trunk - rebase on trunk if necessary.  I think _insisting_ on rebase
> on trunk before merge with trunk is a little extreme (see follow-up to
> ipython thread) - but it's not a big deal.
>
>> Then there's the second question (B) on when core devs should push
>> changes. When ready, when reviewed, or only before release?
>>
>> I would be open even for the "radical" never-push-your-own-changes
>> solution.
>>
>> I think we could even try it this way for the 1.5.1 release. If it seems
>> that unhandled pull requests start to accumulate (which I don't think
>> will happen), we could just reverse the policy.
>
> OK - right - that's the second big issue and obviously that's at the
> heart of thing.  I think that splits into two in fact:
>
> i) How often to merge into trunk
> ii) Who should merge into trunk
>
> At the extreme, you have the SVN model where the answers are:
>
> i) a merge for almost every commit
> ii) by the person who wrote the code
>
> and I thought that we'd decided that we didn't want that because trunk
> started getting unpredictable and painful to maintain.
>
> At the other end is the more standard DVCS-type workflow:
>
> i) merges by branches (which might have only a few commits)
> ii) by small team of people who are responsible for overseeing trunk.
> And rarely by the person who wrote the code
>
> So - is that a reasonable summary?
>
> Does anyone disagree with Pauli's never-push-your-own-changes suggestion?

I think it is a little too extreme for trivial changes like one-liners 
and the like, but it is a good default rule (that is, if you are not 
sure, don't push your own changes).

cheers,

David


Re: [Numpy-discussion] Development workflow

2010-10-12 Thread David
On 10/12/2010 06:48 PM, Pierre GM wrote:
>
> On Oct 12, 2010, at 11:32 AM, Matthew Brett wrote:
>>
>> I think the only possible lesson that might be drawn is that it
>> probably would have helped you as it has certainly helped me, to have
>> someone scan the set of changes and comment - as part of the workflow.
>
> That, and a nice set of instructions...
>
> Till I'm at it, would there be anybody patient to hold my hand and tell me 
> how to backport changes from one branch to another ?

This is easy:

1. go into the target branch
2. git cherry-pick commit_to_backport

The -x option to cherry-pick may be used to mention the original commit 
message in the backported one. By default, cherry-pick commits for you 
(you can use -n to avoid auto-committing).
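The recipe end-to-end in a scratch repository (branch and file names invented):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo v1 > lib.txt
git add lib.txt
git commit -qm "initial"

# The maintenance branch forks off here.
git branch maint

# The fix lands on the development branch first.
echo fix >> lib.txt
git commit -qam "fix bug"
fix_commit=$(git rev-parse HEAD)

# 1. go into the target branch; 2. cherry-pick, with -x to record the origin.
git checkout -q maint
git cherry-pick -x "$fix_commit"
git log -1
```

The backported commit's message ends with "(cherry picked from commit ...)".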

> Corollary: how do I branch from a branch ?

You use the branch command:

git branch target_branch source_branch

But generally, if you want to create a new branch to start working on 
it, you use the -b option of checkout:

git checkout -b target_branch source_branch

which is equivalent to

git branch target_branch source_branch
git checkout target_branch

cheers,

David


Re: [Numpy-discussion] numpy installation in ubuntu

2010-10-18 Thread David
On 10/18/2010 10:45 PM, Pauli Virtanen wrote:
> Mon, 18 Oct 2010 09:07:42 -0400, Ian Goodfellow wrote:
>
>> To do a standard installation, run
>> sudo python setup.py install
>> from inside the numpy directory
>
> Preferably,
>
>   sudo python setup.py install --prefix=/usr/local
>
> and then you don't mess up your package manager.

Ubuntu actually does this by default (i.e. the default prefix expands to 
/usr/local on Ubuntu and Debian).

But I think we should tell people to use the --user option by default 
for Python 2.6 and above: that's the safest method, does not require 
sudo and works on every platform,
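For reference, a --user install lands in the per-user site-packages, whose location the site module reports (spelled python3 below; on a 2010-era box the interpreter would simply be python, and getusersitepackages needs 2.7+):

```shell
#!/bin/sh
# Per-user install, no privileges needed:
#   python setup.py install --user
# Where it ends up:
python3 -c "import site; print(site.getusersitepackages())"
```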

cheers,

David


Re: [Numpy-discussion] Atlas build issues

2010-10-20 Thread David
On 10/20/2010 04:23 PM, Gael Varoquaux wrote:
> I am really sorry to be landing on the mailing list with Atlas build
> issues. I usually manage by myself to build Atlas, but this time I have
> been fighting for a couple of days with little success.
>
> The reason I need to build Atlas is that our work computers are stuck on
> Mandriva 2008.0, in which the version of Atlas packaged by the system is
> not usable.
>
> Anyhow, I lost quite a while with Atlas 3.9.26, for which I was never
> able to build libraries that could be used in a '.so' (it seemed that the
> -fPIC was not working, but really I don't understand what was going on in
> the maze of makefiles). After that I switched to 3.9.25, which I have
> already gotten working on another system (Mandriva 2008, but 32
> bits). Now everything builds, but I get a missing symbol at the import:
>
> from numpy.linalg import lapack_lite
> ImportError: /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so:
> undefined symbol: zgesdd_

The two first things to check in those cases (atlas or not):

- ldd /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so: does it 
load the libraries you think you are loading?
- nm atlas_libraries | grep zgesdd_ for every library in atlas (I don't 
know how the recent ones work, but this function should normally be in 
libf77blas.so)

The way to get -fPIC everywhere used to be -Fa alg -fPIC during 
configure, but that may have changed.
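When nm is not at hand, the same kind of symbol check can be done from Python via ctypes; demonstrated here on libm and sin, since atlas may not be installed on this machine (for the real problem you would grep the atlas/lapack libraries for zgesdd_):

```shell
#!/bin/sh
python3 -c "
import ctypes, ctypes.util
# Load the math library and ask whether it exports a given symbol.
lib = ctypes.CDLL(ctypes.util.find_library('m'))
print(hasattr(lib, 'sin'))
"
```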

David


Re: [Numpy-discussion] Atlas build issues

2010-10-20 Thread David
On 10/20/2010 04:39 PM, David wrote:
> On 10/20/2010 04:23 PM, Gael Varoquaux wrote:
>> I am really sorry to be landing on the mailing list with Atlas build
>> issues. I usually manage by myself to build Atlas, but this time I have
>> been fighting for a couple of days with little success.
>>
>> The reason I need to build Atlas is that our work computers are stuck on
>> Mandriva 2008.0, in which the version of Atlas packaged by the system is
>> not usable.
>>
>> Anyhow, I lost quite a while with Atlas 3.9.26, for which I was never
>> able to build libraries that could be used in a '.so' (it seemed that the
>> -fPIC was not working, but really I don't understand what was going on in
>> the maze of makefiles). After that I switched to 3.9.25, which I have
>> already gotten working on another system (Mandriva 2008, but 32
>> bits). Now everything builds, but I get a missing symbol at the import:
>>
>> from numpy.linalg import lapack_lite
>> ImportError: /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so:
>> undefined symbol: zgesdd_
>
> The two first things to check in those cases (atlas or not):
>
>   - ldd /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so: does it
> load the libraries you think you are loading?
>   - nm atlas_libraries | grep zgesdd_ for every library in atlas (I don't
> know how the recent ones work, but this function should normally be in
> libf77blas.so)

Sorry, it should be in lapack, not the f77 blas wrapper,

David


Re: [Numpy-discussion] ANN: NumPy 1.5.1 release candidate 1

2010-10-25 Thread David
On 10/26/2010 08:47 AM, Ralf Gommers wrote:
> On Tue, Oct 26, 2010 at 12:35 AM, René Dudfield  wrote:
>> hi,
>>
>> this is another instance of a bug caused by the 'no old file handling'
>> problem with distutils/numpy.
>
> You're talking about the datetime tests below? They don't exist in
> 1.5.x, that's simply a file left over from an install of the master
> branch.

I think that's what he meant (although the issue has nothing to do with 
numpy and is solely caused by distutils),

cheers,

David


Re: [Numpy-discussion] Precision difference between dot and sum

2010-11-01 Thread David
On 11/02/2010 08:30 AM, Joon wrote:
> Hi,
>
> I just found that using dot instead of sum in numpy gives me better
> results in terms of precision loss. For example, I optimized a function
> with scipy.optimize.fmin_bfgs. For the return value for the function, I
> tried the following two things:
>
> sum(Xb) - sum(denominator)
>
> and
>
> dot(ones(Xb.shape), Xb) - dot(ones(denominator.shape), denominator)
>
> Both of them are supposed to yield the same thing. But the first one
> gave me -589112.30492110562 and the second one gave me -589112.30492110678.

Those are basically the same number: the minimal spacing between two 
double floats at this magnitude is ~1e-10 (given by the function 
np.spacing(the_number)), which is essentially the order of magnitude of 
the difference between your two numbers.
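The effect is easy to reproduce with any two mathematically equal reductions evaluated in different orders — here a stdlib-only one-liner, in the same spirit as the python -c benchmark earlier in the thread (math.ulp, the stdlib analogue of np.spacing, needs Python 3.9+):

```shell
#!/bin/sh
# Two equal sums, accumulated differently, land on neighbouring floats;
# their difference is on the order of one spacing (ulp).
python3 -c "
import math
xs = [0.1] * 10
a = sum(xs)        # left-to-right accumulation
b = math.fsum(xs)  # correctly rounded sum
print(a == b, abs(a - b) <= math.ulp(b))
"
```

This prints False True: the two results differ, but only by about one unit in the last place.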

> I was wondering if this is well-known fact and I'm supposed to use dot
> instead of sum whenever possible.

You should use dot instead of sum when applicable, but for speed 
reasons, essentially.

>
> It would be great if someone could let me know why this happens.

They don't use the same implementation, so such tiny differences are 
expected - getting exactly the same result would have been surprising, 
actually. You may be surprised by the difference for such a trivial 
operation, but keep in mind that dot is implemented with highly 
optimized CPU instructions (that is, if you use ATLAS or a similar 
library).

cheers,

David


Re: [Numpy-discussion] Path to numpy installation

2010-11-02 Thread David
On 11/02/2010 05:31 PM, Juanjo Gomez Navarro wrote:
>   Ok, so in your opinion I have two independent python installations?
> That's possible... The problem is that I want to use ipython, and this
> interpreter seems to take the wrong version by default...
>
> Do you think it is safe just to delete the folder
> /System/Library/Frameworks/Python.framework to «uninstall» the wrong
> version?

Not really - /System is used for system stuff, as its name suggests, and 
removing it may break unrelated things.

David


Re: [Numpy-discussion] ~2**32 byte tofile()/fromfile() limit in 64-bit Windows?

2010-11-03 Thread David
On 11/04/2010 09:18 AM, Ralf Gommers wrote:

> To me it seems tofile/save being broken for some use cases is a more
> serious problem than the other patches proposed. Also the change only
> affects 64-bit Windows. So I think it can go in if it's merged in time
> (i.e. within a couple of days). I'd prefer someone more familiar with
> that particular code to review and merge it.

I don't think the code is appropriate as is. I can take a look at it 
this weekend, but not before,

cheers,

David


Re: [Numpy-discussion] Anyone with Core i7 and Ubuntu 10.04?

2010-11-08 Thread David
Hi Ian,

On 11/08/2010 11:18 PM, Ian Goodfellow wrote:
> I'm wondering if anyone here has successfully built numpy with ATLAS
> and a Core i7 CPU on Ubuntu 10.04. If so, I could really use your
> help. I've been trying since August (see my earlier messages to this
> list) to get numpy running at full speed on my machine with no luck.

Please tell us what error you got - saying that something did not work 
is really not useful for helping you. You need to say exactly what 
fails, and which steps you followed before that failure.

> The Ubuntu packages don't seem very fast, and numpy won't use the
> version of ATLAS that I compiled. It's pretty sad; anything that
> involves a lot of BLAS calls runs slower on this 2.8 ghz Core i7 than
> on an older 2.66 ghz Core 2 Quad I use at work.

One simple solution is to upgrade to Ubuntu 10.10, which finally has a 
working atlas package, thanks to the work of the Debian packagers. There 
is a version compiled for the i7,

cheers,

David


Re: [Numpy-discussion] new memcpy implementation exposes bugs in some software.

2010-11-11 Thread David
On 11/11/2010 10:54 AM, Charles R Harris wrote:
> Hi All,
>
> Apparently the 64 bit version of memcpy in the Fedora 14 glibc will do
> the copy in the downwards rather than the usual upwards direction on
> some processors. This has exposed bugs where the the source and
> destination overlap in memory. Report and discussion can be found at
> fedora bugzilla <https://bugzilla.redhat.com/show_bug.cgi?id=638477>. I
> don't know that numpy has any problems of this sort but it is worth
> keeping in mind.

It would be kind of cool to get a bug report from Linus, though :)

David


Re: [Numpy-discussion] NumPy 1.5.1 on RedHat 5.5

2010-11-29 Thread David
On 11/30/2010 03:32 AM, David Brodbeck wrote:
> I'm trying to install NumPy 1.5.1 on RedHat 5.5 and I'm having trouble
> getting it to find the ATLAS libraries.  I did a lot of Googling but
> didn't find anything that helped...also looked through the install
> instructions, but they focus mainly on Ubuntu.
>
> The problem I'm having is it's looking for the libraries in the right
> location, but not finding them.  e.g.:
>
> atlas_blas_info:
>libraries f77blas,cblas,atlas not found in /opt/python-2.5/lib
>libraries f77blas,cblas,atlas not found in /usr/local/lib64
>libraries f77blas,cblas,atlas not found in /usr/local/lib
>libraries f77blas,cblas,atlas not found in /usr/lib64/atlas
>libraries f77blas,cblas,atlas not found in /usr/lib64/sse2
>libraries f77blas,cblas,atlas not found in /usr/lib64
>libraries f77blas,cblas,atlas not found in /usr/lib/sse2
>libraries f77blas,cblas,atlas not found in /usr/lib
>NOT AVAILABLE
>
> ...and yet...
>
> bro...@patas:~/numpy-1.5.1$ locate libf77blas
> /usr/lib64/atlas/libf77blas.so.3
> /usr/lib64/atlas/libf77blas.so.3.0
> bro...@patas:~/numpy-1.5.1$ locate libcblas
> /usr/lib64/atlas/libcblas.so.3
> /usr/lib64/atlas/libcblas.so.3.0
> bro...@patas:~/numpy-1.5.1$ locate libatlas
> /usr/lib64/atlas/libatlas.so.3
> /usr/lib64/atlas/libatlas.so.3.0

the *.so.N.M files are enough for running binaries, but you need the 
*.so names to link against a library. Those are generally provided in 
the -devel RPMs on RH distributions,
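If installing the -devel packages is not an option, the usual workaround is to create the missing unversioned symlinks by hand. A sketch using stand-in files in a scratch directory (on a real system the libraries live under /usr/lib64/atlas):

```shell
# Stand-ins for the installed runtime libraries, so the sketch is
# self-contained; on a real system these already exist.
mkdir -p /tmp/atlas-demo
cd /tmp/atlas-demo
touch libf77blas.so.3.0 libcblas.so.3.0 libatlas.so.3.0

# Create the unversioned *.so names that the linker searches for.
for lib in f77blas cblas atlas; do
    ln -sf "lib${lib}.so.3.0" "lib${lib}.so"
done

ls -l libatlas.so
```

Prefer the -devel RPMs when they are available: hand-made symlinks are not tracked by the package manager.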

cheers,

David
>
> So the libraries are there, and they're where NumPy is looking for
> them, but it's still not finding them?  Clearly I'm missing something,
> but I'm not sure what.
>



Re: [Numpy-discussion] A Cython apply_along_axis function

2010-12-01 Thread David
Hi Keith,

On 12/02/2010 04:47 AM, Keith Goodman wrote:
> It's hard to write Cython code that can handle all dtypes and
> arbitrary number of dimensions. The former is typically dealt with
> using templates, but what do people do about the latter?

The only way that I know to do that systematically is with iterators. 
There is a relatively simple example in scipy/signal (lfilter.c.src).

I wonder if it would be possible to add better support for numpy 
iterators in Cython...
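As a rough pure-Python illustration of what "arbitrary number of dimensions" means here: numpy's own iteration helpers give one code path for any ndim, which is exactly what typed Cython buffers make awkward (the function below is a toy, not from scipy):

```python
import numpy as np

def square_any_ndim(a):
    # One implementation for 1-D, 2-D, ..., n-D inputs: iterate over
    # (index, value) pairs instead of writing one loop nest per ndim.
    out = np.empty_like(a)
    for idx, v in np.ndenumerate(a):
        out[idx] = v * v
    return out

print(square_any_ndim(np.arange(6).reshape(2, 3)))
```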

cheers,

David


Re: [Numpy-discussion] MultiIter version of PyArray_IterAllButAxis ?

2010-12-01 Thread David
On 12/02/2010 12:35 PM, John Salvatier wrote:
> Hello,
>
> I am writing a UFunc creation utility, and I would like to know: is
> there a way to mimic the behavior ofPyArray_IterAllButAxis for multiple
> arrays at a time?

Is there a reason why creating a separate iterator for each array is not 
possible ?

cheers,

David


Re: [Numpy-discussion] Refactor fork uses the ./configure, make, make install process.

2010-12-05 Thread David
On 12/05/2010 05:57 AM, Dag Sverre Seljebotn wrote:
> On 12/04/2010 09:11 PM, Charles R Harris wrote:
>>
>>
>> On Sat, Dec 4, 2010 at 12:59 PM, Ilan Schnell > <mailto:ischn...@enthought.com>> wrote:
>>
>> Yes, numpy-refactor builds of top of libndarray. The whole point
>> was that the libndarray is independent of the interface, i.e. the
>> CPython or the IronPython interface, and possibly other (Jython)
>> in the future.
>> Looking at different building/packaging solutions for libndarray,
>> autoconf make things very easy, it's a well established pattern,
>> I'm sure David C. will agree.
>>
>>
>>
>> I know he has expressed reservations about it on non-posix platforms
>> and some large projects have moved away from it. I'm not saying it
>> isn't the best short term solution so you folks can get on with the
>> job, but it may be that long term we will want to look elsewhere.
>
> Such as perhaps waf for building libndarray, which seems like it will be
> much easier to make work nicely with Bento etc. than autoconf (again,
> speaking long-term).
>
> Also, it'd be good to avoid a seperate build system for Windows (problem
> of keeping changes sync-ed with Visual Studio projects etc. etc.).

Is support for visual studio projects a requirement for the refactoring 
? If so, the only alternative to keeping changes in sync is to be able 
to generate the project files from a description, which is not so easy 
(and quite time-consuming). I know of at least two tools doing that: 
CMake and gyp (the build system used for Chrome).

cheers,

David


Re: [Numpy-discussion] NEP for faster ufuncs

2010-12-21 Thread David
Hi Mark,

On 12/22/2010 09:53 AM, Mark Wiebe wrote:
> Hello NumPy-ers,
>
> After some performance analysis, I've designed and implemented a new
> iterator designed to speed up ufuncs and allow for easier
> multi-dimensional iteration.  The new code is fairly large, but works
> quite well already.  If some people could read the NEP and give some
> feedback, that would be great!  Here's a link:
>
> https://github.com/m-paradox/numpy/blob/mw_neps/doc/neps/new-iterator-ufunc.rst

This looks pretty cool. I hope to be able to take a look at it during 
the christmas holidays.

I cannot comment in details yet, but it seems to address several issues 
I encountered myself while implementing the neighborhood iterator (which 
I will try to update to use the new one).

One question: which CPU/platform did you test it on ?

cheers,

David


Re: [Numpy-discussion] numpy for jython

2010-12-23 Thread David
On 12/24/2010 07:27 AM, John Salvatier wrote:
> I'm curious whether this kind of thing is expected to be relatively easy
> after the numpy refactor.

It would help, but it won't make it easy. I asked Enthought developers 
this exact question some time ago, and Java would be more complicated 
because there is no equivalent of C++/CLI in the Java world. 
Don't take my word for it, though, because I know very little about ways 
to wrap native code on the JVM (or the CLR, for that matter).

I think more than one person is interested, though (I for one am more 
interested in the JVM than the CLR),

cheers,

David


Re: [Numpy-discussion] Numpy 2.0 schedule

2011-01-25 Thread David
On 01/26/2011 01:42 AM, Charles R Harris wrote:
> Hi All,
>
> Just thought it was time to start discussing a release schedule for
> numpy 2.0 so we have something to aim at. I'm thinking sometime in the
> period April-June might be appropriate. There is a lot coming with the
> next release: the Enthought's numpy refactoring, Mark's float16 and
> iterator work, and support for IronPython. How do things look to the
> folks involved in those projects?

One thing I was wondering about numpy 2.0: what's the story for the 
C API compared to 1.x for extensions? Is it fundamentally different, so 
that extensions will need to be rewritten? I especially wonder about 
scipy and Cython's codegen backend,

cheers,

David


Re: [Numpy-discussion] Please keep Numpy master working on Py3

2011-02-01 Thread David
On 02/02/2011 08:58 AM, Pauli Virtanen wrote:
> Hi,
>
> The master branch did not build today on Python 3. Please make sure that
> your code works correctly also on Python 3, before pushing it.
>
>  ***
>
> I mostly fixed the stuff for now, mostly just the usual bytes vs unicode.
>
> On Python 3, the tests however give two non-obvious failures -- I'm not
> sure if it's just a matter of the raised exception having a different
> type on Py2 vs Py3, or if it reflects something going wrong. Mark, do you
> have ideas?

Following the merge of Mark's code, I am slightly concerned about the 
dependency between ufunc and multiarray (i.e. a ufunc header being 
included in multiarray).

In the meantime, I put the relevant header in numpy/core/src/private, to 
make the dependency clearer.

cheers,

David


Re: [Numpy-discussion] Please keep Numpy master working on Py3

2011-02-01 Thread David
On 02/02/2011 01:53 PM, David wrote:
> On 02/02/2011 08:58 AM, Pauli Virtanen wrote:
>> Hi,
>>
>> The master branch did not build today on Python 3. Please make sure that
>> your code works correctly also on Python 3, before pushing it.
>>
>>   ***
>>
>> I mostly fixed the stuff for now, mostly just the usual bytes vs unicode.
>>
>> On Python 3, the tests however give two non-obvious failures -- I'm not
>> sure if it's just a matter of the raised exception having a different
>> type on Py2 vs Py3, or if it reflects something going wrong. Mark, do you
>> have ideas?
>
> Following the merge of Mark's code, I am slightly concerned about the
> dependency between ufunc and multiarray (i.e. a ufunc header being
> included in multiarray).
>
> In the meantime, I put the relevant header in numpy/core/src/private, to
> make the dependency clearer.

Following that argument, there are other unwanted dependencies between 
multiarray and ufunc, causing circular dependencies. I don't think they 
were there before, and it makes building numpy with a dependency-based 
tool like scons or waf extremely difficult.

cheers,

David


Re: [Numpy-discussion] Please keep Numpy master working on Py3

2011-02-01 Thread David
On 02/02/2011 02:57 PM, Mark Wiebe wrote:
> On Tue, Feb 1, 2011 at 9:49 PM, David  <mailto:da...@silveregg.co.jp>> wrote:
>
> 
>
>
>  > In the meantime, I put the relevant header in
> numpy/core/src/private, to
>  > make the dependency clearer.
>
> Following that argument, there are other unwanted dependencies between
> multiarray and ufunc, causing circular dependencies. I don't think they
> were there before, and it makes building numpy with a dependency-based
> tool like scons or waf extremely difficult.
>
>
> This particular separation of two components felt somewhat artificial to
> me.  For the iterator buffering, as an example, I initially thought it
> would be good to use the same default as the ufuncs do.  Instead I ended
> up just using a hard-coded default.

Not sure I understand the exact argument, but if it is just a matter of 
getting default values, it is easy to share them using a common header.

> I think the error handling policy
> in the ufuncs could also be useful for computations in the core.

At the moment, ufunc and multiarray are separate extensions. If each 
depends on the other API-wise, it is an issue. If there are some 
commonalities, they can be put in a separate extension. They are 
already way too big as they are (for historical reasons); we should 
really fight this for maintainability.

I realize the code organization and the build stuff are a bit of a mess 
at the moment, so if you have questions on that aspect, feel free to ask 
for clarification or help,

cheers,

David


Re: [Numpy-discussion] core library structure

2011-02-03 Thread David
On 02/04/2011 06:50 AM, Travis Oliphant wrote:
> I like the thoughts on core-architecture. These are all things that we
> were not able to do as part of the NumPy for .NET discussions, but with
> the right interested parties could be acted upon.

I will be at PyCon this year from February 11th to 13th, all three 
conference days. So if anyone else from the team can make it, we can 
discuss this further face to face.

As for the changes being discussed, I think it would be beneficial to 
have a strategy on how to deal with new features and the release 
roadmap. I am a bit worried about feature creep and a too-ambitious 
numpy 2.0 release. Can we prepare a 2.0 with the refactoring as a good 
basis for ongoing development? How does the .net "fork" fit into this 
picture, etc.?

Like everyone, I am excited by those discussions about new feature, 
design, but I would suggest that we get a basic agreement on the main 
goals of numpy 2.0, what can be delayed, what cannot.

cheers,

David


Re: [Numpy-discussion] moving C++ binary data to numpy

2011-02-07 Thread David
On 02/08/2011 03:52 AM, Ilya Shlyakhter wrote:
> Is there a portable way to save vector of C++ structs to a binary
> file

C++ structs are not portable, so that sounds difficult. In practice, you 
have compiler-specific ways to enforce some alignment within a 
structure, but that sounds rather nightmarish to support.

Why can't you use a library with a portable file format (e.g. HDF5), or 
something like Google protocol buffers/Thrift/etc.? Those are designed 
to solve exactly this problem.
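A minimal sketch of the portable-layout alternative, in Python (the record layout is hypothetical): spell out byte order and packing explicitly instead of dumping raw structs, then read the bytes back with a matching numpy dtype:

```python
import struct
import numpy as np

# Hypothetical record: {int32 id; float64 value;} written little-endian
# with no padding (the '<' prefix), so any reader can rely on the layout.
rec = struct.Struct("<id")
payload = b"".join(rec.pack(i, i * 0.5) for i in range(3))

# Read it back with an explicit dtype mirroring the same 12-byte layout.
arr = np.frombuffer(payload, dtype=np.dtype([("id", "<i4"), ("value", "<f8")]))
print(arr["value"])  # [0.  0.5 1. ]
```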

cheers,

David


[Numpy-discussion] FYI: experimental waf support for numpy

2011-02-09 Thread David
Hi there,

Following recent release of waf 1.6 and its adoption by the samba 
project, as well as my own work on integrating waf and bento, I have 
spent some time to build numpy with it. Although this is experimental, 
it should be possible for people to play with it:

https://github.com/cournape/numpy/tree/waf_build

You need a recent waf checkout (trunk), and you build numpy as follows:

$waf_checkout/waf-light configure
$waf_checkout/waf-light build

Waf is a Python build system, a bit like SCons, but smaller, cleaner, 
much faster, and somewhat easier to integrate as a library.

The added code in numpy is ~ 1200 lines of code, of which almost half is 
just to get missing configuration from waf, IOW, the whole numpy build 
is described in ~ 600 lines of code, and works on linux, mac os x and 
windows (Visual Studio only for now).

Besides the maintenance advantage, the waf build has a few interesting 
features:
   - < 1 sec no-op build with dependencies tracking (header changes, 
file content change, etc...)
   - ~ 10 sec debug build on a 2 years old machine
   - very easy to customize with alternative blas/lapack libraries, or 
use alternative compilers like clang.
   - nice output and a full log written when a configuration error has 
occurred, for easier build debugging.

Note that this cannot be used to install numpy, and is of interest only 
for developers ATM - the goal is to integrate this with bento to get a 
robust build and deployment framework for numpy, scipy and beyond.

cheers,

David


[Numpy-discussion] Rationale for simple_capsule_dtor to be static but non-inline ?

2011-02-09 Thread David
Hi,

in npy3_compat.h, one function simple_capsule_dtor is defined as static 
but non-inline. AFAIK, there is no reason not to put inline (if 
supported by the compiler of course) for a static function defined in a 
header. Unless I hear someone justify it, I will change it,

cheers,

David


Re: [Numpy-discussion] FYI: experimental waf support for numpy

2011-02-10 Thread David
On 02/10/2011 04:45 PM, Sebastien Binet wrote:
> David,
>
> On Thu, 10 Feb 2011 14:30:37 +0900, David  wrote:
>> Hi there,
>>
>> Following recent release of waf 1.6 and its adoption by the samba
>> project, as well as my own work on integrating waf and bento, I have
>> spent some time to build numpy with it. Although this is experimental,
>> it should be possible for people to play with it:
>>
>> https://github.com/cournape/numpy/tree/waf_build
>
> great news !
>
>>
>> You need a recent waf checkout (trunk), and you build numpy as follows:
>>
>> $waf_checkout/waf-light configure
>> $waf_checkout/waf-light build
> why don't you just put the waf executable within the numpy repository ?

Because it is still experimental, and there are some issues in waf 
around tools on Windows, etc., meaning I will have to follow trunk 
pretty closely until those bugs are fixed.

This is also not intended to be merged. The goal really is to discover 
as many issues as possible before I decide to use waf exclusively as the 
build "backend" in bento instead of my own (if you are interested, there 
is already a proof of concept of building simple extensions with waf 
inside bento in bento repo: 
https://github.com/cournape/Bento/tree/master/examples/hooks/waf/waf16)

cheers,

David


Re: [Numpy-discussion] NumPy speed tests by NASA

2011-02-22 Thread David
On 02/23/2011 05:45 AM, Sturla Molden wrote:
> I came accross some NumPy performance tests by NASA. Comparisons against
> pure Python, Matlab, gfortran, Intel Fortran, Intel Fortran with MKL,
> and Java. For those that are interested, it is here:

This is mostly a test of the BLAS/LAPACK used, so it is not very useful 
IMO, except maybe to show that you can deal with non-trivial problems on 
top of Python (surprisingly, many scientists who program a fair bit are 
still unaware of the vectorization concept altogether),
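For readers unfamiliar with the term, "vectorizing" just means replacing explicit Python loops over elements with whole-array expressions that run in compiled code:

```python
import numpy as np

x = np.arange(5, dtype=float)

# Element by element in Python (slow for large arrays):
y_loop = np.array([xi ** 2 + 1.0 for xi in x])

# Vectorized: one expression, evaluated in compiled code over all elements.
y_vec = x ** 2 + 1.0

print(np.allclose(y_loop, y_vec))  # True
```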

cheers,

David


Re: [Numpy-discussion] [python] Re: Request for advice: project to get NumPy working in IronPython

2008-04-26 Thread David
Michael Foord  voidspace.org.uk> writes:
> > are you really going to run matrix inversion on the CLR in a browser?)

Yes!






Re: [Numpy-discussion] Defining custom types

2006-11-17 Thread David Douard
On Thu, Nov 16, 2006 at 01:28:25PM -0600, Jonathan Wang wrote:
> Hi all,
> 
> I've gotten to the point where Numpy recognizes the objects (represented as
> doubles), but I haven't figured out how to register ufunc loops on the
> custom type. It seems like Numpy should be able to check that the scalarkind
> variable in the numpy type descriptor is set to float and use the float
> ufuncs on the custom object. Barring that, does anyone know if the symbols
> for the ufuncs are publicly accessible (and where they are) so that I can
> register them with Numpy on the custom type?
> 
> As for sharing code, I've been working on this for a project at work. There
> is a possibility that it will be released to the Numpy community, but that's
> not clear yet.

Hi,
besides the fact that several people seem to be interested in this
piece of code, I'm pretty sure some of us could help you with it.
Maybe you should tell your employer how much your project could
benefit from being released to the numpy community (at least this
part of the project) ;-)


> 
> Thanks,
> Jonathan
> 
> On 11/16/06, Matt Knox <[EMAIL PROTECTED]> wrote:
> >
> >> On Thursday 16 November 2006 11:44, David Douard wrote:
> >> > Hi, just to ask you: how is the work going on encapsulatinsg
> >mx.DateTime
> >> > as a native numpy type?
> >> > And most important: is the code available somewhere? I am also
> >> > interested in using DateTime objects in numpy arrays. For now, I've
> >> > always used arrays of floats (using gmticks values of dates).
> >
> >> And I, as arrays of objects (well, I wrote a subclass to deal with
> >dates,
> >> where each element is a datetime object, with methods to translate to
> >floats
> >> or strings , but it's far from optimal...). I'd also be quite interested
> >in
> >> checking what has been done.
> >
> >I'm also very interested in the results of this. I need to do something
> >very similar and am currently relying on an ugly hack to achieve the 
> >desired
> >result.
> >
> >- Matt Knox
> >


-- 
David Douard LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations
Développement logiciel sur mesure :  http://www.logilab.fr/services
Informatique scientifique :  http://www.logilab.fr/science




[Numpy-discussion] How to deal with NAN and co in C extension to numpy/scipy ?

2006-11-22 Thread David Cournapeau
Hi,

    I am about to release some code to compute LPC coefficients 
(autoregressive modeling, using the Levinson-Durbin algorithm for 
autocorrelation matrix inversion), and I was wondering how to handle 
cases of division by 0, or values really near 0.

    I understand this is a complex issue, and I admittedly have no 
knowledge about it beyond some generalities; there are versions of 
Levinson-Durbin which handle bad condition numbers (for people not 
familiar with LPC coding, Levinson-Durbin is a recursive algorithm to 
invert a symmetric Toeplitz matrix; the complexity becomes O(N^2) 
instead of O(N^3)), but they are overkill for LPC coding, I think.

The options I see are:

- I don't care :)
- Detecting values near zero, finish the computation anyway in C, 
and returns an error code to python, which would emit a warning.
- Detecting values near zero, returns an error code which would 
results in a python exception ?

    In the latter case, is it OK to return arrays with those values from 
the C function to Python? In Matlab, for bad cases, it just returns NaN; 
is this appropriate? How should I do it?
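One possible shape for the second option, sketched in Python (the function name and error-code convention are invented for illustration; a real binding would get the error code from the C routine):

```python
import warnings
import numpy as np

def levinson_wrapper(r):
    """Hypothetical wrapper around a C Levinson-Durbin routine that
    returns an error code on near-singular input."""
    # Stand-in for the C call: flag inputs whose leading autocorrelation
    # term is too close to zero to invert safely.
    err = 1 if abs(r[0]) < 1e-12 else 0
    coeffs = np.zeros_like(r)
    if err:
        # Finish anyway, warn the caller, and mark the result as invalid
        # (Matlab-style NaN propagation rather than an exception).
        warnings.warn("ill-conditioned input; LPC coefficients set to NaN")
        coeffs[:] = np.nan
    return coeffs
```

Raising an exception instead (the third option) would just replace the `warnings.warn` call with a `raise`.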

cheers,

David


[Numpy-discussion] Numpy and Python 2.2 on RHEL3

2006-12-05 Thread David Bogen
All:

Is it possible to build Numpy using Python 2.2?  I haven't been able to
find anything that explicitly lists the versions of Python with which
Numpy functions so I've been working under the assumption that the two
bits will mesh together somehow.

When I try to build Numpy 1.0.1 on RedHat Enterprise Linux 3 using
Python 2.2.3, I get the following error:

$ /usr/bin/python2.2 setup.py build
Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 89, in ?
setup_package()
  File "setup.py", line 59, in setup_package
from numpy.distutils.core import setup
  File "numpy/distutils/__init__.py", line 5, in ?
import ccompiler
  File "numpy/distutils/ccompiler.py", line 11, in ?
import log
  File "numpy/distutils/log.py", line 4, in ?
from distutils.log import *
ImportError: No module named log

Through extensive trial and error I've been able to hack the distutils
files enough to make that error go away, but then I start getting an
error describing an invalid syntax with the directive "yield os.path"
which seems to be a deeper, more complex error to fix.

Am I attempting the impossible here or am I just doing something
fundamentally and obviously wrong?

David

-- 
David Bogen   :: (608) 263-0168
Unix SysAdmin :: IceCube Project
[EMAIL PROTECTED]




Re: [Numpy-discussion] Numpy and Python 2.2 on RHEL3

2006-12-05 Thread David Bogen
Bill Spotz wrote:
> 
> you might try
> 
>  from __future__ import generators
> 
Some research did turn up that alternative, but then I started getting
this error:

$ /usr/bin/python2.2 setup.py build
Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 90, in ?
setup_package()
  File "setup.py", line 60, in setup_package
from numpy.distutils.core import setup
  File "numpy/distutils/__init__.py", line 5, in ?
import ccompiler
  File "numpy/distutils/ccompiler.py", line 12, in ?
from exec_command import exec_command
  File "numpy/distutils/exec_command.py", line 56, in ?
from numpy.distutils.misc_util import is_sequence
  File "numpy/distutils/misc_util.py", line 12, in ?
from sets import Set as set
ImportError: No module named sets

Given the number of walls I was hitting, it just seemed that I was
traveling down the wrong path.

David

-- 
David Bogen   :: (608) 263-0168
Unix SysAdmin :: IceCube Project
[EMAIL PROTECTED]




[Numpy-discussion] Resizing without allocating additional memory

2006-12-06 Thread David Huard

Hi,

I have Fortran subroutines wrapped with f2py that take arrays as arguments,
and I often need to use resize(a, N) to pass an array of copies of an
element. The resize call, however, is becoming the speed bottleneck, so my
question is:
Is it possible to create a (1xN) array from a scalar without allocating
additional memory for the array, i.e. just return a new "view" of the array
where all elements point to the same scalar?

Thanks,

David


Re: [Numpy-discussion] Resizing without allocating additional memory

2006-12-06 Thread David Huard

Thanks Travis,

I guess we'll have to tweak the Fortran subroutines. It would have been neat
though.

David

Answer: Since g+=1 adds one to all N elements of g, the buffer a gets
incremented N times.
So
a = array(i)
g = ndarray(shape=(1,N), dtype=int, buffer=a, strides=(0,0))
g+=M

returns i + M*N
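The zero-stride construction discussed in this thread can also be written with numpy's stride tricks; a self-contained sketch (with the aliasing caveat from the answer above: every element shares one memory location, which is why in-place updates accumulate):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.array(5)
# N "copies" of the scalar without allocating N elements: every entry of
# g points at a's single memory location (strides of 0 bytes).
g = as_strided(a, shape=(1, 10), strides=(0, 0))
print(g)            # [[5 5 5 5 5 5 5 5 5 5]]
print(g.strides)    # (0, 0)
```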



2006/12/6, Travis Oliphant <[EMAIL PROTECTED]>:


David Huard wrote:

> Hi,
>
> I have fortran subroutines wrapped with f2py that take arrays as
> arguments, and I often need to use resize(a, N) to pass an array of
> copies of an element. The resize call , however, is becoming the speed
> bottleneck, so my question is:
> Is it possible to create an (1xN) array from a scalar without
> allocating additional memory for the array, ie just return a new
> "view" of the array where all elements point to the same scalar.
>
I don't think this would be possible in Fortran because Fortran does not
provide a facility for using arbitrary striding (maybe later versions of
Fortran using pointers do, though).

If you can use arbitrary striding in your code, then you can construct
such a view using appropriate strides (i.e.  a stride of 0).  You can do
this with the ndarray constructor:


a = array(5)
g = ndarray(shape=(1,10), dtype=int, buffer=a, strides=(0,0))

But, notice you will get interesting results using

g += 1

Explain why the result of this is an array of 15 (Hint:  look at the
value of a).

-Travis



[Numpy-discussion] lapack_lite dgesv

2006-12-11 Thread R. David
Hello,

I am trying to use the lapack_lite dgesv routine.

The following sample code :

from numpy import *
[]
a=zeros((nbrows,nbcols),float,order='C')
[]
ipiv=zeros((DIM),int,order='C')
[]
linalg.lapack_lite.dgesv(DIM,1,a,DIM,asarray(ipiv),b,DIM,info)

leads to the following error message:
lapack_lite.LapackError: Parameter ipiv is not of type PyArray_INT in 
lapack_lite.dgesv

I don't understand the type problem for ipiv!
Indeed, the type of 'a' is OK and ipiv is created the same way as a, 
but something goes wrong.
Do you have a clue for this?

Regards,
Romaric
-- 
------
   R. David - [EMAIL PROTECTED]
   Tel. : 03 90 24 45 48  (Fax 45 47)
--


Re: [Numpy-discussion] lapack_lite dgesv

2006-12-11 Thread R. David
Hello Tim,
> 
> The problem is probably your definition of ipiv. "(DIM)" is just a 
> parenthesized scalar, what you probably want is "(DIM,)", which is a 
> one-tuple. Personally, I'd recommend using list notation ("[nbrows, 
> nbcols]", "[DIM]") rather than tuple notation since it's both easier to 
> read and and avoids this type of mistake.

I tried both notations and neither works.

In the meantime, I tried extending the ipiv array to a two-dimensional
one (as if I had more than one right-hand side, for instance), but I still
get the error message.

Romaric


Re: [Numpy-discussion] lapack_lite dgesv

2006-12-11 Thread R. David
Hello,


> Try replacing 'int' with intc (or numpy.intc if you are not using 
> 'import *'). The following 'works' for me in the sense that it doesn't 
> throw any errors (although I imagine the results are nonsense):
Thanks, it works now !!

Sorry for not including the whole code; I just didn't want to annoy the
whole list with bunches of code :-)
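The underlying issue: lapack_lite checks for arrays whose dtype is the platform C int, which is what `numpy.intc` maps to, while plain `int` maps to C long, a different size on most 64-bit Unix systems. A quick check:

```python
import numpy as np

ipiv_long = np.zeros(4, dtype=int)      # default integer: C long on most 64-bit Unix
ipiv_cint = np.zeros(4, dtype=np.intc)  # C int: what PyArray_INT checks for

print(ipiv_long.dtype, ipiv_long.itemsize)
print(ipiv_cint.dtype, ipiv_cint.itemsize)
```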

Regards,
Romaric


Re: [Numpy-discussion] a==b for numpy arrays

2006-12-11 Thread David Huard

Hi Daniel,

Just out of curiosity, what's wrong with
if all(a==b):
  ...

?


Cheers,

David

2006/12/11, Abel Daniel <[EMAIL PROTECTED]>:


>
Hi!

My unittests got broken because 'a==b' for numpy arrays returns an
array instead of returning True or False:

>>> import numpy
>>> a = numpy.array([1, 2])
>>> b = numpy.array([1, 4])
>>> a==b
array([True, False], dtype=bool)

This means, for example:
>>> if a==b:
...   print 'equal'
...
Traceback (most recent call last):
  File "", line 1, in ?
ValueError: The truth value of an array with more than one element is
ambiguous.
Use a.any() or a.all()
>>>


Now, I think that having a way of getting an element-wise comparison
(i.e. getting an array of bools) is great. _But_ why make that the
result of a '==' comparison? Is there any actual code that does, for
example
>>> result_array = a==b
or any variant thereof?

Thanks in advance,
Daniel




Re: [Numpy-discussion] a==b for numpy arrays

2006-12-11 Thread David Goldsmith
Abel Daniel wrote:
> to what 'a+b' means with a and b being numpy arrays. But 'A=B' means something
> completely different than 'a==b'.
>
>   
I disagree: A=B "on the blackboard" does mean that every element in A 
equals its positionally-corresponding element in B, and a==b in numpy 
will only be wholly true if a=b in the blackboard sense.  As others in 
this thread have said, what needs adjusting to is what Robert calls 
"rich comparisons": a comparison of arrays/matrices returns a 
boolean-valued but otherwise similar object, whose elements indicate 
whether the comparison is true or false at each position.  (Many 
off-the-shelf numerical programs have worked this way for years, so 
the adjustment is only needed if one hasn't made it already.)  To 
determine whether the comparison holds for every element, all one has 
to do is use the 'all' method - not much overhead, and in my 
experience this idiom is now ubiquitous throughout the numerical 
software community (not to mention that rich comparison is _much_ more 
flexible, and in that, powerful).  Oh, and another convenience method 
of which you should be aware is 'any', which returns true if any of 
the element-wise comparisons is true.
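A minimal sketch of the 'all'/'any' idiom described above (the array values here are only illustrative):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 2, 4])

cmp = a == b          # element-wise: array([ True,  True, False])
print(cmp.all())      # wholly equal? -> False
print(cmp.any())      # equal anywhere? -> True

# Rich comparisons also compose naturally with indexing:
print(a[a != b])      # the elements where they differ -> [3]
```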

DG
> I tried to dig up something about this "'a==b' return an array" decision from
> the discussion surrounding PEP 207 (on comp.lang.python or on python-dev) but I
> got lost in that thread.
>
>   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Definition of correlation, correlate and so on ?

2006-12-11 Thread David Cournapeau
Hi,

I am polishing some code to compute autocorrelation using fft, and 
when testing the code against numpy.correlate, I realised that I am not 
sure about the definition... There are various functions related to 
correlation as far as numpy/scipy is concerned:
   
numpy.correlate
numpy.corrcoef
scipy.signal.correlate

For me, the correlation between two sequences X and Y at lag t is 
the sum(X[i] * Y*[i+lag]) where Y* is the complex conjugate of Y. 
numpy.correlate does not use the conjugate, and neither does 
scipy.signal.correlate; I don't understand numpy.corrcoef either. I've 
never seen complex correlation used without the conjugate, so I am 
curious why this definition was used: it is incompatible with 
correlation as a scalar product, for example.
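A direct transcription of that definition, as a sketch (the helper name xcorr_at and the choice to truncate at the sequence ends are my own assumptions):

```python
import numpy as np

def xcorr_at(x, y, lag):
    # Correlation at a single non-negative lag:
    # sum(x[i] * conj(y[i + lag])), summed where both indices are valid.
    n = min(len(x), len(y) - lag)
    return np.sum(x[:n] * np.conj(y[lag:lag + n]))

x = np.array([1 + 2j, 2 - 1j, 0.0 + 0.5j])
y = np.array([0 + 1j, 1 - 1j, 2 + 0j])

# At lag 0 this reduces to the usual complex inner product
# sum(x * conj(y)), i.e. np.vdot(y, x) -- exactly what makes this
# definition compatible with correlation as a scalar product.
print(np.allclose(xcorr_at(x, y, 0), np.vdot(y, x)))  # True
```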

Could someone give the definition used by those function ?

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Definition of correlation, correlate and so on ?

2006-12-12 Thread David Cournapeau
Charles R Harris wrote:
>
>
> On 12/12/06, *David Cournapeau* <[EMAIL PROTECTED] 
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> Hi,
>
> I am polishing some code to compute autocorrelation using fft, and
> when testing the code against numpy.correlate, I realised that I
> am not
> sure about the definition... There are various functions related to
> correlation as far as numpy/scipy is concerned:
>
> numpy.correlate
> numpy.corrcoef
> scipy.signal.correlate
>
> For me, the correlation between two sequences X and Y at lag t is
> the sum(X[i] * Y*[i+lag]) where Y* is the complex conjugate of Y.
> numpy.correlate does not use the conjugate, scipy.signal.correlate as
> well, and I don't understand numpy.corrcoef. I've never seen complex
> correlation used without the conjugate, so I was curious why this
>
>
> Neither have I, it is one of those oddities that may have been 
> inherited from Numeric. I wouldn't mind seeing it changed but it is 
> probably a bit late for that.
Well, I would myself call this a bug, not a feature, unless at least the 
doc specifies the behaviour; the point of my question was to get the 
opinion of others on this point. Anyway, a function that implements the 
'real' cross correlation, as defined in signal processing and 
statistics, is a must-have IMHO,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Definition of correlation, correlate and so on ?

2006-12-12 Thread David Huard


- if not, is this behavior so unexpected as to be considered
  a bug?
- are many existing applications depending on it?

The worst case is:
it is a bug, but many existing users depend on the current behavior.
I am not taking a position, but that seems the current view on this list.
I hope that *if* that is the assessment, then a transition
path will be plotted.  For example, a keyword could be
added, with a proper default, and a warning emitted when it
is not set.



+1 for a change. I'm not using the current implementation. Since it was
undocumented, I preferred coding my own.

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Definition of correlation, correlate and so on ?

2006-12-12 Thread David Cournapeau
Tim Hochberg wrote:
>
> So rather than "fixing" the function, I would first propose introducing 
> a function with a more descriptive name and docstring, for example you 
> could steal the name 'xcorr' from matlab. Then if in fact the behavior 
> of correlate is deemed to be an error, deprecate it and start issuing a 
> warning in the next point release, then remove it in the next major release.
That was my idea too: specify in the docstring that this does not 
compute the correlation, and put a new function xcorr (or whatever 
name). The good news being this function is already done for rank up to 
2, with basic tests... :)
>
> Even better, IMO,  would be if someone who cares about this stuff pulls 
> together all the related signal processing stuff and moves them to a 
> submodule so we could actually find what signal processing primitives 
> are available. At the same time, more informative docstrings would be
> great.
>
Do you mean signal functions in numpy or scipy ? For scipy, this is 
already done (module scipy.signal),

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Histograms of extremely large data sets

2006-12-14 Thread David Huard

Hi,

I spent some time a while ago on an histogram function for numpy. It uses
digitize and bincount instead of sorting the data. If I remember right, it
was significantly faster than numpy's histogram, but I don't know how it
will behave with very large data sets.
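For reference, the digitize-plus-bincount idea can be sketched as follows (this is my own minimal version of the approach, not the attached file):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(10_000)              # samples in [0, 1)
edges = np.linspace(0.0, 1.0, 11)   # 10 equal-width bins

# digitize maps each sample to a bin index: 0 means below edges[0],
# len(edges) means at or above edges[-1], so slicing off those two
# outlier slots leaves the in-range counts.
idx = np.digitize(a, edges)
counts = np.bincount(idx, minlength=len(edges) + 1)[1:len(edges)]

print(counts.sum() == a.size)                                  # True
print(np.array_equal(counts, np.histogram(a, bins=edges)[0]))  # True
```

No sorting of the data is involved; digitize is a per-sample binary search and bincount is a single counting sweep.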

I attached the file if you want to take a look; or if you send me the 
benchmark, I'll add it and report the results.

Cheers,

David

2006/12/14, eric jones <[EMAIL PROTECTED]>:




Rick White wrote:
> Just so we don't get too smug about the speed, if I do this in IDL on
> the same machine it is 10 times faster (0.28 seconds instead of 4
> seconds).  I'm sure the IDL version uses the much faster approach of
> just sweeping through the array once, incrementing counts in the
> appropriate bins.  It only handles equal-sized bins, so it is not as
> general as the numpy version -- but equal-sized bins is a very common
> case.  I'd still like to see a C version of histogram (which I guess
> would need to be a ufunc) go into the core numpy.
>
Yes, this gets rid of the search, and indices can just be calculated
from offsets.  I've attached a modified weaved histogram that takes this
approach.  Running the snippet below on my machine takes .118 sec for
the evenly binned weave algorithm and 0.385 sec for Rick's algorithm on
5 million elements.  That is close to 4x  faster (but not 10x...), so
there is indeed some speed to be gained for the common special case.  I
don't know if the code I wrote has a 2x gain left in it, but I've spent
zero time optimizing it.  I'd bet it can be improved substantially.

eric

### test_weave_even_histogram.py

from numpy import arange, product, sum, zeros, uint8
from numpy.random import randint

import weave_even_histogram

import time

shape = 1000,1000,5
size = product(shape)
data = randint(0,256,size).astype(uint8)
bins = arange(256+1)

print 'type:', data.dtype
print 'millions of elements:', size/1e6

bin_start = 0
bin_size = 1
bin_count = 256
t1 = time.clock()
res = weave_even_histogram.histogram(data, bin_start, bin_size, bin_count)
t2 = time.clock()
print 'sec (evenly spaced):', t2-t1, sum(res)
print res
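The even-bin shortcut itself can be sketched in pure NumPy (the signature mirrors the weave version's call above, which is an assumption on my part; clamping outliers into the end bins is also my own choice):

```python
import numpy as np

def even_histogram(data, bin_start, bin_size, bin_count):
    # With equal-width bins the bin index is plain arithmetic --
    # no per-sample search through an edge array is needed.
    idx = ((data.ravel() - bin_start) / bin_size).astype(np.intp)
    idx = np.clip(idx, 0, bin_count - 1)  # clamp out-of-range samples
    return np.bincount(idx, minlength=bin_count)

data = np.random.randint(0, 256, 10_000).astype(np.uint8)
counts = even_histogram(data, 0, 1, 256)
print(np.array_equal(counts, np.bincount(data, minlength=256)))  # True
```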


>   Rick
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
>



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion




# License: Scipy compatible
# Author: David Huard, 2006
from numpy import *
def histogram(a, bins=10, range=None, normed=False, weights=None, axis=None):
"""histogram(a, bins=10, range=None, normed=False, weights=None, axis=None) 
   -> H, dict

Return the distribution of sample.

Parameters
----------
a:   Array sample.
bins:Number of bins, or 
 an array of bin edges, in which case the range is not used.
range:   Lower and upper bin edges, default: [min, max].
normed:  Boolean, if False, return the number of samples in each bin,
 if True, return the density.  
weights: Sample weights. The weights are normed only if normed is True. 
 Should weights.sum() not equal len(a), the total bin count will 
 not be equal to the number of samples.
axis:Specifies the dimension along which the histogram is computed. 
 Defaults to None, which aggregates the entire sample array. 

Output
------
H:The number of samples in each bin. 
  If normed is True, H is a frequency distribution.
dict{
'edges':  The bin edges, including the rightmost edge.
'upper':  Upper outliers.
'lower':  Lower outliers.
'bincenters': Center of bins.
}

Examples

x = random.rand(100,10)
H, Dict = histogram(x, bins=10, range=[0,1], normed=True)
H2, Dict = histogram(x, bins=10, range=[0,1], normed=True, axis=0)

See also: histogramnd
"""

a = asarray(a)
if axis is None:
a = atleast_1d(a.ravel())
axis = 0 

# Bin edges.   
if not iterable(bins):
if range is None:
range = (a.min(), a.max())
mn, mx = [mi+0.0 for mi in range]
if mn == mx:
mn -= 0.5
mx += 0.5
edges = linspace(mn, mx, bins+1, endpoint=True)
else:
edges = asarray(bins, float)

dedges = diff(edges)
decimal = int(-log10(dedges.min())+6)
bincenters = edges[:-1] + dedges/2.

# apply_along_axis accepts only one array input, but we need to pass the 
# weights along

[Numpy-discussion] slow numpy.clip ?

2006-12-17 Thread David Cournapeau
Hi,

When trying to speed up some matplotlib routines with the matplotlib 
dev team, I noticed that numpy.clip is pretty slow: clip(data, m, M) is 
slower than a direct numpy implementation (that is, data[data<m] = m; 
data[data>M] = M; return data.copy()). My understanding is that the code 
does the same thing, right ?

Below, a small script which shows the difference (twice slower for a 
8000x256 array on my workstation):

import numpy as N

#==========================================================
# To benchmark imshow alone
#==========================================================
def generate_data_2d(fr, nwin, hop, len):
    nframes = int(1.0 * fr / hop * len)
    return N.random.randn(nframes, nwin)

def bench_clip():
    m = -1.
    M = 1.
    # 2 minutes (120 sec) of sound @ 8 kHz, 256-sample frames, 50 % overlap
    data = generate_data_2d(8000, 256, 128, 120)

    def clip1_bench(data, niter):
        for i in range(niter):
            blop = N.clip(data, m, M)

    def clip2_bench(data, niter):
        for i in range(niter):
            data[data<m] = m
            data[data>M] = M
            blop = data.copy()
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] slow numpy.clip ?

2006-12-18 Thread David Cournapeau
Stefan van der Walt wrote:
> Hi David
>
> The benchmark below isn't quite correct.  In clip2_bench the data is
> effectively only clipped once.  I attach a slightly modified version,
> for which the benchmark results look like this:
Yes, I of course mistyped the < and the copy. But the function is still 
moderately faster on my workstation:

  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
       1    0.003    0.003    3.944    3.944  slowclip.py:10(bench_clip)
       1    0.011    0.011    2.001    2.001  slowclip.py:16(clip1_bench)
      10    1.990    0.199    1.990    0.199  /home/david/local/lib/python2.4/site-packages/numpy/core/fromnumeric.py:372(clip)
       1    1.682    1.682    1.682    1.682  slowclip.py:19(clip2_bench)
       1    0.258    0.258    0.258    0.258  slowclip.py:6(generate_data_2d)
       0    0.000             0.000           profile:0(profiler)

I agree this is not much of a difference, though. The question is 
then, in the context of matplotlib, is there really a need to copy ? 
Because if I do not copy the array before clipping, then the function is 
really faster (for those wondering, this is one bottleneck when calling 
matplotlib.imshow, used in specgram and so on),
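For what it's worth, clip accepts an out argument, so the copy can be made explicit or skipped entirely when the input may be overwritten (a small sketch against current NumPy):

```python
import numpy as np

data = np.random.randn(1000, 256)

clipped = np.clip(data, -1.0, 1.0)   # allocates a fresh result array
np.clip(data, -1.0, 1.0, out=data)   # clips in place, no extra copy

print(np.array_equal(clipped, data))  # True
```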

cheers,

David

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] slow numpy.clip ?

2006-12-18 Thread David Cournapeau
Stefan van der Walt wrote:
> On Mon, Dec 18, 2006 at 05:45:09PM +0900, David Cournapeau wrote:
>> Yes, I of course mistyped the < and the copy. But the function is still 
>> moderately faster on my workstation:
>>
>>   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
>>        1    0.003    0.003    3.944    3.944  slowclip.py:10(bench_clip)
>>        1    0.011    0.011    2.001    2.001  slowclip.py:16(clip1_bench)
>>       10    1.990    0.199    1.990    0.199  /home/david/local/lib/python2.4/site-packages/numpy/core/fromnumeric.py:372(clip)
>>        1    1.682    1.682    1.682    1.682  slowclip.py:19(clip2_bench)
>>        1    0.258    0.258    0.258    0.258  slowclip.py:6(generate_data_2d)
>>        0    0.000             0.000           profile:0(profiler)
>
> Did you try swapping the order of execution (i.e. clip1 second)?
Yes, I tried different orders, etc... and it showed the same pattern. 
The thing is, this kind of thing is highly CPU dependent in my 
experience; I don't have the time right now to update numpy/scipy on my 
laptop, but it happens that profiles results are quite different between 
my workstation (P4 xeon) and my laptop (pentium m).

anyway, contrary to what I thought first, the real problem is the copy, 
so this is where I should investigate in matplotlib case,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

