On Mar 20, 2014, at 10:39 PM, Robert Kern wrote:
> Sure, but that discussion will (and should) happen on python-ideas.
> When Nathaniel says that "we have been asked" to answer this very
> specific question, he means that literally.
Ah, now I understand.
Thanks!
A
On Mar 20, 2014, at 3:02 PM, Nathaniel Smith wrote:
> - And anyway, my impression is that python-dev will give these other
> possible uses ~zero weight anyway -- if they thought random DSL
> operators were important for their own sake, they would have added @
> long ago :-).
Unlike what you all se
On Mar 20, 2014, at 10:07 AM, Robert Kern wrote:
> I think the operator-overload-as-DSL use cases actually argue somewhat
> for right-associativity. ... Right-associativity adds some diversity
> into the ecosystem and opens up some design space.
You say that like it's a good thing.
My argument is
On Mar 15, 2014, at 4:41 AM, Nathaniel Smith wrote:
> OPTION 1 FOR @: ... "same-left"
> OPTION 2 FOR @: ... "weak-right"
> OPTION 3 FOR @: ... "tight-right"
(In addition to more unusual forms, like 'grouping'.)
There's another option, which is to "refuse the temptation to guess",
and not allow
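With hindsight: PEP 465 eventually chose the "same-left" option when @ landed in Python 3.5 (left-associative, same precedence as *). A small probe class makes the grouping of a chained expression visible; Tracer is a made-up illustration, not anything from numpy:

```python
# Probe how Python groups a chain of @ operators. Python 3.5+ implements
# the "same-left" option: @ is left-associative, at the same precedence as *.
class Tracer:
    def __init__(self, name):
        self.name = name

    def __matmul__(self, other):
        # Record the grouping instead of doing any real multiplication.
        return Tracer(f"({self.name} @ {other.name})")

a, b, c = Tracer("a"), Tracer("b"), Tracer("c")
grouping = (a @ b @ c).name
print(grouping)  # ((a @ b) @ c)
```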
On Aug 11, 2013, at 10:24 PM, Benjamin Root wrote:
> The idea would be that within numpy (and we should fix SciPy as well), we
> would always import numpy._testing as testing, and not import testing.py
> ourselves.
The problem is the existing code out there which does:
import numpy as np
...
n
[Short version: It doesn't look like my proposal or any
simple alternative is tenable.]
On Aug 10, 2013, at 10:28 AM, Ralf Gommers wrote:
> It does break backwards compatibility though, because now you can do:
>
> import numpy as np
> np.testing.assert_equal(x, y)
Yes, it does.
I realize
On Aug 7, 2013, at 4:37 AM, Charles R Harris wrote:
> I haven't forgotten and intend to look at it before the next release.
Thanks!
On a related topic, last night I looked into deferring the
import for numpy.testing. This is the only other big place
where numpy's import overhead might be reduced
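With hindsight, the deferred-import idea discussed here later became easy to do cleanly via module-level __getattr__ (PEP 562, Python 3.7+). A sketch with stand-in names: "fakepkg" and its "testing" submodule are hypothetical, and the stdlib's json plays the role of the heavy submodule:

```python
import importlib
import sys
import types

# A package whose "testing" attribute is imported lazily, on first access,
# rather than eagerly at "import fakepkg" time (PEP 562 pattern, sketched
# on a synthetic module; json stands in for a heavy submodule).
pkg = types.ModuleType("fakepkg")

def _lazy_getattr(name):
    if name == "testing":
        mod = importlib.import_module("json")  # the expensive import, deferred
        setattr(pkg, "testing", mod)           # cache so __getattr__ runs once
        return mod
    raise AttributeError(name)

pkg.__getattr__ = _lazy_getattr
sys.modules["fakepkg"] = pkg

import fakepkg
print(fakepkg.testing.dumps({"a": 1}))  # the real import happens here
```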
Hi all,
I mostly develop software related to cheminformatics. There
isn't much direct overlap between the tools of that field
and what NumPy and SciPy provide, but it's increasing with
the use of scikit-learn and pandas.
I tend to write command-line tools which indirectly
import numpy. I've noticed t
On Jul 8, 2012, at 9:22 AM, Scott Sinclair wrote:
> On 6 July 2012 15:48, Andrew Dalke wrote:
>> I followed the instructions at
>> http://docs.scipy.org/doc/numpy/dev/gitwash/patching.html
>> and added Ticket #2181 (with patch) ...
>
> Those instructions need t
I followed the instructions at
http://docs.scipy.org/doc/numpy/dev/gitwash/patching.html
and added Ticket #2181 (with patch) at
http://projects.scipy.org/numpy/ticket/2181
This removes the 5 'exec' calls from polynomial/*.py and improves
the 'import numpy' time by about 25-30%. That is, on my lap
On Jul 3, 2012, at 12:46 AM, David Cournapeau wrote:
> It is indeed irrelevant to your end goal, but it does affect the
> interpretation of what import_array does, and thus of your benchmark
Indeed.
> Focusing on polynomial seems the only sensible action. Except for
> test, all the other stuff se
On Jul 3, 2012, at 12:21 AM, Nathaniel Smith wrote:
> Yes, but for a proper benchmark we need to compare this to the number
> that we would get with some other implementation... I'm assuming you
> aren't proposing we just delete the docstrings :-).
I suspect that we have a different meaning of the
On Jul 2, 2012, at 11:38 PM, Fernando Perez wrote:
> No, that's the wrong thing to test, because it effectively amounts to
> 'import numpy', since the numpy __init__ file is still executed. As
> David indicated, you must import multiarray.so by itself.
I understand that clarification. However, it
On Jul 2, 2012, at 10:34 PM, Nathaniel Smith wrote:
> I don't have any opinion on how acceptable this would be, but I also
> don't see a benchmark showing how much this would help?
The profile output was lower in that email. The relevant line is
0.038 add_newdocs (numpy.core.multiarray)
This say
On Jul 2, 2012, at 10:33 PM, David Cournapeau wrote:
> On Mon, Jul 2, 2012 at 8:17 PM, Andrew Dalke
> wrote:
>> In July of 2008 I started a thread about how "import numpy"
>> was noticeably slow for one of my customers.
...
>> I managed to get the import time
In this email I propose a few changes which I think are minor
and which don't really affect the external NumPy API but which
I think could improve the "import numpy" performance by at
least 40%. This affects me because I and my clients use a
chemistry toolkit which uses only NumPy arrays, and where
On Oct 13, 2008, at 7:21 AM, Linda Seltzer wrote:
> Is there a moderator on the list to put a stop to these kinds of
> statements?
No.
Andrew
On Oct 12, 2008, at 5:26 PM, Anne Archibald wrote:
> Python is a dynamically-typed language (unlike C), so variables do not
> have type.
Another way to think of it for C people is that all variables
have the same type, which is "reference to Python object."
It's the objects which are typed, and no
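A minimal illustration of that mental model (names never carry a type; the objects they refer to do):

```python
x = 1
print(type(x).__name__)  # int: the object 1 carries the type
x = "hello"
print(type(x).__name__)  # str: same name, new object; the name never had a type
```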
On Sep 27, 2008, at 10:23 PM, Anne Archibald wrote:
> Sadly, no such data structure exists
> in either numpy or scipy, though biopython apparently has an
> implementation of kd-trees.
It's not part of scipy but I saw there was a scikit which supported ANN:
http://scipy.org/scipy/scikits/wiki/
On Sep 19, 2008, at 10:04 PM, Christian Heimes wrote:
> Andrew Dalke wrote:
>> There are a few things that Python-the-language guarantees are
>> singleton
>> objects which can be compared correctly with "is".
> The empty tuple () and all interned strings are
On Sep 19, 2008, at 7:52 PM, Christopher Barker wrote:
> I don't know the interning rules, but I do know that you should never
> count on them, they may not be consistent between implementations, or
> even different runs.
There are a few things that Python-the-language guarantees are singleton
obj
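For reference, a sketch of the distinction being drawn here: the objects below are language-level singleton guarantees, safe to compare with "is", while interning of ints and strings is only an implementation detail:

```python
# Singletons guaranteed by Python-the-language, safe to test with `is`:
def f():
    pass

assert f() is None            # falling off the end of a function returns None
assert (1 == 1) is True
assert (1 == 2) is False
assert ... is Ellipsis

# Interning of small ints and strings is a CPython implementation detail;
# never rely on `is` for them:
a = int("1007")
b = int("1007")
print(a is b)  # may print True or False depending on the implementation
```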
On Sep 14, 2008, at 1:00 AM, Jarrod Millman wrote:
> Please test this release ASAP and let us know if there are any
> problems. If there are no show stoppers, this will become the
> 1.2.0 release.
Running on OS X 10.4, testing with 'import numpy; numpy.test()'
Ran 1726 tests in 7.544s
OK (KNOW
On Sep 15, 2008, at 5:45 AM, David Cournapeau wrote:
> It is not an error for the build, but an error at the configuration
> stage. To get some informations about the platform, we do compile some
> code snippets, and do our configuration depending on whether they fail
> or not. We could log those i
The following seems relevant in this thread.
On a Mac, OS X 10.4, I just rebuilt from SVN and got
compile options: '-Inumpy/core/src -Inumpy/core/include -I/Library/
Frameworks/Python.framework/Versions/2.5/include/python2.5 -c'
gcc: _configtest.c
_configtest.c: In function 'main':
_configtest.c
On Aug 19, 2008, at 1:48 AM, Stéfan van der Walt wrote:
> Wouldn't we want users to have access with
> the doc framework without doing anything special? And, yes, some of
> the documents are empty, but a number of them have already been
> written.
How do users know that those are present? How do
On Aug 19, 2008, at 1:06 AM, Christian Heimes wrote:
> [long posting]
>
> Oh h... what have I done ... *g*
*shrug* I write long emails. I've been told that
by several people. It's probably a bad thing.
> The ideas needs a good PEP. You are definitely up to something. You
> also
> came up wit
On Aug 18, 2008, at 11:22 PM, Christian Heimes wrote:
> Example syntax (rough idea):
>
> type(1.0)
>
> with float as from decimal import Decimal
> type(1.0)
>
When would this "with float ... " considered valid?
For example, could I define things before asking
for a redefinition?
def
On Aug 18, 2008, at 10:01 PM, Ondrej Certik wrote:
> with Andrew permission, I am starting a new thread, where our
> discussion is ontopic. :)
Though I want to point out that without specific proposals
of how the implementation might look, this thread will
not go anywhere as it will be too distan
Andrew Dalke:
> Any chance of someone reviewing my suggestions for
> making the import somewhat faster still?
>
> http://scipy.org/scipy/numpy/ticket/874
>
Travis E. Oliphant:
> In sum: I think 2, 3, 6, 7, 8, and 9 can be done immediately. 1) and
> 4) could be O.K. b
On Aug 18, 2008, at 12:00 AM, Ondrej Certik wrote:
> There is some inconsistency though, for example one can override A() +
> A(), but one cannot override 1 + 1. This could (should) be fixed
> somehow.
That will never, ever change in Python. There's no benefit
to being able to redefine int.__add_
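A quick demonstration of the asymmetry Ondrej describes: user-defined classes are open to operator overloading, while built-in types are sealed at the C level:

```python
class A:
    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        # Arbitrary custom behaviour, just to show + is overridable here.
        return A(self.v * 10 + other.v)

result = (A(1) + A(2)).v
print(result)  # 12: user-defined + can be overridden

# Built-in types are immutable; patching int.__add__ raises TypeError:
try:
    int.__add__ = lambda self, other: 0
    patched = True
except TypeError:
    patched = False
print(patched)  # False
```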
> On Aug 17, 2008, at 10:35 PM, Christian Heimes wrote:
>> Andrew Dalke wrote:
>>> Or write B \circledast C ? (Or \oast?) Try using Google to search
>>> for that character.
>>
>> >>> unicodedata.lookup('CIRCLED ASTERISK OPERATOR')
>>
On Aug 17, 2008, at 9:38 PM, Charles R Harris wrote:
> And here is a bit of unicode just so we can see how it looks for
> various folks.
>
> A = B⊛C
Or write B \circledast C ? (Or \oast?) Try using Google to search
for that character.
Andrew
Gaël Varoquaux wrote:
> Anybody care for '.*'?
That's a border-line case, and probably on the bad
idea side, because 1.*2 already means something in
normal Python. If added, there would be a difference
between
1.*2
and
1 .*2
This problem already exists. Consider
>>> 1 .__str__()
'1'
>>> 1.
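The stdlib tokenizer makes the '1.*2' versus '1 .*2' difference concrete (a quick check, not part of the proposal itself):

```python
import io
import tokenize

def toks(src):
    # Token strings for a one-line expression (drop empty NEWLINE/ENDMARKER).
    return [t.string
            for t in tokenize.generate_tokens(io.StringIO(src).readline)
            if t.string.strip()]

print(toks("1.*2"))   # ['1.', '*', '2']: '1.' is a float literal
print(toks("1 .*2"))  # ['1', '.', '*', '2']: '1' is an int, '.' an attribute dot
```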
Fernando Perez wrote:
> For something as big as this, they would
> definitely want to work off a real pep.
What might be interesting, for those who want to experiment
with this syntax, is to take my Python parser for Python
(python4ply - http://www.dalkescientific.com/Python/python4ply.html )
and
On Aug 16, 2008, at 3:02 AM, Xavier Gnata wrote:
> abs(a-b)
On Aug 15, 2008, at 11:18 PM, Travis E. Oliphant wrote:
> Where would that be, in the C-code? The reason for add_newdocs is to
> avoid writing docstrings in C-code which is a pain.
That was my thought. I could see that the code might be useful
during module development, where you don't want text cha
On Aug 15, 2008, at 11:18 PM, Travis E. Oliphant wrote:
> I've removed this loop. Are there other places in numpy.core that
> depend on numpy.lib?
That fixed the loop I identified. I removed the "import lib" in
add_newdocs.py
and things imported fine.
I then commented out the following line
On Aug 15, 2008, at 6:41 PM, Andrew Dalke wrote:
> I don't think it's enough. I don't like environmental
> variable tricks like that. My tests suggest:
>current SVN: 0.12 seconds
>my patch: 0.10 seconds
>removing some top-level imports: 0.09 seconds
I forgot to mention..
On Aug 15, 2008, at 9:00 AM, Travis E. Oliphant wrote:
> 1) Removing ctypeslib import
>
> * Can break code if somebody has been doing import numpy and then
> using
> numpy.ctypeslib
> * I'm fine with people needing to import numpy.ctypeslib to use the
> capability as long
On Aug 15, 2008, at 4:38 PM, Pauli Virtanen wrote:
> I think you can still do something evil, like this:
>
> import os
> if os.environ.get('NUMPY_VIA_API', '0') != '0':
> from numpy.lib.fromnumeric import *
> ...
>
> But I'm not sure how many milliseconds must be
Andrew Dalke:
> For reference, a page on using inline and doing so portably:
>
> http://www.greenend.org.uk/rjk/2003/03/inline.html
On Aug 15, 2008, at 9:02 AM, Charles R Harris wrote:
> Doesn't do the trick for compilers that aren't C99 compliant. And
> there are ma
On Aug 15, 2008, at 9:00 AM, Travis E. Oliphant wrote:
> 5) The testing code seems like a lot of duplication to save .01
> seconds
Personally I want to get rid of all in-body test code
and use nosetests or something similar. I know that's
not going to happen at the very least because I was
told
On Aug 15, 2008, at 8:36 AM, Charles R Harris wrote:
> The inline keyword also tends to be gcc/icc specific, although it
> is part of the C99 standard.
For reference, a page on using inline and doing so portably:
http://www.greenend.org.uk/rjk/2003/03/inline.html
On Aug 14, 2008, at 11:07 PM, Alan G Isaac wrote:
> Btw, numpy loads noticeably faster.
Any chance of someone reviewing my suggestions for
making the import somewhat faster still?
http://scipy.org/scipy/numpy/ticket/874
Andrew
Anne Archibald:
> Sadly, it's not possible without extra overhead. Specifically: the
> NaN-ignorant implementation does a single comparison between each
> array element and a placeholder, and decides based on the result
which
> to keep.
Did my example code go through? The test for NaN only
Robert Kern wrote:
> Or we could implement the inner loop of the minimum ufunc to return
> NaN if there is a NaN. Currently it just compares the two values
> (which causes the unpredictable results since having a NaN on either
> side of the < is always False). I would be amenable to that provided
>
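Robert's point about NaN and < can be made concrete with a toy Python loop (a sketch of the comparison semantics, not numpy's C inner loop):

```python
# Every ordered comparison against NaN is False, so a naive min loop's
# result depends on where the NaN happens to sit in the array.
nan = float("nan")
print(nan < 1.0, nan > 1.0, nan == nan)  # False False False

def naive_min(xs):
    best = xs[0]
    for x in xs[1:]:
        if x < best:   # False whenever x is NaN, so a later NaN is dropped...
            best = x   # ...but a leading NaN sticks as the seed value
    return best

print(naive_min([nan, 2.0, 1.0]))  # nan (NaN was the seed)
print(naive_min([2.0, nan, 1.0]))  # 1.0 (NaN silently skipped)
```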
On Aug 12, 2008, at 9:54 AM, Anne Archibald wrote:
> Er, is this actually a bug? I would instead consider the fact that
> np.min([]) raises an exception a bug of sorts - the identity of min is
> inf.
That'll break consistency with the normal 'max'
function in Python.
> Really nanmin of an array c
On Aug 12, 2008, at 7:05 AM, Christopher Barker wrote:
> Actually, I think it skips over NaN -- otherwise, the min would always
> be zero if there where a Nan, and "a very small negative number" if
> there were a -inf.
Here's the implementation, from lib/function_base.py
def nanmin(a, axis=None)
On Aug 5, 2008, at 3:53 PM, Alan McIntyre wrote:
> At the moment, bench() doesn't work. That's something I'll try to
> look at this week, but from Friday until the 15th I'm going to be
> driving most of the time and may not get as much done as I'd like.
Thanks for the confirmation.
The import sp
I just added ticket 874
http://scipy.org/scipy/numpy/ticket/874
which on my machine takes the import time from 0.15 seconds down to
0.093 seconds.
A bit over a month ago it was about 0.33 seconds. :)
The biggest trick I didn't apply was to defer importing the re
module, and only compile
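For reproducing numbers like these, one approach is to time a fresh interpreter and subtract a bare 'pass' baseline, which is roughly what 'time python -c ...' comparisons in this thread do. A small harness along those lines (json stands in for numpy so it runs anywhere):

```python
import subprocess
import sys
import time

def cold_import_time(module: str) -> float:
    # Time a fresh interpreter importing `module`, minus interpreter
    # startup measured with a bare `pass`. Noisy, but comparable across runs.
    def run(code: str) -> float:
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", code], check=True)
        return time.perf_counter() - t0
    return run(f"import {module}") - run("pass")

print(f"{cold_import_time('json'):.3f}s")
```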
On Aug 5, 2008, at 4:19 AM, Robert Kern wrote:
> def test(...):
> ...
> test.__test__ = False
That did it - thanks!
Does "import numpy; numpy.bench()" work for anyone? When I try it I get
[josiah:~] dalke% python -c 'import numpy; numpy.bench()'
---
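For context, the trick Robert suggested works because nose consults a __test__ attribute before collecting a name that looks like a test; a minimal illustration (test_helper is a hypothetical name):

```python
def test_helper():
    # The name matches nose's test pattern, but this is a utility function...
    return "not actually a test"

# ...so mark it explicitly; nose skips callables with __test__ = False.
test_helper.__test__ = False
```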
On Aug 5, 2008, at 2:00 AM, Robert Kern wrote:
> You have old stuff in your checkout/installation. Make sure you have
> deleted all of the *.pycs and directories which have been deleted in
> SVN.
Now that I've fixed that, I can tell that I made a
mistake related to the self-test code. I can't fig
On Aug 5, 2008, at 2:00 AM, Robert Kern wrote:
> You have old stuff in your checkout/installation. Make sure you have
> deleted all of the *.pycs and directories which have been deleted in
> SVN.
I removed all .pyc files, wiped my installation directory, and it
works now as I expect it to work.
I'm working on the patches for reducing the import overhead. I want
to make sure I don't break anything. I'm trying to figure out how to
run all of the tests. I expected, based on the following
Alan McIntyre wrote:
> They actually do two different things; numpy.test() runs test for all
> of
I've got a proof of concept that takes the time on my machine to
"import numpy" from 0.21 seconds down to 0.08 seconds. Doing that
required some somewhat awkward things, like deferring all 'import re'
statements. I don't think that's stable in the long run because
people will blithely impo
On Jul 31, 2008, at 10:02 PM, Robert Kern wrote:
> 1) Everything exposed by "from numpy import *" still needs to work.
Does that include numpy.Tester? I don't mean numpy.test() nor
numpy.bench().
Does that include numpy.PackageLoader? I don't mean numpy.pkgload.
> 2) The improvement in impo
I don't see a place to submit patches. Is there a patch manager for
numpy?
Here's a patch to defer importing 'tempfile' until needed. I
previously mentioned one other place that didn't need tempfile. With
this there is no 'import tempfile' during 'import numpy'
This improves startup by a
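The pattern in that patch, moving the import into the function that needs it, looks like this (write_scratch is a made-up illustration, not numpy code):

```python
import os

def write_scratch(data: bytes) -> str:
    # Deferred import: the cost of `import tempfile` is paid on the first
    # call, not at module import time; afterwards it's a cheap
    # sys.modules cache lookup.
    import tempfile
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(data)
        return f.name

path = write_scratch(b"hello")
print(os.path.exists(path))  # True
os.remove(path)
```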
On Jul 31, 2008, at 12:03 PM, Robert Kern wrote:
> That said, the reason those particular docstrings are verbose is
> because I wanted people to know why those functions exist there (e.g.
> "This is an internal utility function").
Err, umm, you mean that first line of the second paragraph
in t
On Jul 31, 2008, at 11:42 AM, Stéfan van der Walt wrote:
> Maybe when we're convinced that there is a lot to be gained from
> making such a change. From my perspective, it doesn't look good:
>
> I) Major code breakage
> II) Confused users
> III) More difficult function discovery for beginners
I'm
On Jul 31, 2008, at 3:53 AM, David Cournapeau wrote:
> You are supposed to run the tests on an installed numpy, not in the
> sources:
>
> import numpy
> numpy.test(verbose = 10)
Doesn't that make things more cumbersome to test? That is, if I were
to make a change I would need to:
- python se
On Jul 31, 2008, at 4:21 AM, Alan McIntyre wrote:
> They actually do two different things; numpy.test() runs test for all
> of numpy, and numpy.testing.test() runs tests for numpy.testing only.
> There are similar functions in numpy.lib, numpy.core, etc.
Really? This is the code from numpy/__init
On Jul 30, 2008, at 10:51 PM, Alan McIntyre wrote:
> I suppose it's necessary for providing the test() and bench()
> functions in subpackages, but I think that isn't a good reason to impose
> upon all users the time required to set up numpy.testing.
I just posted this in my reply to Stéfan, but I'll say
On Jul 30, 2008, at 10:59 PM, Stéfan van der Walt wrote:
> I.e. most people don't start up NumPy all the time -- they import
> NumPy, and then do some calculations, which typically take longer than
> the import time.
Is that interactively, or is that through programs?
> For a benefit of 0.03s, I
On Jul 4, 2008, at 2:22 PM, Andrew Dalke wrote:
> [josiah:numpy/build/lib.macosx-10.3-fat-2.5] dalke% time python -c
> 'pass'
> 0.015u 0.042s 0:00.06 83.3% 0+0k 0+0io 0pf+0w
> [josiah:numpy/build/lib.macosx-10.3-fat-2.5] dalke% time python -c
> 'import numpy
On Jul 10, 2008, at 6:38 PM, Dan Lussier wrote:
> What I'm trying to do is to calculate the total total potential
> energy and coordination number of each atom within a relatively large
> simulation.
Anne Archibald already responded:
> you could try a three-dimensional grid (if your atoms aren't
>
On Jul 3, 2008, at 9:06 AM, Robert Kern wrote:
> Can you try the SVN trunk?
Sure. Though did you know it's not easy to find how to get numpy
from SVN? I had to go to the second page of Google, which linked to
someone's talk.
I expected to find a link to it at http://numpy.scipy.org/ .
Just
2008/7/1 Hanni Ali:
> Would it not be possible to import just the necessary module of
> numpy to
> meet the necessary functionality of your application.
Matthieu Brucher responded:
> IIRC, if you do import numpy.core as numpy, it starts by importing
> numpy, so it will be eve
On Jul 1, 2008, at 2:22 AM, Robert Kern wrote:
> Your use case isn't so typical and so suffers on the import time
> end of the
> balance.
I'm working on my presentation for EuroSciPy. "Isn't so typical"
seems to be a good summary of my first slide. :)
>> Any chance of cutting down on the nu
(Trying again now that I'm subscribed. BTW, there's no link to the
subscription page from numpy.scipy.org .)
The initial 'import numpy' loads a huge number of modules, even when
I don't need them.
Python 2.5 (r25:51918, Sep 19 2006, 08:49:13)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on
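One way to see the scale of the problem is to count what a single import pulls into sys.modules, in a fresh interpreter so modules already loaded don't mask the result (json stands in for numpy here):

```python
import subprocess
import sys

# Run the probe in a child interpreter: snapshot sys.modules, do the
# import, and report how many modules the import dragged in.
probe = (
    "import sys\n"
    "before = set(sys.modules)\n"
    "import json\n"
    "print(len(set(sys.modules) - before))\n"
)
out = subprocess.run([sys.executable, "-c", probe],
                     capture_output=True, text=True, check=True)
n_new = int(out.stdout)
print(n_new, "modules newly loaded by 'import json'")
```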