Re: [Python-Dev] sum(...) limitation

2014-08-13 Thread Ronald Oussoren

On 12 Aug 2014, at 10:02, Armin Rigo  wrote:

> Hi all,
> 
> The core of the matter is that if we repeatedly __add__ strings from a
> long list, we get O(n**2) behavior.  For one point of view, the
> reason is that the additions proceed in left-to-right order.  Indeed,
> sum() could proceed in a more balanced tree-like order: from [x0, x1,
> x2, x3, ...], reduce the list to [x0+x1, x2+x3, ...]; then repeat
> until there is only one item in the final list.  This order ensures
> that sum(list_of_strings) is at worst O(n log n).  It might in
> practice be close enough to linear not to matter.  It also greatly
> improves the precision of sum(list_of_floats) (though it does not
> reach the precision levels of math.fsum()).
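The balanced, tree-like reduction described above can be sketched in a few lines (an illustrative toy, not sum()'s actual implementation; the helper name `pairwise_sum` is made up):

```python
def pairwise_sum(items):
    """Repeatedly reduce [x0, x1, x2, x3, ...] to [x0+x1, x2+x3, ...]
    until one item remains.  Each element participates in only O(log n)
    additions, so summing n strings is O(n log n) instead of O(n**2)."""
    items = list(items)
    if not items:
        return 0
    while len(items) > 1:
        items = [items[i] + items[i + 1] if i + 1 < len(items) else items[i]
                 for i in range(0, len(items), 2)]
    return items[0]
```

The same shape also improves float accuracy, since partial sums stay closer in magnitude.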

I wonder why nobody has yet mentioned last year’s discussion of the same issue: 
http://marc.info/?l=python-ideas&m=137359619831497&w=2

Maybe someone can write a PEP about this that can be pointed to when the question 
is discussed again next summer ;-)

Ronald

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Mac popups running make test

2015-05-14 Thread Ronald Oussoren

> On 12 May 2015, at 18:14, Tal Einat  wrote:
> 
> On Tue, May 12, 2015 at 4:14 PM, Skip Montanaro
>  wrote:
>> 
>>> Twice now, I've gotten this popup: ...
>> 
>> Let me improve my request, as it seems there is some confusion about
>> what I want. I'm specifically not asking that the popups not be
>> displayed. I don't mind dismissing them. When they appear, I would,
>> however, like to glance over at the stream of messages emitted by the
>> test runner and see a message about it being expected. It seems that
>> the tests which can trigger the crash reporter do this.
> 
> In my case, the popups appear but then disappear within a fraction of
> a second, and this happens about 10-20 times when running the full
> test suite. So I don't have a chance to interact with the popups, and
> this causes test failures.
> 
> Also, when running a large suite of tests, I may not be looking at the
> screen by the time these popups appear. I wouldn't want the tests to
> fail nor would I want the test run to stall.
> 
> I can't test this right now, but does disabling the "network" resource
> avoid these popups? Though even if it does we'll still need a way to
> run network-related tests on OSX.

The only way I know to easily avoid these pop-ups is to turn off the local 
firewall while testing.  

Signing the interpreter likely also works, but probably only when using a paid 
developer certificate.

Ronald



Re: [Python-Dev] Investigating time for `import requests`

2017-10-02 Thread Ronald Oussoren
On 3 Oct 2017, at 04:29, Barry Warsaw wrote:

> On Oct 2, 2017, at 14:56, Brett Cannon  wrote:
> 
>> So Mercurial specifically is an odd duck because they already do lazy 
>> importing (in fact they are using the lazy loading support from importlib). 
>> In terms of all of this discussion of tweaking import to be lazy, I think 
>> the best approach would be providing an opt-in solution that CLI tools can 
>> turn on ASAP while the default stays eager. That way everyone gets what they 
>> want while the stdlib provides a shared solution that's maintained alongside 
>> import itself to make sure it functions appropriately.
> 
> The problem I think is that to get full benefit of lazy loading, it has to be 
> turned on globally for bare ‘import’ statements.  A typical application has 
> tons of dependencies and all those libraries are also doing module global 
> imports, so unless lazy loading somehow covers them, it’ll be an incomplete 
> gain.  But of course it’ll take forever for all your dependencies to use 
> whatever new API we come up with, and if it’s not as convenient to write as 
> ‘import foo’ then I suspect it won’t much catch on anyway.
> 

One thing to keep in mind is that imports can have important side-effects. 
Turning every import statement into a lazy import will not be backward 
compatible. 

Ronald


Re: [Python-Dev] Investigating time for `import requests`

2017-10-20 Thread Ronald Oussoren
On 10 Oct 2017, at 01:48, Brett Cannon wrote:

> 
> 
> On Mon, Oct 2, 2017, 17:49 Ronald Oussoren wrote:
> On 3 Oct 2017, at 04:29, Barry Warsaw wrote:
> 
> > On Oct 2, 2017, at 14:56, Brett Cannon wrote:
> >
> >> So Mercurial specifically is an odd duck because they already do lazy 
> >> importing (in fact they are using the lazy loading support from 
> >> importlib). In terms of all of this discussion of tweaking import to be 
> >> lazy, I think the best approach would be providing an opt-in solution that 
> >> CLI tools can turn on ASAP while the default stays eager. That way 
> >> everyone gets what they want while the stdlib provides a shared solution 
> >> that's maintained alongside import itself to make sure it functions 
> >> appropriately.
> >
> > The problem I think is that to get full benefit of lazy loading, it has to 
> > be turned on globally for bare ‘import’ statements.  A typical application 
> > has tons of dependencies and all those libraries are also doing module 
> > global imports, so unless lazy loading somehow covers them, it’ll be an 
> > incomplete gain.  But of course it’ll take forever for all your 
> > dependencies to use whatever new API we come up with, and if it’s not as 
> > convenient to write as ‘import foo’ then I suspect it won’t much catch on 
> > anyway.
> >
> 
> One thing to keep in mind is that imports can have important side-effects. 
> Turning every import statement into a lazy import will not be backward 
> compatible.
> 
> Yep, and that's a lesson Mercurial shared with me at PyCon US this year. My 
> planned approach has a blacklist for modules to only load eagerly.

I’m not sure if I understand. Do you want to turn on lazy loading for the 
stdlib only (with a blacklist for modules that won’t work that way), or 
generally? In the latter case this would still not be backward compatible. 
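For context, the opt-in machinery Brett refers to already exists in the stdlib as importlib.util.LazyLoader. A minimal sketch of how a CLI tool could opt in per module, following the recipe from the importlib documentation (the `lazy_import` helper name is made up):

```python
import importlib.util
import sys

def lazy_import(name):
    # The module object is created immediately, but its body only runs on
    # first attribute access -- which is exactly why import-time side
    # effects make a *global* lazy default backward incompatible.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module
```

A blacklist of eagerly loaded modules would then simply bypass this helper for the named modules.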

Ronald


Re: [Python-Dev] What is the design purpose of metaclasses vs code generating decorators? (was Re: PEP 557: Data Classes)

2017-10-20 Thread Ronald Oussoren

> On 14 Oct 2017, at 16:37, Martin Teichmann  wrote:
> 
>> Things that will not work if Enum does not have a metaclass:
>> 
>> list(EnumClass) -> list of enum members
>> dir(EnumClass)  -> custom list of "interesting" items
>> len(EnumClass)  -> number of members
>> member in EnumClass -> True or False
>> 
>> - protection from adding, deleting, and changing members
>> - guards against reusing the same name twice
>> - possible to have properties and members with the same name (i.e. "value"
>> and "name")
> 
> In current Python this is true. But if we would go down the route of
> PEP 560 (which I just found, I wasn't involved in its discussion),
> then we could just add all the needed functionality to classes.
> 
> I would do it slightly different than proposed in PEP 560:
> classmethods are very similar to methods on a metaclass. They are just
> not called by the special method machinery. I propose that the
> following is possible:
> 
> >>> class Spam:
> ...   @classmethod
> ...   def __getitem__(self, item):
> ...       return "Ham"
> 
> >>> Spam[3]
> 'Ham'
> 
> this should solve most of your usecases.

Except when you want to implement __getitem__ for instances as well :-). An 
important difference between @classmethod and methods on the metaclass is that 
@classmethod methods live in the same namespace as instance methods, while 
methods on the metaclass don’t.
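The namespace difference can be shown concretely with a metaclass, which is how this already works today (a minimal illustration; the names `Meta` and `Spam` are made up):

```python
class Meta(type):
    def __getitem__(cls, item):
        # Lives on the metaclass, so it applies to the class itself only.
        return "Ham"

class Spam(metaclass=Meta):
    def __getitem__(self, item):
        # Lives in the class namespace, so it applies to instances.
        # Both can coexist because they are in separate namespaces.
        return "Eggs"

print(Spam[3])    # class subscription goes through the metaclass
print(Spam()[3])  # instance subscription uses the regular method
```

With a @classmethod-based __getitem__ the two definitions would collide on the same name in the class namespace.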

I ran into similar problems in PyObjC: Apple’s Cocoa libraries use instance and 
class methods with the same name. That works when using methods on a metaclass, 
but not when using something similar to @classmethod.  Because of this PyObjC is a 
heavy user of metaclasses (generated from C for additional fun). A major 
disadvantage of this is that it tends to confuse smart editors. 

Ronald


Re: [Python-Dev] Reminder: 12 weeks to 3.7 feature code cutoff

2017-11-02 Thread Ronald Oussoren

> On 1 Nov 2017, at 22:47, Ned Deily  wrote:
> 
> Happy belated Halloween to those who celebrate it; I hope it wasn't too 
> scary!  Also possibly scary: we have just a little over 12 weeks remaining 
> until Python 3.7's feature code cutoff, 2018-01-29.  Those 12 weeks include a 
> number of traditional holidays around the world so, if you are planning on 
> writing another PEP for 3.7 or working on getting an existing one approved or 
> getting feature code reviewed, please plan accordingly.  If you have 
> something in the pipeline, please either let me know or, when implemented, 
> add the feature to PEP 537, the 3.7 Release Schedule PEP.

I’d still like to finish PEP 447, but don’t know if I can manage to find enough 
free time to do so.

Ronald


Re: [Python-Dev] [ssl] The weird case of IDNA

2018-01-02 Thread Ronald Oussoren


> On 31 Dec 2017, at 18:07, Nathaniel Smith  wrote:
> 
> On Dec 31, 2017 7:37 AM, "Stephen J. Turnbull" wrote:
> Nathaniel Smith writes:
> 
>  > Issue 1: Python's built-in IDNA implementation is wrong (implements
>  > IDNA 2003, not IDNA 2008).
> 
> Is "wrong" the right word here?  I'll grant you that 2008 is *better*,
> but typically in practice versions coexist for years.  Ie, is there no
> backward compatibility issue with registries that specified IDNA 2003?
> 
> Well, yeah, I was simplifying, but at the least we can say that always and 
> only using IDNA 2003 certainly isn't right :-). I think in most cases the 
> preferred way to deal with these kinds of issues is not to carry around an 
> IDNA 2003 implementation, but instead to use an IDNA 2008 implementation with 
> the "transitional compatibility" flag enabled in the UTS46 preprocessor? But 
> this is rapidly exceeding my knowledge.
> 
> This is another reason why we ought to let users do their own IDNA handling 
> if they want…

Do you know what the major browsers do w.r.t. IDNA support? If those 
unconditionally use IDNA 2008 it should be fairly safe to move to that in 
Python as well, because that would mean we’re less likely to run into backward 
compatibility issues.
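The difference between the two standards is easy to demonstrate with the stdlib codec, which implements the IDNA 2003 mapping (the IDNA 2008 behaviour mentioned in the comment is what a dedicated encoder such as the third-party idna package produces, and is an assumption here, not shown by this snippet):

```python
# IDNA 2003's "nameprep" step case-folds U+00DF to "ss", so the German
# label 'straße' collapses to plain ASCII.  An IDNA 2008 encoder instead
# keeps the character and emits a punycode form (e.g. 'xn--strae-oqa'
# with the third-party idna package) -- a different domain name.
label = "straße"
print(label.encode("idna"))
```

This lossy-vs-lossless difference is why the two standards can resolve the same Unicode label to different registered domains.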

Ronald



Re: [Python-Dev] OS-X builds for 3.7.0

2018-02-04 Thread Ronald Oussoren


> On 30 Jan 2018, at 18:42, Chris Barker  wrote:
> 
> Ned,
> 
> It looks like you're still building OS-X the same way as in the past:
> 
> Intel 32+64 bit, 10.6 compatibility
> 
> Is that right?
> 
> Might it be time for an update?
> 
> Do we still need to support 32 bit?  From:
> 
> https://apple.stackexchange.com/questions/99640/how-old-are-macs-that-cannot-run-64-bit-applications
>  
> 
> 
> There has not been a 32 bit-only Mac sold since 2006, and an out-of-the-box 32 
> bit OS since 2006 or 2007
> 
> I can't find out what the oldest OS version Apple supports, but I know my IT 
> dept has been making me upgrade, so I'm going to guess 10.8 or newer…

A binary with a newer deployment target than 10.6 would be nice, because AFAIK 
the installers are still built on a system running that old version of OSX. 
This results in binaries that cannot access newer system APIs like openat (and 
hence don’t support the “dir_fd” parameter in a number of functions in the os 
module).

> 
> And maybe we could even get rid of the "Framework" builds……

Why?  IMHO Framework builds are a nice way to get isolated side-by-side 
installations. Furthermore a number of Apple APIs (including the GUI libraries) 
don’t work unless you’re running from an application bundle, which the 
framework builds arrange for and normal unix builds don’t. 

Ronald



Re: [Python-Dev] [python-committers] [RELEASE] Python 3.7.0b1 is now available for testing

2018-02-04 Thread Ronald Oussoren


> On 1 Feb 2018, at 02:34, Ned Deily  wrote:
> 
> […]
> 
> Attention macOS users: with 3.7.0b1, we are providing a choice of
> two binary installers.  The new variant provides a 64-bit-only
> version for macOS 10.9 and later systems; this variant also now
> includes its own built-in version of Tcl/Tk 8.6.  We welcome your
> feedback.
> 

Why macOS 10.9 or later?  MacOS 10.10 introduced a number of useful APIs, in 
particular openat(2) and the like which are exposed using the “dir_fd” 
parameter of functions in the posix module.

That said, macOS 10.9 seems to be a fairly common minimal platform requirement 
these days for developers not tracking Apple’s releases closely.

Ronald



Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 08:31 AM, INADA Naoki  wrote:

Hi,

As far as I know, most amd64 and arm64 systems use only 48bit address spaces.
(except [1])

[1] 
https://software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf

It means there is a chance to compact some data structures.
I point out two examples below.

My question is; can we use 48bit pointer safely?

Not really, at least some CPUs can also address more memory than that. See 
 which talks about Linux support for 57-bit 
virtual addresses and 52-bit physical addresses. 
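The kind of trick the question asks about, and why wider virtual addresses break it, can be sketched as follows (a toy illustration in Python of what would really be C bit-twiddling; the helper names are made up):

```python
POINTER_BITS = 48  # the assumption that 57-bit (LA57) hardware invalidates

def pack(addr, tag):
    # Stash a small tag above bit 47.  This is only safe while the OS
    # never hands out virtual addresses needing more than 48 bits.
    assert addr < (1 << POINTER_BITS), "address does not fit in 48 bits"
    return (tag << POINTER_BITS) | addr

def unpack(word):
    # Split the packed word back into (address, tag).
    return word & ((1 << POINTER_BITS) - 1), word >> POINTER_BITS
```

On a system that allocates memory above 2**48, `pack` fails its assertion, which is the compatibility hazard being discussed.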

Ronald


Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 03:11 PM, "Joao S. O. Bueno"  wrote:

Not only that, but afaik Linux could simply raise that 57bit virtual
to 64bit virtual without previous
warning on any version change.

The change from 48-bit to 57-bit virtual addresses was not done without any 
warning because that would have broken too much code (IIRC due to at least some 
JS environments assuming 48bit pointers).

Ronald


Re: [Python-Dev] How can we use 48bit pointer safely?

2018-03-30 Thread Ronald Oussoren



On Mar 30, 2018, at 01:40 PM, Antoine Pitrou  wrote:


A safer alternative is to use the *lower* bits of pointers. The bottom
3 bits are always available for storing ancillary information, since
typically all heap-allocated data will be at least 8-bytes aligned
(probably 16-bytes aligned on 64-bit processes). However, you also get
less bits :-)

The lower bits are more interesting to use. I'm still hoping to find some time 
to experiment with tagged pointers some day; that could be interesting w.r.t. 
performance and memory use (at the cost of being ABI incompatible). 
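A sketch of the low-bit scheme Antoine describes (again a toy in Python rather than C; the names are made up):

```python
ALIGNMENT = 8              # heap allocations are at least 8-byte aligned
TAG_MASK = ALIGNMENT - 1   # so the bottom 3 bits of any pointer are zero

def tag_pointer(addr, tag):
    # The low bits are free *by construction* (alignment), which is why
    # this is safer than relying on unused high address bits.
    assert addr % ALIGNMENT == 0 and 0 <= tag <= TAG_MASK
    return addr | tag

def untag_pointer(word):
    # Recover (address, tag); masking restores the aligned address.
    return word & ~TAG_MASK, word & TAG_MASK
```

The trade-off is capacity: 3 tag bits here versus 16 in the high-bit scheme.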

Ronald


Re: [Python-Dev] Drop/deprecate Tkinter?

2018-05-02 Thread Ronald Oussoren


> On 2 May 2018, at 22:51, Ivan Pozdeev via Python-Dev  
> wrote:
> 
> As https://bugs.python.org/issue33257 and https://bugs.python.org/issue33316 
> showed, Tkinter is broken, for both Py2 and Py3, with both threaded and 
> non-threaded Tcl, since 2002 at least, and no-one gives a damn.

The second issue number doesn’t refer to a Tkinter issue, and the first is only 
about a month old and has reactions from a core developer. That’s not “nobody cares”. 

> 
> This seems to be a testament that very few people are actually interested in 
> or are using it.

Not necessarily, it primarily reflects that CPython is a volunteer-driven 
project.  This appears to be related to the interaction of Tkinter and threads, 
and requires hacking on C code.  That seriously shrinks the pool of people who 
feel qualified to work on this.

> 
> If that's so, there's no use keeping it in the standard library -- if 
> anything, because there's not enough incentive and/or resources to support 
> it. And to avoid screwing people (=me) up when they have the foolishness to 
> think they can rely on it in their projects -- nowhere in the docs it is said 
> that the module is only partly functional.

Tkinter is used fairly often as an easily available GUI library, and is not as 
unused as you imply. 

I don’t know how safe calling GUI code from multiple threads is in general 
(separate from this Tkinter issue), but I do know that this is definitely not 
safe across platforms: at least on macOS, calling GUI methods in Apple’s 
libraries from secondary threads is unsafe unless those methods are explicitly 
documented as thread-safe.

Ronald

> 
> -- 
> 
> Regards,
> Ivan
> 



Re: [Python-Dev] Stable ABI

2018-06-03 Thread Ronald Oussoren


> On 3 Jun 2018, at 12:03, Christian Tismer  wrote:
> 
> On 02.06.18 05:47, Nick Coghlan wrote:
>> On 2 June 2018 at 03:45, Jeroen Demeyer wrote:
>> 
>>On 2018-06-01 17:18, Nathaniel Smith wrote:
>> 
>>Unfortunately, very few people use the stable ABI currently, so it's
>>easy for things like this to get missed.
>> 
>> 
>>So there are no tests for the stable ABI in Python?
>> 
>> 
>> Unfortunately not.
>> 
>> https://bugs.python.org/issue21142 is an old issue suggesting automating
>> those checks (so we don't inadvertently add or remove symbols for
>> previously published stable ABI definitions), but it's not yet made it
>> to the state of being sufficiently well automated that it can be a
>> release checklist item in PEP 101.
>> 
>> Cheers,
>> Nick.
> 
> Actually, I think we don't need such a test any more, or we
> could use this one as a heuristic test:
> 
> I have written a script that scans all relevant header files
> and analyses all sections which are reachable in the limited API
> context.
> All macros that don't begin with an underscore which contain
> a "->tp_" string are the locations which will break.
> 
> I found exactly 7 locations where this is the case.
> 
> My PR will contain the 7 fixes plus the analysis script
> to go into tools. Preparing that in the evening.

Having tests would still be nice to detect changes to the stable ABI when they 
are made. 

Writing those tests is quite some work though, especially if they at least 
smoke test the limited ABI by compiling snippets that use all symbols that 
should be exposed by the limited ABI. Writing those tests should be fairly 
simple for someone who knows how to write C extensions, but is some work.

Writing tests that complain when the headers expose symbols that shouldn’t be 
exposed is harder, due to the need to parse headers (either by hacking 
something together using regular expressions, or by using tools like gccxml or 
clang’s C API).  
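A crude regex-based scan along the lines Christian describes could look like this (purely illustrative, and far less thorough than real header parsing; the function name is made up and multi-line macros are not handled):

```python
import re

# Matches the macro name in a "#define NAME(...)" line.
MACRO_RE = re.compile(r"#\s*define\s+([A-Za-z]\w*)")

def suspicious_macros(header_text):
    """Find public (non-underscore) macros whose body touches a ->tp_*
    field: these would break under the limited API, where PyTypeObject
    is an opaque structure."""
    hits = []
    for line in header_text.splitlines():
        m = MACRO_RE.match(line)
        if m and not m.group(1).startswith("_") and "->tp_" in line:
            hits.append(m.group(1))
    return hits
```

Such a scan only answers the "does it compile/work" question; the add/remove-symbol questions still need a recorded list of the intended stable ABI to diff against.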

BTW. The problem with the tool in issue 21142 is that it looks at the symbols 
exposed by libpython on Linux, and that exposes more symbols than just the 
limited ABI. 
Ronald



Re: [Python-Dev] Stable ABI

2018-06-04 Thread Ronald Oussoren


> On 4 Jun 2018, at 08:35, Ronald Oussoren  wrote:
> 
> 
> 
>> On 3 Jun 2018, at 17:04, Eric V. Smith wrote:
>> 
>> On 6/3/2018 10:55 AM, Christian Tismer wrote:
>>> On 03.06.18 13:18, Ronald Oussoren wrote:
>>>> 
>>>> 
>>>>> On 3 Jun 2018, at 12:03, Christian Tismer wrote:
>>> ...
>>>>> 
>>>>> I have written a script that scans all relevant header files
>>>>> and analyses all sections which are reachable in the limited API
>>>>> context.
>>>>> All macros that don't begin with an underscore which contain
>>>>> a "->tp_" string are the locations which will break.
>>>>> 
>>>>> I found exactly 7 locations where this is the case.
>>>>> 
>>>>> My PR will contain the 7 fixes plus the analysis script
>>>>> to go into tools. Preparing that in the evening.
>>>> 
>>>> Having tests would still be nice to detect changes to the stable ABI when 
>>>> they are made.
>>>> 
>>>> Writing those tests is quite some work though, especially if those at 
>>>> least smoke test the limited ABI by compiling snippets the use all symbols 
>>>> that should be exposed by the limited ABI. Writing those tests should be 
>>>> fairly simple for someone that knows how to write C extensions, but is 
>>>> some work.
>>>> 
>>>> Writing a tests that complain when the headers expose symbols that 
>>>> shouldn’t be exposed is harder, due to the need to parse headers (either 
>>>> by hacking something together using regular expressions, or by using tools 
>>>> like gccxml or clang’s C API).
>>> What do you mean?
>>> My script does that with all "tp_*" type fields.
>>> What else would you want to check?
>> 
>> I think Ronald is saying we're trying to answer a few questions:
>> 
>> 1. Did we accidentally drop anything from the stable ABI?
>> 
>> 2. Did we add anything to the stable ABI that we didn't mean to?
>> 
>> 3. (and one of mine): Does the stable ABI already contain things that we 
>> don't expect it to?
> 
> That’s correct.  There have been instances of the second item over the years, 
> and not all of them have been caught before releases.  What doesn’t help for 
> all of these is that the stable ABI documentation says that every documented 
> symbol is part of the stable ABI unless there’s explicit documentation to the 
> contrary. This makes researching if functions are intended to be part of the 
> stable ABI harder.
> 
> And also:
> 
> 4. Does the stable ABI actually work?
> 
> Christian’s script finds cases where exposed names don’t actually work when 
> you try to use them.

To reply to myself, the gist below is a very crude version of what I was trying 
to suggest:

https://gist.github.com/ronaldoussoren/fe4f80351a7ee72c245025df7b2ef1ed#file-gistfile1-txt

The gist is far from usable, but shows some tests that check that symbols in 
the stable ABI can be used, and tests that everything exported in the stable 
ABI is actually tested. 

Again, the code in the gist is a crude hack and I have currently no plans to 
turn this into something that could be added to the testsuite.

Ronald


Re: [Python-Dev] Stable ABI

2018-06-04 Thread Ronald Oussoren


> On 3 Jun 2018, at 17:04, Eric V. Smith  wrote:
> 
> On 6/3/2018 10:55 AM, Christian Tismer wrote:
>> On 03.06.18 13:18, Ronald Oussoren wrote:
>>> 
>>> 
>>>> On 3 Jun 2018, at 12:03, Christian Tismer  wrote:
>> ...
>>>> 
>>>> I have written a script that scans all relevant header files
>>>> and analyses all sections which are reachable in the limited API
>>>> context.
>>>> All macros that don't begin with an underscore which contain
>>>> a "->tp_" string are the locations which will break.
>>>> 
>>>> I found exactly 7 locations where this is the case.
>>>> 
>>>> My PR will contain the 7 fixes plus the analysis script
>>>> to go into tools. Preparing that in the evening.
>>> 
>>> Having tests would still be nice to detect changes to the stable ABI when 
>>> they are made.
>>> 
>>> Writing those tests is quite some work though, especially if those at least 
>>> smoke test the limited ABI by compiling snippets the use all symbols that 
>>> should be exposed by the limited ABI. Writing those tests should be fairly 
>>> simple for someone that knows how to write C extensions, but is some work.
>>> 
>>> Writing a tests that complain when the headers expose symbols that 
>>> shouldn’t be exposed is harder, due to the need to parse headers (either by 
>>> hacking something together using regular expressions, or by using tools 
>>> like gccxml or clang’s C API).
>> What do you mean?
>> My script does that with all "tp_*" type fields.
>> What else would you want to check?
> 
> I think Ronald is saying we're trying to answer a few questions:
> 
> 1. Did we accidentally drop anything from the stable ABI?
> 
> 2. Did we add anything to the stable ABI that we didn't mean to?
> 
> 3. (and one of mine): Does the stable ABI already contain things that we 
> don't expect it to?

That’s correct.  There have been instances of the second item over the years, 
and not all of them have been caught before releases.  What doesn’t help for 
all of these is that the stable ABI documentation says that every documented 
symbol is part of the stable ABI unless there’s explicit documentation to the 
contrary. This makes researching if functions are intended to be part of the 
stable ABI harder.

And also:

4. Does the stable ABI actually work?

Christian’s script finds cases where exposed names don’t actually work when you 
try to use them.

Ronald



Re: [Python-Dev] Idea: reduce GC threshold in development mode (-X dev)

2018-06-08 Thread Ronald Oussoren


> On 8 Jun 2018, at 12:36, Serhiy Storchaka  wrote:
> 
> 08.06.18 11:31, Victor Stinner wrote:
>> Do you suggest to trigger a fake "GC collection" which would just
>> visit all objects with a no-op visit callback? I like the idea!
>> 
>> Yeah, that would help to detect objects in an inconsistent state and
>> reuse the existing implemented visit methods of all types.
>> 
>> Would you be interested to try to implement this new debug feature?
> 
> It is simple:
> 
> #ifdef Py_DEBUG
> void
> _PyGC_CheckConsistency(void)
> {
> int i;
> if (_PyRuntime.gc.collecting) {
> return;
> }
> _PyRuntime.gc.collecting = 1;
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> update_refs(GEN_HEAD(i));
> }
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> subtract_refs(GEN_HEAD(i));
> }
> for (i = 0; i < NUM_GENERATIONS; ++i) {
> revive_garbage(GEN_HEAD(i));
> }
> _PyRuntime.gc.collecting = 0;
> }
> #endif

Wouldn’t it be enough to visit just the newly tracked object in 
PyObject_GC_Track, with a visitor function that does something minimal to 
verify that the object value is sane, for example by checking 
PyType_Ready(Py_TYPE(op)).

That would find issues where objects are tracked before they are initialised 
far enough to be safe to visit, without changing GC behavior. I have no idea 
what the performance impact of this is, though.

Ronald



Re: [Python-Dev] Some data points for the "annual release cadence" concept

2018-06-13 Thread Ronald Oussoren


> On 13 Jun 2018, at 15:42, Nick Coghlan  wrote:
> 
> On 13 June 2018 at 02:23, Guido van Rossum wrote:
> So, to summarize, we need something like six for C?
> 
> Yeah, pretty much - once we can get to the point where it's routine for folks 
> to be building "abiX" or "abiXY" wheels (with the latter not actually being a 
> defined compatibility tag yet, but having the meaning of "targets the stable 
> ABI as first defined in CPython X.Y"), rather than feature release specific 
> "cpXYm" ones, then a *lot* of the extension module maintenance pain otherwise 
> arising from more frequent CPython releases should be avoided.
> 
> There'd still be a lot of other details to work out to turn the proposed 
> release cadence change into a practical reality, but this is the key piece 
> that I think is a primarily technical hurdle: simplifying the current 
> "wheel-per-python-version-per-target-platform" community project build 
> matrices to instead be "wheel-per-target-platform”.

This requires getting people to mostly stop using the non-stable ABI, and that 
could be a lot of work for projects that have existing C extensions that don’t 
use the stable ABI or cython/cffi/… 

That said, the CPython API tends to be fairly stable over releases and even 
without using the stable ABI supporting faster CPython feature releases 
shouldn’t be too onerous, especially for projects with some kind of automation 
for creating release artefacts (such as a CI system).

Ronald



Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-06-17 Thread Ronald Oussoren



> On 17 Jun 2018, at 11:00, Jeroen Demeyer  wrote:
> 
> Hello,
> 
> I have been working on a slightly different PEP to use a new type slot 
> tp_ccalloffset instead the base_function base class. You can see the work in 
> progress here:
> 
> https://github.com/jdemeyer/PEP-ccall
> 
> By creating a new protocol that each class can implement, there is a full 
> decoupling between the features of a class and between the class hierarchy 
> (such coupling was complained about during the PEP 575 discussion). So I got 
> convinced that this is a better approach.
> 
> It also has the advantage that changes can be made more gradually: this PEP 
> changes nothing at all on the Python side, it only changes the CPython 
> implementation. I still think that it would be a good idea to refactor the 
> class hierarchy, but that's now an independent issue.
> 
> Another advantage is that it's more general and easier for existing classes 
> to use the protocol (PEP 575 on the other hand requires subclassing from 
> base_function which may not be compatible with an existing class hierarchy).

This looks interesting. Why did you add a tp_ccalloffset slot to the type, 
with the actual information stored in instances, instead of storing the 
information in a type slot? 

Ronald



Re: [Python-Dev] Some data points for the "annual release cadence" concept

2018-06-17 Thread Ronald Oussoren


> On 15 Jun 2018, at 13:00, Nick Coghlan  wrote:
> 
> On 14 June 2018 at 06:30, Ronald Oussoren  <mailto:ronaldousso...@mac.com>> wrote:
>> On 13 Jun 2018, at 15:42, Nick Coghlan > <mailto:ncogh...@gmail.com>> wrote:
>> 
>> Yeah, pretty much - once we can get to the point where it's routine for 
>> folks to be building "abiX" or "abiXY" wheels (with the latter not actually 
>> being a defined compatibility tag yet, but having the meaning of "targets 
>> the stable ABI as first defined in CPython X.Y"), rather than feature 
>> release specific "cpXYm" ones, then a *lot* of the extension module 
>> maintenance pain otherwise arising from more frequent CPython releases 
>> should be avoided.
>> 
>> There'd still be a lot of other details to work out to turn the proposed 
>> release cadence change into a practical reality, but this is the key piece 
>> that I think is a primarily technical hurdle: simplifying the current 
>> "wheel-per-python-version-per-target-platform" community project build 
>> matrices to instead be "wheel-per-target-platform”.
> 
> This requires getting people to mostly stop using the non-stable ABI, and 
> that could be a lot of work for projects that have existing C extensions that 
> don’t use the stable ABI or cython/cffi/… 
> 
> That said, the CPython API tends to be fairly stable over releases and even 
> without using the stable ABI supporting faster CPython feature releases 
> shouldn’t be too onerous, especially for projects with some kind of 
> automation for creating release artefacts (such as a CI system).
> 
> Right, there would still be a non-zero impact on projects that ship binary 
> artifacts.
> 
> Having a viable stable ABI as a target just allows third party projects to 
> make the trade-off between the upfront cost of migrating to the stable ABI 
> (but then only needing to rebuild binaries when their own code changes), and 
> the ongoing cost of maintaining an extra few sets of binary wheel archives. I 
> think asking folks to make that trade-off on a case by case basis is 
> reasonable, whereas back in the previous discussion I considered *only* 
> offering the second option to be unreasonable.

I agree.  I haven’t seriously looked at the stable ABI yet, so I don’t know if 
there are reasons for not migrating to it beyond Py2 support and the effort 
required.  For my own projects (both public and not) I have some that could 
possibly migrate to the stable ABI, and some that cannot because they access 
information that isn’t public in the stable ABI. 

I generally still use the non-stable C API when I write extensions, basically 
because I already know how to do so. 

Ronald



Re: [Python-Dev] PEP 575 (Unifying function/method classes) update

2018-06-17 Thread Ronald Oussoren


> On 17 Jun 2018, at 16:31, Stefan Behnel  wrote:
> 
> Ronald Oussoren schrieb am 17.06.2018 um 14:50:
>> Why did you add a tp_ccalloffset slot to the type with the actual 
>> information in instances instead of storing the information in a slot? 
> 
> If the configuration of the callable was in the type, you would need a
> separate type for each kind of callable. That would quickly explode. Think
> of this as a generalised PyCFunction interface to arbitrary callables.
> There is a function pointer and some meta data, and both are specific to an
> instance.

That’s true for PyCFunction, but not necessarily as a general replacement for 
the tp_call slot.  In my code I’d basically use the same function pointer and 
metadata for all instances (that is, more like PyFunction than PyCFunction). 

> 
> Also, there are usually only a limited number of callables around, so
> memory doesn't matter. (And memory usage would be a striking reason to have
> something in a type rather than an instance.)

I was mostly surprised that something that seems to be a replacement for 
tp_call stores the interesting information in instances instead of the type 
itself. 

Ronald



Re: [Python-Dev] Computed Goto dispatch for Python 2

2015-05-29 Thread Ronald Oussoren


On 28 May 2015, at 21:37, Chris Barker  wrote the following:

> On Thu, May 28, 2015 at 12:25 PM, Sturla Molden  
> wrote:
> 
>> The system
>> Python should be left alone as it is.
> 
> absolutely!
> 
> By the way, py2app will build an application bundle that depends on the 
> system python, indeed, that's all it will do if you run it with the system 
> python, as Apple has added some non-redistributable bits in there.

That's not quite the reason. It's more that I don't want to guess whether or 
not it is valid to bundle binaries from a system location.  Furthermore 
bundling files from a base install of the OS is pretty useless, especially when 
those binaries won't run on earlier releases anyway due to the compilation 
options used. 

Ronald


[Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-22 Thread Ronald Oussoren
Hi,

Another summer with another EuroPython, which means it’s time again to try to 
revive PEP 447…

I’ve just pushed a minor update to the PEP and would like to get some feedback 
on this, arguably fairly esoteric, PEP.

The PEP proposes to replace direct access to the class __dict__ in 
object.__getattribute__ and super.__getattribute__ by calls to a new special 
method, to give the metaclass more control over attribute lookup, especially for 
access using a super() object.  This is needed for classes that cannot store 
(all) descriptors in the class dict for some reason; see the PEP text for a 
real-world example of that.
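Since the hook doesn’t exist in current CPython, a pure-Python emulation of the proposed lookup loop may help make the idea concrete. All class and attribute names below are made up for illustration, and the hook is called explicitly rather than by the interpreter:

```python
# Pure-Python emulation of the proposed lookup; the real PEP 447 hook
# does not exist in current CPython, so we call it explicitly here.

class LazyMeta(type):
    # the proposed metaclass hook
    def __getdescriptor__(cls, name):
        try:
            return cls.__dict__[name]          # default behaviour: peek in __dict__
        except KeyError:
            pass
        if name.startswith("dynamic_"):        # pretend this comes from an
            return lambda self, _n=name: _n    # external runtime (like PyObjC)
        raise AttributeError(name)

class Proxy(metaclass=LazyMeta):
    def regular(self):
        return "regular"

def getdescriptor(cls, name):
    # use the hook when the metaclass defines one, else peek in __dict__
    meta = type(cls)
    if hasattr(meta, "__getdescriptor__"):
        return meta.__getdescriptor__(cls, name)
    try:
        return cls.__dict__[name]
    except KeyError:
        raise AttributeError(name) from None

def lookup(mro_list, name):
    # the MRO-walking loop from the PEP
    for cls in mro_list:
        try:
            return getdescriptor(cls, name)
        except AttributeError:
            pass
    raise AttributeError(name)

p = Proxy()
print(lookup(Proxy.__mro__, "regular")(p))       # -> regular
print(lookup(Proxy.__mro__, "dynamic_spam")(p))  # -> dynamic_spam
```

With the PEP in place the interpreter itself would call the hook from ``object.__getattribute__`` and ``super.__getattribute__``; here the explicit ``lookup()`` stands in for that.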

Regards,

  Ronald


The PEP text (with an outdated section with benchmarks removed):

PEP: 447
Title: Add __getdescriptor__ method to metaclass
Version: $Revision$
Last-Modified: $Date$
Author: Ronald Oussoren 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Jun-2013
Post-History: 2-Jul-2013, 15-Jul-2013, 29-Jul-2013, 22-Jul-2015


Abstract
========

Currently ``object.__getattribute__`` and ``super.__getattribute__`` peek
in the ``__dict__`` of classes on the MRO for a class when looking for
an attribute. This PEP adds an optional ``__getdescriptor__`` method to
a metaclass that replaces this behavior and gives more control over attribute
lookup, especially when using a `super`_ object.

That is, the MRO walking loop in ``_PyType_Lookup`` and
``super.__getattribute__`` gets changed from::

    def lookup(mro_list, name):
        for cls in mro_list:
            if name in cls.__dict__:
                return cls.__dict__[name]

        return NotFound

to::

    def lookup(mro_list, name):
        for cls in mro_list:
            try:
                return cls.__getdescriptor__(name)
            except AttributeError:
                pass

        return NotFound

The default implementation of ``__getdescriptor__`` looks in the class
dictionary::

    class type:
        def __getdescriptor__(cls, name):
            try:
                return cls.__dict__[name]
            except KeyError:
                raise AttributeError(name) from None

Rationale
=========

It is currently not possible to influence how the `super class`_ looks
up attributes (that is, ``super.__getattribute__`` unconditionally
peeks in the class ``__dict__``), and that can be problematic for
dynamic classes that can grow new methods on demand.

The ``__getdescriptor__`` method makes it possible to dynamically add
attributes even when looking them up using the `super class`_.

The new method affects ``object.__getattribute__`` (and
`PyObject_GenericGetAttr`_) as well for consistency and to have a single
place to implement dynamic attribute resolution for classes.

Background
----------

The current behavior of ``super.__getattribute__`` causes problems for
classes that are dynamic proxies for other (non-Python) classes or types,
an example of which is `PyObjC`_. PyObjC creates a Python class for every
class in the Objective-C runtime, and looks up methods in the Objective-C
runtime when they are used. This works fine for normal access, but doesn't
work for access with `super`_ objects. Because of this PyObjC currently
includes a custom `super`_ that must be used with its classes, as well as
completely reimplementing `PyObject_GenericGetAttr`_ for normal attribute
access.

The API in this PEP makes it possible to remove the custom `super`_ and
simplifies the implementation because the custom lookup behavior can be
added in a central location.

.. note::

   `PyObjC`_ cannot precalculate the contents of the class ``__dict__``
   because Objective-C classes can grow new methods at runtime. Furthermore,
   Objective-C classes tend to contain a lot of methods while most Python
   code will only use a small subset of them; this makes precalculating
   unnecessarily expensive.


The superclass attribute lookup hook
====================================

Both ``super.__getattribute__`` and ``object.__getattribute__`` (or
`PyObject_GenericGetAttr`_ and in particular ``_PyType_Lookup`` in C code)
walk an object's MRO and currently peek in the class' ``__dict__`` to look up
attributes.

With this proposal both lookup methods no longer peek in the class ``__dict__``
but call the special method ``__getdescriptor__``, which is a slot defined
on the metaclass. The default implementation of that method looks
up the name in the class ``__dict__``, which means that attribute lookup is
unchanged unless a metatype actually defines the new special method.

Aside: Attribute resolution algorithm in Python
-----------------------------------------------

The attribute resolution process as implemented by ``object.__getattribute__``
(or ``PyObject_GenericGetAttr`` in CPython's implementation) is fairly
straightforward, but not entirely so without reading C code.

The current CPython implementation of object.__getattribute__ is basically
equivalent to the following (pseudo-) Python code (excluding som

Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-22 Thread Ronald Oussoren

> On 22 Jul 2015, at 18:02, Terry Reedy  wrote:
> 
> On 7/22/2015 3:25 AM, Ronald Oussoren wrote:
>> Hi,
>> 
>> Another summer with another EuroPython, which means its time again to
>> try to revive PEP 447…
>> 
>> I’ve just pushes a minor update to the PEP and would like to get some
>> feedback on this, arguably fairly esoteric, PEP.
> 
> Yeh, a bit too esoteric for most of us to review.  

I noticed that in my previous attempts as well. There is only a limited number 
of people that really grok how Python’s attribute lookup works, and a smaller 
subset of those understand how that’s implemented in CPython.

> For instance, it is not obvious to me, not familiar with internal details, 
> after reading the intro, why a custom __getattribute__ is not enough and why 
> __getdescriptor__ would be needed.

That means the PEP text needs some more work. Using __getattribute__ works for 
normal attribute access, but not when you look for a superclass implementation 
using super() because super currently *only* looks in the __dict__ of classes 
further along the MRO and offers no way to influence the search. That’s a 
problem when classes can grow methods dynamically.
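That gap is easy to demonstrate with current Python (the class names are illustrative): an attribute served dynamically is reachable through normal lookup but invisible through super():

```python
class Dynamic:
    # stands in for a class whose methods are resolved lazily
    def __getattr__(self, name):
        if name == "greet":
            return lambda: "dynamic greet"
        raise AttributeError(name)

class Child(Dynamic):
    def greet_via_super(self):
        # super() only peeks in the __dict__ of classes on the MRO,
        # so it never consults Dynamic.__getattr__
        return super().greet()

c = Child()
print(c.greet())              # -> dynamic greet (falls back to __getattr__)
try:
    c.greet_via_super()
except AttributeError as exc:
    print("super() failed:", exc)
```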

> If Guido does not want to review this, you need to find a PEP BDFL for this.

I’ll see if I can corner him at EP :-).  It’s surprisingly hard to find people 
at conferences.

> 
> There are two fairly obvious non-esoteric questions:
> 
> 1. How does this impact speed (updated section needed)?

The speed impact should be minimal; the initial version of the patch (which 
needs some updating, which I’ll try to do during the EP sprints) uses shortcuts 
to avoid actually calling the __getdescriptor__ method in the usual case.

> 
> 2. Is this useful, that you can think of, for anything other than connecting 
> to Objective C?

Not immediately.  But then again, I initially thought that decorators would 
have limited appeal as well :-).  I guess this could be useful for other 
proxy-like objects as well, especially when preloading the __dict__ is 
relatively expensive.

Apart from direct usefulness this closes a hole in the way you can influence 
attribute lookup.

Ronald


> 
> -- 
> Terry Jan Reedy
> 
> 



Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-22 Thread Ronald Oussoren

> On 22 Jul 2015, at 19:12, Steve Dower  wrote:
> 
> Terry Reedy wrote:
>> On 7/22/2015 3:25 AM, Ronald Oussoren wrote:
>>> Hi,
>>> 
>>> Another summer with another EuroPython, which means its time again to
>>> try to revive PEP 447…
>>> 
>>> I’ve just pushes a minor update to the PEP and would like to get some
>>> feedback on this, arguably fairly esoteric, PEP.
>> 
>> Yeh, a bit too esoteric for most of us to review. For instance, it is not
>> obvious to me, not familiar with internal details, after reading the intro, 
>> why
>> a custom __getattribute__ is not enough and why __getdescriptor__ would be
>> needed. If Guido does not want to review this, you need to find a PEP BDFL 
>> for
>> this.
>> 
>> There are two fairly obvious non-esoteric questions:
>> 
>> 1. How does this impact speed (updated section needed)?
> 
> Agreed, this is important. But hopefully it's just a C indirection (or better 
> yet, a null check) for objects that don't override __getdescriptor__.

Not a null check, but a check for a specific function pointer. That way you can 
be sure that superclasses always have the slot, which IMHO gives a nicer user 
experience.

> 
>> 2. Is this useful, that you can think of, for anything other than connecting 
>> to
>> Objective C?
> 
> There are other object models that would benefit from this, but I don't 
> recall that we came up with uses other than "helps proxy to objects where 
> listing all members eagerly is expensive and/or potentially incorrect". Maybe 
> once you list all the operating systems that are now using dynamic 
> object-oriented APIs rather than flat APIs (Windows, iOS, Android, ... 
> others?) this is good enough?
> 
> FWIW, I'm still +1 on this, pending performance testing.

The PEP on the website contains performance test data, but that’s out of date. 
I don’t think the implementation of attribute lookup has changed enough to 
really invalidate those test results, but I will rerun the tests once I’ve 
updated the patch, because hunches don’t count when evaluating performance.

Ronald

> 
> Cheers,
> Steve
> 
>> --
>> Terry Jan Reedy
> 


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-23 Thread Ronald Oussoren

> On 23 Jul 2015, at 11:29, Dima Tisnek  wrote:
> 
> Hey I've taken time to read the PEP, my 2c... actually 1c:
> 
> Most important is to explain how this changes behaviour of Python programs.
> 
> A comprehensive set of Python examples where behaviour is changed (for
> better or worse) please.

The behaviour of existing Python code is not changed at all.  Code that 
directly looks in the class __dict__ might be impacted, but only when running 
into classes with a custom __getdescriptor__ method. I’ve listed the code in 
the stdlib that could be affected, but have to do a new pass of the stdlib to 
check if anything relevant changed since I wrote the section.  In general you 
run into the same issues when adding a custom __getattribute__ or __getattr__ 
method, both of which will confuse introspection tools that assume regular 
attribute lookup semantics.
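The same caveat already applies to today’s hooks; for example, an attribute served by __getattr__ is usable through normal access but invisible to dir() (illustrative code):

```python
class Lazy:
    def __getattr__(self, name):
        # serve one attribute dynamically, as a proxy class might
        if name == "spam":
            return 42
        raise AttributeError(name)

obj = Lazy()
assert obj.spam == 42            # normal access works
assert hasattr(obj, "spam")      # hasattr sees it too
assert "spam" not in dir(obj)    # but introspection via dir() does not
```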

> 
> While I understand the concern of "superclasses of objects that gain
> or lose attributes at runtime" on the theoretical level, please
> translate that into actual Python examples.

The primary use case I have for this are classes that are proxies for external 
systems. There may be other uses as well, but I don’t have examples of that 
(other than the contrived example in the PEP).

The reason I wrote the PEP in the first place is PyObjC: this project defines a 
proxy layer between Python and Objective-C, with the goal of making it possible 
to write programs for Mac OS X in Python while being able to make full use of 
Apple’s high-level APIs.  The proxy layer is almost completely dynamic: proxies 
for Objective-C classes and their methods are created at runtime by inspecting 
the Objective-C runtime, with optional extra annotations (provided by the 
project) for stuff that cannot be extracted from the runtime. 

That is, at runtime PyObjC creates a Python class “NSObject” that corresponds 
to the Objective-C class “NSObject” as defined by Apple. Every method of the 
Objective-C class is made available as a method on the Python proxy class as 
well.

It is not possible to 100% reliably set up the Python proxy class for 
“NSObject” with all methods because Objective-C classes can grow new methods at 
runtime, and the introspection API that Apple provides does not have a way to 
detect this other than by polling.  Older versions of PyObjC did poll, but even 
that was not reliable enough and left a race condition:

    def someAction_(self, sender):
        self.someMethod()
        self.button.setTitle_("click me")
        super().someOtherMethod()

The call to “someMethod” used to poll the Objective-C runtime for changes.  The 
call through super() of someOtherMethod() does not do so because of the current 
semantics of super (which PEP 447 tries to change). That’s a problem because 
“self.button.setTitle_” might load a bundle that adds “someOtherMethod” to one 
of our super classes. That sadly enough is not a theoretical concern, I’ve seen 
something like this in the past.

Because of this PyObjC contains its own version of builtins.super which must be 
used with it (and is fully compatible with builtin.super for other classes).

Recent versions of PyObjC no longer poll, primarily because polling is costly 
and because Objective-C classes tend to have fat APIs, most of which are never 
used by any one program.

What bothers me with PyObjC’s current approach is on the one hand that a 
custom super is inherently incompatible with any other library that might have 
a similar need, and on the other hand that I have to reimplement all the logic 
in both object.__getattribute__ and super.__getattribute__ to be able to 
customise one small aspect of attribute lookup.

Ronald

> 
> d.
> 
> On 22 July 2015 at 09:25, Ronald Oussoren  wrote:
>> Hi,
>> 
>> Another summer with another EuroPython, which means its time again to try to
>> revive PEP 447…
>> 
>> I’ve just pushes a minor update to the PEP and would like to get some
>> feedback on this, arguably fairly esoteric, PEP.
>> 
>> The PEP proposes to to replace direct access to the class __dict__ in
>> object.__getattribute__ and super.__getattribute__ by calls to a new special
>> method to give the metaclass more control over attribute lookup, especially
>> for access using a super() object.  This is needed for classes that cannot
>> store (all) descriptors in the class dict for some reason, see the PEP text
>> for a real-world example of that.
>> 
>> Regards,
>> 
>>  Ronald
>> 
>> 
>> The PEP text (with an outdated section with benchmarks removed):
>> 
>> PEP: 447
>> Title: Add __getdescriptor__ method to metaclass
>> Version: $Revision$
>> Last-Modified: $Date$
>> Author: Ronald Oussoren 
>> Status: Draft
>> Type: Standards Track
>> Conten

Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-24 Thread Ronald Oussoren

> On 24 Jul 2015, at 16:17, Nick Coghlan  wrote:
> 
> On 23 July 2015 at 03:12, Steve Dower  <mailto:steve.do...@microsoft.com>> wrote:
>> Terry Reedy wrote:
>>> On 7/22/2015 3:25 AM, Ronald Oussoren wrote:
>>>> Hi,
>>>> 
>>>> Another summer with another EuroPython, which means its time again to
>>>> try to revive PEP 447…
>>>> 
>>>> I’ve just pushes a minor update to the PEP and would like to get some
>>>> feedback on this, arguably fairly esoteric, PEP.
>>> 
>>> Yeh, a bit too esoteric for most of us to review. For instance, it is not
>>> obvious to me, not familiar with internal details, after reading the intro, 
>>> why
>>> a custom __getattribute__ is not enough and why __getdescriptor__ would be
>>> needed. If Guido does not want to review this, you need to find a PEP BDFL 
>>> for
>>> this.
>>> 
>>> There are two fairly obvious non-esoteric questions:
>>> 
>>> 1. How does this impact speed (updated section needed)?
>> 
>> Agreed, this is important. But hopefully it's just a C indirection (or 
>> better yet, a null check) for objects that don't override __getdescriptor__.
>> 
>>> 2. Is this useful, that you can think of, for anything other than 
>>> connecting to
>>> Objective C?
>> 
>> There are other object models that would benefit from this, but I don't 
>> recall that we came up with uses other than "helps proxy to objects where 
>> listing all members eagerly is expensive and/or potentially incorrect". 
>> Maybe once you list all the operating systems that are now using dynamic 
>> object-oriented APIs rather than flat APIs (Windows, iOS, Android, ... 
>> others?) this is good enough?
> 
> "better bridging to other languages and runtimes" is a good enough
> rationale for me, although I also wonder if it might be useful for
> making some interesting COM and dbus based API wrappers.
> 
> Ronald, could you dig up a reference to the last thread (or threads)
> on this? My recollection is that we were actually pretty happy with
> it, and it was just set aside through lack of time to push it through
> to completion.

I’ll do some digging in my archives. From what I recall you and Steve were 
positive the last time around and others didn’t have much to add at the time.

FWIW Guido was positive about the idea, but would really like to see up to date 
benchmark results and some specific micro benchmarking to see if the change has 
negative performance impact.

I do have an API design question now that I’m working on this again: the PEP 
proposed to add a __getdescriptor__ method to the metaclass, that is, you’d 
define it as:

    class MyMeta(type):
        def __getdescriptor__(self, name): …

    class MyType(object, metaclass=MyMeta):
        pass

This doesn’t match how other special slots are done, in particular __new__. I’d 
like to switch the definition to:


    class MyType:

        @classmethod
        def __getdescriptor__(cls, name): …

I have two questions about that: (1) is this indeed a better interface and (2) 
should users explicitly use the classmethod decorator or would it be better to 
match the behaviour for __new__ by leaving that out? Personally I do think 
that this is a better interface, but am not sure about requiring the decorator.
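For reference, ``__new__`` gets its implicit behaviour because ``type`` wraps it in a staticmethod at class-creation time, which is easy to check:

```python
class C:
    def __new__(cls):          # written without a decorator...
        return super().__new__(cls)

# ...but type implicitly wraps it as a staticmethod in the class dict:
print(type(C.__dict__["__new__"]))   # <class 'staticmethod'>
```

Matching that behaviour for __getdescriptor__ would mean implicitly wrapping it as a classmethod in ``type.__new__``.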

Ronald

P.S. Fighting with refcounting between sessions, forward porting of the patch 
for this PEP seems to have introduced a refcount problem. Nothing that cannot 
be fixed during the sprints though.


> 
> Regards,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com <mailto:ncogh...@gmail.com>   |   
> Brisbane, Australia


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-24 Thread Ronald Oussoren

> On 24 Jul 2015, at 17:29, Nick Coghlan  wrote:
> 
> On 25 July 2015 at 00:50, Brett Cannon  wrote:
>> Leave the decorator out like __new__, otherwise people are bound to forget
>> it and have a hard time debugging why their code doesn't work.
> 
> I'd actually advocate for keeping this as a metaclass method, rather
> than making it available to any type instance. The key thing to
> consider for me is "What additional power does making it a method on
> the class itself grant to mixin types?”

To be honest, I hadn’t considered mixin types yet. 

> 
> With PEP 487, the __init_subclass__ proposal only grants mixins the
> power to implicitly run additional code when new subclasses are
> defined. They have no additional ability to influence the behaviour of
> the specific class adding the mixin into the inheritance hierarchy.
> 
> With PEP 447, as currently written, a mixin that wants to alter how
> descriptors are looked up will be able to do so implicitly as long as
> there are no other custom metaclasses in the picture. As soon as there
> are *two* custom metaclasses involved, you'll get an error at
> definition time and have to sort out how you want the metaclass
> inheritance to work and have a chance to notice if there are two
> competing __getdescriptor__ implementations.
> 
> However, if __getdescriptor__ moves to being a class method on object
> rather than an instance method on type, then you'll lose that
> assistance from the metaclass checker - if you have two classes in
> your MRO with mutually incompatible __getdescriptor__ implementations,
> you're likely to be in for a world of pain as you try to figure out
> the source of any related bugs.

That’s a good point, and it moves something that I’ve wanted to look into 
forward on my list: the difference between a classmethod and a method on the 
class defined through a metaclass.

The semantics I’d like to have is that __getdescriptor__ is a local decision: 
defining __getdescriptor__ for a class should only affect that class and its 
subclasses, and shouldn’t affect how superclasses are handled by 
__getattribute__.  That is something that can be done by defining 
__getdescriptor__ on a metaclass, and AFAIK requires active cooperation when 
using a @classmethod.

It should be possible to demonstrate the differences in a pure Python
prototype. 
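One observable difference is easy to show with today’s Python: a method defined on the metaclass is reachable on the class but not on its instances, while a classmethod leaks through to instances (hypothetical names):

```python
class Meta(type):
    def describe(cls):                 # method defined on the metaclass
        return "meta:" + cls.__name__

class A(metaclass=Meta):
    @classmethod
    def describe2(cls):                # plain classmethod on the class
        return "cm:" + cls.__name__

print(A.describe())      # meta:A -- reachable on the class...
print(A().describe2())   # cm:A   -- classmethods are reachable on instances
try:
    A().describe()       # ...but metaclass methods are not
except AttributeError:
    print("not visible on instances")
```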

Ronald

> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia



Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-25 Thread Ronald Oussoren
I’ve pushed a minor update to the PEP to the repository. The benchmark results 
are still out of date; I want to run those on an idle machine to get 
reliable results.

The PEP has one significant change w.r.t. the previous version: it now requires 
the use of a new type flag to enable the usage of the new slot in C code. This 
is due to concerns that loading old extensions might crash the interpreter 
otherwise.

References to earlier discussions (also added to the PEP):

* http://marc.info/?l=python-dev&m=137510220928964&w=2
* https://mail.python.org/pipermail/python-ideas/2014-July/028420.html
* https://mail.python.org/pipermail/python-dev/2013-July/127321.html

And finally, I’ve updated the implementation in issue 18181. The implementation 
passes the test suite with the current trunk and is good enough to play around 
with.  There is still an important issue though: I’ve done some micro 
benchmarking and the results indicate that the method_cache mechanism in 
typeobject.c doesn’t work with my changes, which has a clear performance impact 
and must be fixed. That shouldn’t be too hard to fix, it’s probably just a 
botched check before the blocks of code that use and update the cache.

Ronald

> On 24 Jul 2015, at 19:55, Ronald Oussoren  wrote:
> 
>> 
>> On 24 Jul 2015, at 17:29, Nick Coghlan > <mailto:ncogh...@gmail.com>> wrote:
>> 
>> On 25 July 2015 at 00:50, Brett Cannon > <mailto:br...@python.org>> wrote:
>>> Leave the decorator out like __new__, otherwise people are bound to forget
>>> it and have a hard time debugging why their code doesn't work.
>> 
>> I'd actually advocate for keeping this as a metaclass method, rather
>> than making it available to any type instance. The key thing to
>> consider for me is "What additional power does making it a method on
>> the class itself grant to mixin types?”
> 
> To be honest, I hadn’t considered mixin types yet. 
> 
>> 
>> With PEP 487, the __init_subclass__ proposal only grants mixins the
>> power to implicitly run additional code when new subclasses are
>> defined. They have no additional ability to influence the behaviour of
>> the specific class adding the mixin into the inheritance hierarchy.
>> 
>> With PEP 447, as currently written, a mixin that wants to alter how
>> descriptors are looked up will be able to do so implicitly as long as
>> there are no other custom metaclasses in the picture. As soon as there
>> are *two* custom metaclasses involved, you'll get an error at
>> definition time and have to sort out how you want the metaclass
>> inheritance to work and have a chance to notice if there are two
>> competing __getdescriptor__ implementations.
>> 
>> However, if __getdescriptor__ moves to being a class method on object
>> rather than an instance method on type, then you'll lose that
>> assistance from the metaclass checker - if you have two classes in
>> your MRO with mutually incompatible __getdescriptor__ implementations,
>> you're likely to be in for a world of pain as you try to figure out
>> the source of any related bugs.
> 
> That’s a good point, and something that will move something that I’ve 
> wanted to look into forward on my list: the difference between a
> classmethod and a method on the class defined through a metaclass.
> 
> The semantics I’d like to have is that __getdescriptor__ is a local decision,
> defining __getdescriptor__ for a class should only affect that class and its
> subclass, and shouldn’t affect how superclasses are handled by 
> __getattribute__.
> That is something that can be done by defining __getdescriptor__ on a 
> metaclass,
> and AFAIK requires active cooperation when using a @classmethod.
> 
> It should be possible to demonstrate the differences in a pure Python
> prototype. 
> 
> Ronald
> 
>> 
>> Cheers,
>> Nick.
>> 
>> -- 
>> Nick Coghlan   |   ncogh...@gmail.com <mailto:ncogh...@gmail.com>   |   
>> Brisbane, Australia
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-26 Thread Ronald Oussoren

> On 25 Jul 2015, at 17:39, Mark Shannon  wrote:
> 
> Hi,
> 
> On 22/07/15 09:25, Ronald Oussoren wrote:> Hi,
>> 
>> Another summer with another EuroPython, which means its time again to 
>> try to revive PEP 447…
>> 
> 
> IMO, there are two main issues with the PEP and implementation.
> 
> 1. The implementation as outlined in the PEP is infinitely recursive, since 
> the
> lookup of "__getdescriptor__" on type must necessarily call
> type.__getdescriptor__.
> The implementation (in C) special cases classes that inherit 
> "__getdescriptor__"
> from type. This special casing should be mentioned in the PEP.

Sure.  An alternative is to slightly change the PEP: use __getdescriptor__ when
present and directly peek into __dict__ when it is not, and then remove the 
default
__getdescriptor__. 

The reason I didn’t do this in the PEP is that I prefer a programming model 
where
I can explicitly call the default behaviour. 
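For illustration, that alternative can be sketched in pure Python roughly as follows. This is a hypothetical approximation for discussion only (it ignores data-descriptor precedence and the instance __dict__), not the PEP's actual C implementation:

```python
# Hypothetical sketch of the alternative lookup: consult the metaclass's
# __getdescriptor__ when present, otherwise peek into the class __dict__
# directly.  Instance __dict__ and data-descriptor precedence are ignored
# for brevity.
def lookup_attribute(obj, name):
    tp = type(obj)
    for klass in tp.__mro__:
        meta = type(klass)
        getdescriptor = getattr(meta, '__getdescriptor__', None)
        if getdescriptor is not None:
            try:
                descriptor = getdescriptor(klass, name)
            except AttributeError:
                continue
        else:
            try:
                descriptor = klass.__dict__[name]
            except KeyError:
                continue
        if hasattr(descriptor, '__get__'):
            return descriptor.__get__(obj, tp)
        return descriptor
    raise AttributeError(name)

class Meta(type):
    def __getdescriptor__(cls, name):
        if name == 'answer':
            return 42
        raise AttributeError(name)

class Demo(metaclass=Meta):
    pass

print(lookup_attribute(Demo(), 'answer'))  # 42
```

Because the __dict__ peek happens only when no __getdescriptor__ is defined, there is no default implementation left to recurse into.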

> 
> 2. The actual implementation in C does not account for the case where the 
> class
> of a metaclass implements __getdescriptor__ and that method returns a value 
> when
> called with "__getdescriptor__" as the argument.

Isn’t that the same problem as with all slots, even when using 
__getattribute__? That is,
a meta class that implements __getattribute__ to return implementations for 
(say)
__getitem__ won’t work because the interpreter won’t call __getattribute__ to 
get that
implementation unless it already knows that the attribute is present.  Class 
creation,
and __setattr__ on type will not only fill __dict__, but also set slots in the 
type structure
as appropriate.  The interpreter then uses those slots to determine if a 
special method
is present.

In code:

class Meta1 (type):
    def __getitem__(self, key):
        return "<{} {}>".format(self.__name__, key)

class Class1 (metaclass=Meta1):
    pass



class Meta2 (type):
    def __getattribute__(self, name):
        if name == "__getitem__":
            return lambda key: "<{} {}>".format(self.__name__, key)

        return super().__getattribute__(name)

class Class2 (metaclass=Meta2):
    pass

print(Class1.__getitem__("hello"))
print(Class1["hello"])

print(Class2.__getitem__("hello"))
print(Class2["hello"])

The last line causes an exception:

Traceback (most recent call last):
  File "demo-getattr.py", line 24, in <module>
print(Class2["hello"])
TypeError: 'Meta2' object is not subscriptable

I agree that this should be mentioned in the PEP as it can be confusing.

> 
> 
> 
> Why was "__getattribute_super__" rejected as an alternative? No reason is 
> given.
> 
> "__getattribute_super__" has none of the problems listed above.

Not really. I initially used __getattribute_super__ as the name, but IIRC with
the same  semantics.

> Making super(t, obj) delegate to t.__super__(obj) seems consistent with other
> builtin method/classes and doesn't add corner cases to the already complex
> implementation of PyType_Lookup().

A disadvantage of delegation is that t.__super__ then has to reproduce the logic
dealing with the MRO, while my proposal allows the metaclass to just deal with
lookup in a specific class object. 

Implementation complexity is an issue, but it seems to be acceptable so far. 
The main
problem w.r.t. additional complexity is that PyType_Lookup can now fail
with an exception other than an implied AttributeError and that causes
changes elsewhere in the implementation.

BTW. The patch for that part is slightly uglier than it needs to be, I currently
test for PyErr_Occurred() instead of using return codes in a number of places
to minimise the number of lines changed and make code review easier.  That 
needs to be changed before the code would actually be committed.

Ronald

P.S. Are you at the EP sprints? I’ll be there until early in the afternoon.

> 
> Cheers,
> Mark
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-07-26 Thread Ronald Oussoren

> On 26 Jul 2015, at 09:14, Ronald Oussoren  wrote:
> 
> 
>> On 25 Jul 2015, at 17:39, Mark Shannon > <mailto:m...@hotpy.org>> wrote:
>> 
>> Hi,
>> 
>> On 22/07/15 09:25, Ronald Oussoren wrote:> Hi,
>>> 
>>> Another summer with another EuroPython, which means its time again to 
>>> try to revive PEP 447…
>>> 
>> 
>> IMO, there are two main issues with the PEP and implementation.
>> 
>> 1. The implementation as outlined in the PEP is infinitely recursive, since 
>> the
>> lookup of "__getdescriptor__" on type must necessarily call
>> type.__getdescriptor__.
>> The implementation (in C) special cases classes that inherit 
>> "__getdescriptor__"
>> from type. This special casing should be mentioned in the PEP.
> 
> Sure.  An alternative is to slightly change the the PEP: use 
> __getdescriptor__ when
> present and directly peek into __dict__ when it is not, and then remove the 
> default
> __getdescriptor__. 
> 
> The reason I didn’t do this in the PEP is that I prefer a programming model 
> where
> I can explicitly call the default behaviour. 

I’m not sure there is a problem after all (but am willing to use the 
alternative I describe above),
although that might be because I’m too much focussed on CPython semantics.

The __getdescriptor__ method is a slot in the type object and because of that the
normal attribute lookup mechanism is side-stepped for methods implemented in C.
A __getdescriptor__ that is implemented in Python is looked up the normal way by
the C function that gets added to the type struct for such methods, but that’s not
a problem for type itself.

That’s not new for __getdescriptor__ but happens for most other special methods
as well, as I noted in my previous mail, and also happens for the __dict__ lookup
that’s currently used (t.__dict__ is an attribute and should itself be looked up
using __getattribute__, …)

Ronald
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-27 Thread Ronald Oussoren

> On 27 Jul 2015, at 04:04, Tim Peters  wrote:
> 
> 
>> As an example, consider an alarm clock. I want it to go off at 7am
>> each morning. I'd feel completely justified in writing tomorrows_alarm
>> = todays_alarm + timedelta(days=1).
>> 
>> If the time changes to DST overnight, I still want the alarm to go off
>> at 7am. Even though +1 day is in this case actually + 25 (or is it
>> 23?) hours. That's the current semantics.
> 
> There was a long list of use cases coming to the same conclusion.  The
> current arithmetic allows uniform patterns in local time to be coded
> in uniform, straightforward ways.  Indeed, in "the obvious" ways.  The
> alternative behavior favors uniform patterns in UTC, but who cares?
> ;-)  Few local clocks show UTC.  Trying to code uniform local-time
> behaviors using "aware arithmetic" (which is uniform in UTC. but may
> be "lumpy" in local time) can be a nightmare.
> 
> The canonical counterexample is a nuclear reactor that needs to be
> vented every 24 hours.  To which the canonical rejoinder is that the
> programmer in charge of that system is criminally incompetent if
> they're using _any_ notion of time other than UTC ;-)

IMHO “+ 1 days” and “+ 24 hours” are two different things.  Date 
arithmetic is full of messy things like that.  “+ 1 month” is another
example of that (which the datetime module punts on completely
and which can be a source of endless bikeshedding).
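Using the zoneinfo module from later Python versions (3.9+, so postdating this thread), the distinction can be made concrete. A sketch, with the EU spring-forward date of 2024-03-31 picked arbitrarily as an example:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

AMS = ZoneInfo("Europe/Amsterdam")

# timedelta itself cannot express the difference: both spellings
# normalise to the same value.
assert timedelta(days=1) == timedelta(hours=24)

# Wall-clock ("naive") arithmetic keeps the alarm at 07:00 local time,
# even across the EU spring-forward transition.
alarm = datetime(2024, 3, 30, 7, 0, tzinfo=AMS)
next_alarm = alarm + timedelta(days=1)
assert next_alarm.hour == 7

# ...yet only 23 real hours elapsed, as conversion to UTC shows:
elapsed = next_alarm.astimezone(timezone.utc) - alarm.astimezone(timezone.utc)
assert elapsed == timedelta(hours=23)
```

So “+ 1 day” behaves as a wall-clock shift, and the “24 hours” reading only appears once you convert to UTC.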

Ronald


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-27 Thread Ronald Oussoren


> On 27 jul. 2015, at 20:49, Tim Peters  wrote:
> 
> [Ronald Oussoren ]
>> IMHO “+ 1 days” and “+ 24 hours” are two different things.
>> Date arithmetic is full of messy things like that.
> 
> But it's a fact that they _are_ the same in naive time, which Python's
> datetime single-timezone arithmetic implements:
> 
> ...
> 
> Naive time is easy to understand, reason about, and work with.  When
> it comes to the real world, political adjustments to and within time
> zones can make the results dodgy, typically in the two DST-transition
> hours per year when most people living in a given time zone are
> sleeping.  How much complexity do you want to endure in case they wake
> up? ;-)  Guido's answer was "none in arithmetic - push all the
> complexity into conversions - then most uses can blissfully ignore the
> complexities".

I totally agree with that, having worked on applications that had to deal with 
time a lot, including some where the end of a day was at 4am the following 
day.  That app never had to deal with DST because not only are the transitions 
at night, they are also during the weekend. 


Treating time as UTC with conversions at the application edge might be 
"cleaner" in some sense, but can make code harder to read for application 
domain experts.

It might be nice to have time zone aware datetime objects with the right(TM) 
semantics, but those can and should not replace the naive objects we know and 
love. 

That said, I have had the need for date delta objects that can deal with 
deltas expressed in days or months, but it is easy enough to write your own 
library for that which deals with the local conventions for those. 
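A month-shifting helper of that kind can be written in a few lines. This is a hypothetical sketch of one common local convention (clamping to the end of the shorter month), not a reference to any existing library:

```python
import calendar
import datetime

def add_months(d: datetime.date, months: int) -> datetime.date:
    """Shift a date by whole months, clamping the day to the month's end
    (so Jan 31 + 1 month -> Feb 28/29) -- one common local convention."""
    m = d.month - 1 + months
    year = d.year + m // 12
    month = m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

print(add_months(datetime.date(2015, 1, 31), 1))   # 2015-02-28
print(add_months(datetime.date(2016, 1, 31), 1))   # 2016-02-29 (leap year)
print(add_months(datetime.date(2015, 3, 15), -1))  # 2015-02-15
```

Other conventions (e.g. overflowing into the next month instead of clamping) are equally defensible, which is exactly why the stdlib punts on this.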

Ronald
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Status on PEP-431 Timezones

2015-07-28 Thread Ronald Oussoren

> On 28 Jul 2015, at 03:13, Tres Seaver  wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 07/27/2015 06:11 PM, Ronald Oussoren wrote:
> 
>> Treating time as UTC with conversions at the application edge might
>> be "cleaner" in some sense, but can make code harder to read for 
>> application domain experts.
>> 
>> It might be nice to have time zone aware datetime objects with the 
>> right(TM) semantics, but those can and should not replace the naive 
>> objects we know and love.
> 
> Interesting.  My experience is exactly the opposite:  the datetimes which
> "application domain experts" cared about *always* needed to be non-naive
> (zone captured explicitly or from the user's machine and converted to
> UTC/GMT for storage).  As with encoded bytes, allowing a naive instance
> inside the borders the system was always a time-bomb bug (stuff would
> blow up at a point far removed from which it was introduced).
> 
> The instances which could have safely been naive were all
> logging-related, where the zone was implied by the system's timezone
> (nearly always UTC).  I guess the difference is that I'm usually writing
> apps whose users can't be presumed to be in any one timezone.  Even in
> those cases, having the logged datetimes be incomparable to user-facing
> ones would make them less useful.

I usually write application used by local users where the timezone is completely
irrelevant, including DST.  Stuff needs to be done at (say) 8PM, and that’s
8PM local time. Switching to and from UTC just adds complications. 

I’m lucky enough that most datetime calculations happen within one work week
and therefore never have to cross DST transitions.  For longer periods I usually
only care about dates, and almost never about the number of seconds between
two datetime instances.   That makes the naive datetime from the stdlib a 
very convenient programming model.

And I’m in a country that’s small enough to have only one timezone.

IMHO Unicode is different in that regard, there the application logic can 
clearly
be expressed as text and the encoding to/from bytes can safely be hidden in
the I/O layer. Often the users I deal with can follow the application logic 
w.r.t.
text handling, but have no idea about encodings (but do care about accented
characters). With some luck they can provide a sample file that allows me to 
deduce the encoding that should be used, and most applications are moving 
to UTF-8.

BTW. Note that I’m not saying that a timezone aware datetime is bad, just
that they are not always necessary.

Ronald
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 447 (type.__getdescriptor__)

2015-08-05 Thread Ronald Oussoren

> On 26 Jul 2015, at 14:18, Mark Shannon  wrote:
> 
>> On 26 July 2015 at 10:41 Ronald Oussoren  wrote:
>> 
>> 
>> 
>>> On 26 Jul 2015, at 09:14, Ronald Oussoren  wrote:
>>> 
>>> 
>>>> On 25 Jul 2015, at 17:39, Mark Shannon >>> <mailto:m...@hotpy.org>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> On 22/07/15 09:25, Ronald Oussoren wrote:> Hi,
>>>>> 
>>>>> Another summer with another EuroPython, which means its time again to 
>>>>> try to revive PEP 447…
>>>>> 
>>>> 
>>>> IMO, there are two main issues with the PEP and implementation.
>>>> 
>>>> 1. The implementation as outlined in the PEP is infinitely recursive, since
>>>> the
>>>> lookup of "__getdescriptor__" on type must necessarily call
>>>> type.__getdescriptor__.
>>>> The implementation (in C) special cases classes that inherit
>>>> "__getdescriptor__"
>>>> from type. This special casing should be mentioned in the PEP.
>>> 
>>> Sure.  An alternative is to slightly change the the PEP: use
>>> __getdescriptor__ when
>>> present and directly peek into __dict__ when it is not, and then remove the
>>> default
>>> __getdescriptor__. 
>>> 
>>> The reason I didn’t do this in the PEP is that I prefer a programming model
>>> where
>>> I can explicitly call the default behaviour. 
>> 
>> I’m not sure there is a problem after all (but am willing to use the
>> alternative I describe above),
>> although that might be because I’m too much focussed on CPython semantics.
>> 
>> The __getdescriptor__ method is a slot in the type object and because of that
>> the
>> normal attribute lookup mechanism is side-stepped for methods implemented in
>> C. A
>> __getdescriptor__ that is implemented on Python is looked up the normal way 
>> by
>> the 
>> C function that gets added to the type struct for such methods, but that’s 
>> not
>> a problem for
>> type itself.
>> 
>> That’s not new for __getdescriptor__ but happens for most other special
>> methods as well,
>> as I noted in my previous mail, and also happens for the __dict__ lookup
>> that’s currently
>> used (t.__dict__ is an attribute and should be lookup up using
>> __getattribute__, …)
> 
> 
> "__getdescriptor__" is fundamentally different from "__getattribute__" in that
> is defined in terms of itself.
> 
> object.__getattribute__ is defined in terms of type.__getattribute__, but
> type.__getattribute__ just does 
> dictionary lookups.

object.__getattribute__ is actually defined in terms of type.__dict__ and 
object.__dict__. Type.__getattribute__ is at best used to find type.__dict__.

> However defining type.__getattribute__ in terms of
> __descriptor__ causes a circularity as
> __descriptor__ has to be looked up on a type.
> 
> So, not only must the cycle be broken by special casing "type", but that
> "__getdescriptor__" can be defined
> not only by a subclass, but also a metaclass that uses "__getdescriptor__" to
> define  "__getdescriptor__" on the class.
> (and so on for meta-meta classes, etc.)

Are the semantics of special methods backed by a member in PyTypeObject part of 
Python’s semantics, or are those CPython implementation details/warts? In 
particular that such methods are accessed directly without using __getattribute__ 
at all (or at least only indirectly when the method is implemented in Python).  
That is:

>>> class Dict (dict):
...     def __getattribute__(self, nm):
...         print("Get", nm)
...         return dict.__getattribute__(self, nm)
... 
>>> 
>>> d = Dict(a=4)
>>> d.__getitem__('a')
Get __getitem__
4
>>> d['a']
4
>>> 

(And likewise for other special methods, which amongst others means that 
neither __getattribute__ nor __getdescriptor__ can be used to dynamically add 
such methods to a class)

In my proposed patch I do special case “type”, but that’s only intended as a 
(for now unbenchmarked) speed hack.  The code would work just as well without 
the hack because the metatype’s  __getdescriptor__ is looked up directly in the 
PyTypeObject on the C level, without using __getattribute__ and hence without 
having to use recursion.

BTW. I wouldn’t mind dropping the default “type.__getdescriptor__” completely 
and reword my proposal to state that __getdescriptor__ is used when present, 
and otherwise __dict__ is accessed

Re: [Python-Dev] Verification of SSL cert and hostname made easy

2013-12-01 Thread Ronald Oussoren


> On 30 nov. 2013, at 19:29, Christian Heimes  wrote:
> 
> With CERT_REQUIRED OpenSSL verifies that the peer's certificate is
> directly or indirectly signed by a trusted root certification authority.
> With Python 3.4 the ssl module is able to use/load the system's trusted
> root certs on all major systems (Linux, Mac, BSD, Windows). On Linux and
> BSD it requires a properly configured system openssl to locate the root
> certs. This usually works out of the box. On Mac Apple's openssl build
> is able to use the keychain API of OSX. I have added code for Windows'
> system store.

Note that only Apple's build of OpenSSL integrates with keychain, other builds 
don't. The patch for keychain integration is on Apple's open source site but 
that isn't very helpful because that code uses a private API to do most of the 
work.   

This almost certainly means that users of fink, macports and the like cannot 
use the system keystore. 

It is probably possible to use the Keychain API to verify certificates, I 
haven't seriously looked into that yet and there is a risk of using higher 
level APIs: those tend to not like calling fork without calling execv soon 
after and that could break existing scripts. 

Ronald
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 3.4: What to do about the Derby patches

2014-02-19 Thread Ronald Oussoren

On 20 Feb 2014, at 08:09, Nick Coghlan  wrote:

> On 20 February 2014 16:42, Ronald Oussoren  wrote:
>> 
>> On 17 Feb 2014, at 00:43, Nick Coghlan  wrote:
>> 
>>> 
>>> On 17 Feb 2014 08:36, "Greg Ewing"  wrote:
>>>> 
>>>> Larry Hastings wrote:
>>>> 
>>>>> 3) We hold off on merging the rest of the Derby patches until after 3.4.0 
>>>>> final ships, then we merge them into the 3.4 maintenance branch so they 
>>>>> go into 3.4.1.
>>>> 
>>>> 
>>>> But wouldn't that be introducing a new feature into a
>>>> maintenance release? (I.e. some functions that didn't
>>>> have introspectable signatures before would gain them.)
>>> 
>>> From a compatibility point of view, 3.4.0 will already force introspection 
>>> users and tool developers to cope with the fact that some, but not all, 
>>> builtin and extension types provide valid signature data. Additional clinic 
>>> conversions that don't alter semantics then just move additional callables 
>>> into the "supports programmatic introspection" category.
>>> 
>>> It's certainly in a grey area, but "What's in the best interest of end 
>>> users?" pushes me in the direction of counting clinic conversions that 
>>> don't change semantics as bug fixes - they get improved introspection 
>>> support sooner, and it shouldn't make life any harder for tool developers 
>>> because all of the adjustments for 3.4 will be to the associated functional 
>>> changes in the inspect module.
>>> 
>>> The key thing is to make sure to postpone any changes that impact 
>>> *semantics* (like adding keyword argument support).
>> 
>> But there is a semantic change: some functions without a signature in 3.4.0 
>> would have a signature in 3.4.1. That's unlikely to affect user code much 
>> because AFAIK signatures aren't used a lot yet, but it is a semantic change 
>> non the less :-)
> 
> Heh, you must have managed to miss all the Argument Clinic debates -
> "semantics" there is shorthand for "the semantics of the call" :)

I skipped most of that discussion, due to the sheer volume and limited time to 
add something meaningful to it.

> 
> It turns out there are some current C signatures where we either need
> to slightly change the semantics of the API or else add new features
> to the inspect module before we can represent them properly at the
> Python layer. So, for the life of Python 3.4, those are off limits for
> conversion, and we'll sort them out as part of PEP 457 for 3.5.

That much I noticed, and IIRC it was noticed fairly early on in Argument 
Clinic’s history that there are C functions that have an API that cannot easily 
be represented in a pure Python function (other than using ‘*args, **kw’ and 
manually parsing the argument list). 

I totally agree that changing the signature of functions should wait for 3.5, 
but that’s the easy bit.

> 
> However, there are plenty of others where the signature *can* be
> adequately represented in 3.4, they just aren't (yet). So the approach
> Larry has taken is to declare that "could expose valid signature data,
> but doesn't" counts as a bug fix for Python 3.4 maintenance release
> purposes. We'll make sure the porting section of the What's New guide
> addresses that explicitly for the benefit of anyone porting
> introspection tools to Python 3.4 (having all of the argument
> introspection in the inspect module be based on inspect.signature and
> various enhancements to inspect.signature itself has significantly
> increased the number of callables that now support introspection).

I can live with that, but don’t really agree that exposing new signature data 
is a bug fix. But maybe I’m just too conservative :-)

To end on a positive note, I do like signature objects, and have added support 
for them to PyObjC to enrich its introspection capabilities.
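For readers who haven't used them yet, signature objects make this kind of programmatic introspection straightforward. A minimal illustration (the function here is just a made-up example):

```python
import inspect

# A toy callable with a keyword-only parameter, purely for illustration.
def greet(name, *, excited=False):
    return "Hello, " + name + ("!" if excited else ".")

sig = inspect.signature(greet)
print(sig)  # (name, *, excited=False)

# Individual parameters can be inspected by name:
assert sig.parameters['excited'].default is False
assert sig.parameters['excited'].kind is inspect.Parameter.KEYWORD_ONLY
```

The point of the Derby work is that builtins and extension functions gain exactly this kind of inspectability, which previously only pure-Python callables had.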

Ronald

> 
> Cheers,
> Nick.
> 
> -- 
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python 3.4: What to do about the Derby patches

2014-02-19 Thread Ronald Oussoren

On 17 Feb 2014, at 00:43, Nick Coghlan  wrote:

> 
> On 17 Feb 2014 08:36, "Greg Ewing"  wrote:
> >
> > Larry Hastings wrote:
> >
> >> 3) We hold off on merging the rest of the Derby patches until after 3.4.0 
> >> final ships, then we merge them into the 3.4 maintenance branch so they go 
> >> into 3.4.1.
> >
> >
> > But wouldn't that be introducing a new feature into a
> > maintenance release? (I.e. some functions that didn't
> > have introspectable signatures before would gain them.)
> 
> From a compatibility point of view, 3.4.0 will already force introspection 
> users and tool developers to cope with the fact that some, but not all, 
> builtin and extension types provide valid signature data. Additional clinic 
> conversions that don't alter semantics then just move additional callables 
> into the "supports programmatic introspection" category.
> 
> It's certainly in a grey area, but "What's in the best interest of end 
> users?" pushes me in the direction of counting clinic conversions that don't 
> change semantics as bug fixes - they get improved introspection support 
> sooner, and it shouldn't make life any harder for tool developers because all 
> of the adjustments for 3.4 will be to the associated functional changes in 
> the inspect module.
> 
> The key thing is to make sure to postpone any changes that impact *semantics* 
> (like adding keyword argument support).

But there is a semantic change: some functions without a signature in 3.4.0 
would have a signature in 3.4.1. That’s unlikely to affect user code much 
because AFAIK signatures aren’t used a lot yet, but it is a semantic change 
nonetheless :-)

Ronald
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 463: Exception-catching expressions

2014-02-27 Thread Ronald Oussoren

On 21 Feb 2014, at 16:52, Chris Angelico  wrote:

> On Sat, Feb 22, 2014 at 1:34 AM, Brett Cannon  wrote:
>> While I like the general concept, I agree that it looks too much like a
>> crunched statement; the use of the colon is a non-starter for me. I'm sure
>> I'm not the only one whose brain has been trained to view a colon in Python
>> to mean "statement", period. This goes against that syntactic practice and
>> just doesn't work for me.
>> 
>> I'm -1 with the current syntax, but it can go into the + range if a better
>> syntax can be chosen.
> 
> We bikeshedded that extensively on -ideas. The four best options are:
> 
> value = (expr except Exception: default)
> value = (expr except Exception -> default)
> value = (expr except Exception pass default)
> value = (expr except Exception then default)
> 
> Note that the last option involves the creation of a new keyword.
> 
> Would any of the others feel better to you?

What about this one (also mentioned in the PEP)?

  value = (expr except Exception try default)

This seems to read nicely, although “try” is at a completely different position 
than it is in the equivalent try statement. 
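Today the same behaviour can only be written as a full try statement, or wrapped in a helper function. A hypothetical sketch of such a helper, using thunks to preserve the lazy evaluation the expression form would give (not part of the PEP itself):

```python
def try_except(expr_thunk, exc_type, default_thunk):
    """Rough stand-in for the proposed (expr except exc_type try default)
    expression.  Thunks keep both sides lazily evaluated, as the real
    expression form would."""
    try:
        return expr_thunk()
    except exc_type:
        return default_thunk()

d = {'a': 1}
print(try_except(lambda: d['a'], KeyError, lambda: 0))  # 1
print(try_except(lambda: d['b'], KeyError, lambda: 0))  # 0
```

The lambdas are exactly the boilerplate the proposed syntax would eliminate.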

I like the general idea, but like Brett I don’t like using a colon here at all.

Ronald

P.S. Sorry if this was already brought up, I’ve browsed through most of the 
threads on this on -ideas and -dev, but haven’t read all messages.
> 
> ChrisA
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 463: Exception-catching expressions

2014-02-27 Thread Ronald Oussoren

On 27 Feb 2014, at 11:09, Chris Angelico  wrote:

> On Thu, Feb 27, 2014 at 7:44 PM, Ronald Oussoren  
> wrote:
>> What about (also mentioned in the PEP)?
>> 
>>  value = (expr except Exception try default)
>> 
>> This seems to read nicely, although “try” is at a completely different 
>> position than it is in the equivalent try statement.
>> 
>> I like the general idea, but like Brett I don’t like using a colon here at 
>> all.
> 
> I see your "although" clause to be quite a strong objection. In the
> statement form of an if, you have:

I’m not convinced that this is a strong objection. The order of keywords is 
different, but that doesn’t have to be a problem.

> 
> if cond: true_suite
> else: false_suite
> 
> In the expression form, you have:
> 
> true_expr if cond else false_expr

[…]

> 
> 
> Putting "try" followed by the default is confusing, because any
> exception raised in the default-expr will bubble up. Stealing any
> other keyword from the try/except block would make just as little
> sense:
> 
> expr except Exception finally default # "finally" implies something
> that always happens
> expr except Exception else default # "else" implies *no* exception
> expr except Exception try default # "try" indicates the initial expr,
> not the default

I didn’t parse the expression this way at all, but quite naturally parsed it as 
“use expr, and try using default if expr raises Exception” and not as an RTL 
expression.  

> default except Exception try expr # breaks L->R evaluation order
> 
> Left to right evaluation order is extremely important to me.

I agree with that, RTL evaluation would be pretty odd in Python.


> I don't
> know about anyone else, but since I'm the one championing the PEP,
> you're going to have to show me a *really* strong incentive to reword
> it to advocate something like the last one :) This is stated in the
> PEP:
> 
> http://www.python.org/dev/peps/pep-0463/#alternative-proposals
> 
> Using try and except leaves the notation "mentally ambiguous" as to
> which of the two outer expressions is which. It doesn't make perfect
> sense either way, and I expect a lot of people would be flicking back
> to the docs constantly to make sure they had it right.

Really? The evaluation order you mentioned above didn’t make sense to me until 
I tried to make sense of it.

Ronald

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] cffi in stdlib

2013-02-26 Thread Ronald Oussoren

On 26 Feb, 2013, at 16:13, Maciej Fijalkowski  wrote:

> Hello.
> 
> I would like to discuss on the language summit a potential inclusion
> of cffi[1] into stdlib. 

The API in general looks nice, but I do have some concerns w.r.t. including cffi 
in the stdlib.

1. Why is cffi completely separate from ctypes, instead of layered on top of 
it? That is, add a utility module to ctypes that can parse C declarations and 
generate the right ctypes definitions. 

2. Cffi has a dependencies on pycparser and that module and its dependencies 
would therefore also be added to the stdlib (even if they'd be hidden in the 
cffi package)

3. Cffi basically contains a (limited) C parser, and those are notoriously hard 
to get exactly right. Luckily cffi only needs to interpret declarations and not 
the full language, but even so this can be a risk of subtle bugs.

4. And finally a technical concern: how well does cffi work with fat binaries 
on OSX? In particular, will the distutils support generate cached data for all 
architectures supported by a fat binary?

Also, after playing around with it for 5 minutes I don't quite understand how 
to use it. Let's say I want to wrap a function "CGPoint CGPointMake(CGFloat x, 
CGFloat y)". Is it possible to avoid mentioning the exact typedef for CGFloat 
somewhere? I tried using:

   ffi.cdef("typedef ... CGFloat; typedef struct { CGFloat x; CGFloat y; } 
CGPoint; CGPoint CGPointMake(CGFloat x, CGFloat y);")

But that results in an error when calling verify:

   TypeError: field 'struct $CGPoint.x' has ctype 'struct $CGFloat' of unknown 
size

From a first glance this doesn't seem to buy me that much w.r.t. ctypes: I 
still have to declare the actual type of CGFloat, which is documented as "some 
floating point type".
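For comparison, the ctypes spelling alluded to here also forces a by-hand choice for CGFloat; a sketch (the 64-bit/32-bit type selection is an assumption mirroring Apple's headers, and the struct is declared without calling into CoreGraphics):

```python
import ctypes

# CGFloat is documented as "some floating point type"; in practice it is
# double on 64-bit and float on 32-bit builds (an assumption here).
CGFloat = ctypes.c_double if ctypes.sizeof(ctypes.c_void_p) == 8 else ctypes.c_float

class CGPoint(ctypes.Structure):
    _fields_ = [("x", CGFloat), ("y", CGFloat)]

p = CGPoint(1.5, 2.5)
print(p.x, p.y)  # 1.5 2.5
```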

Ronald


Re: [Python-Dev] cffi in stdlib

2013-02-27 Thread Ronald Oussoren

On 27 Feb, 2013, at 10:06, Maciej Fijalkowski  wrote:

> On Wed, Feb 27, 2013 at 9:29 AM, Ronald Oussoren  
> wrote:
>> 
>> On 26 Feb, 2013, at 16:13, Maciej Fijalkowski  wrote:
>> 
>>> Hello.
>>> 
>>> I would like to discuss on the language summit a potential inclusion
>>> of cffi[1] into stdlib.
>> 
>> The API in general looks nice, but I do have some concens w.r.t. including 
>> cffi in the stdlib.
>> 
>> 1. Why is cffi completely separate from ctypes, instead of layered on top of 
>> it? That is, add a utility module to ctypes that can parse C declarations 
>> and generate the right ctypes definitions.
> 
> Because ctypes API is a mess and magic. We needed a cleaner (and much
> smaller) model.

The major advantage of starting over is probably that you can hide the 
complexity, and that opens opportunities for optimizations. That said, I'm not 
convinced that ctypes is unnecessarily complex.
> 
>> 
>> 2. Cffi has a dependencies on pycparser and that module and its dependencies 
>> would therefore also be added to the stdlib (even if they'd be hidden in the 
>> cffi package)
> 
> Yes. pycparser and ply.

Which aren't part of the stdlib right now.

> 
>> 
>> 3. Cffi basicly contains a (limited) C parser, and those are notoriously 
>> hard to get exactly right. Luckily cffi only needs to interpret declarations 
>> and not the full language, but even so this can be a risk of subtle bugs.
> 
> It seems to work.

That's not a confidence-inspiring comment :-).  That said, I use a hacked-up 
fork of pycparser to parse Apple's Cocoa headers for PyObjC and it appears to 
work fine for that.

> 
>> 
>> 4. And finally a technical concern: how well does cffi work with fat 
>> binaries on OSX? In particular, will the distutils support generate cached 
>> data for all architectures supported by a fat binary?
> 
> no idea.

That's something that will have to be resolved before cffi can be included in 
the stdlib; fat binaries are supported by CPython and are used by the binary 
installers.

Ronald


Re: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL.

2013-03-18 Thread Ronald Oussoren

On 18 Mar, 2013, at 8:16, Larry Hastings  wrote:

> 
> This has some consequences.  For example, inspect.getfullargspec, 
> inspect.Signature, and indeed types.FunctionObject and types.CodeObject have 
> no currently defined mechanism for communicating that a parameter is 
> positional-only.  I strongly assert we need such a mechanism, though it could 
> be as simple as having the parameter name be an empty string or None.

inspect.Signature does have support for positional-only arguments, they have 
inspect.Parameter.POSITIONAL_ONLY as their kind.  The others probably don't 
have support for this kind of parameters because there is no Python syntax for 
creating them.

Ronald


Re: [Python-Dev] [Python-checkins] peps: New DSL syntax and slightly changed semantics for the Argument Clinic DSL.

2013-03-18 Thread Ronald Oussoren

On 18 Mar, 2013, at 23:43, Larry Hastings  wrote:

> On 03/18/2013 02:29 AM, Ronald Oussoren wrote:
>> On 18 Mar, 2013, at 8:16, Larry Hastings  wrote:
>>> This has some consequences.  For example, inspect.getfullargspec, 
>>> inspect.Signature, and indeed types.FunctionObject and types.CodeObject 
>>> have no currently defined mechanism for communicating that a parameter is 
>>> positional-only.
>> inspect.Signature does have support for positional-only arguments, they have 
>> inspect.Parameter.POSITIONAL_ONLY as their kind.
> 
> You're right!  And I should have remembered that--I was one of the authors of 
> the inspect.Signature PEP.  It's funny, it can represent something that it 
> has no way of inferring ;-)

It doesn't necessarily have to, builtin functions could grow a __signature__ 
attribute that calculates the signature (possibly from the DSL data). I've done 
something like that in a pre-release version of PyObjC, and with some patching 
of pydoc and inspect (see #17053) I now have useful help information for what 
are basically builtin functions with positional-only arguments.

Ronald



Re: [Python-Dev] Rough idea for adding introspection information for builtins

2013-03-19 Thread Ronald Oussoren

On 19 Mar, 2013, at 10:24, Larry Hastings  wrote:
> 
> 
>>> We'd want one more mild hack: the DSL will support positional 
>>> parameters, and inspect.Signature supports positional parameters, so 
>>> it'd be nice to render that information.  But we can't represent that in 
>>> Python syntax (or at least not yet!), so we can't let ast.parse see it. 
>>> My suggestion: run it through ast.parse, and if it throws a SyntaxError 
>>> see if the problem was a slash.  If it was, remove the slash, reprocess 
>>> through ast.parse, and remember that all parameters are positional-only 
>>> (and barf if there are kwonly, args, or kwargs). 
>> 
>> It will be simpler to use some one-character separator which shouldn't be 
>> used unquoted in the signature. I.e. LF.
> 
> I had trouble understanding what you're suggesting.  What I think you're 
> saying is, "normally these generated strings won't have LF in them.  So let's 
> use LF as a harmless extra character that means 'this is a positional-only 
> signature'."
> 
> At one point Guido suggested / as syntax for exactly this case.  And while 
> the LF approach is simpler programmatically, removing the slash and reparsing 
> isn't terribly complicated; this part will be in Python, after all.  
> Meanwhile, I suggest that for human readability the slash is way more 
> obvious--having a LF in the string mean this is awfully subtle.

You could also add the slash to the start of the signature, for example 
"/(arg1, arg2)", that way the positional only can be detected without trying to 
parse it first and removing a slash at the start is easier than removing it 
somewhere along a signature with arbitrary default values, such as "(arg1='/', 
arg2=4 /) -> 'arg1/arg2'".  The disadvantage is that you can't specify that 
only some of the arguments are positional-only, but that's not supported by 
PyArg_Parse... anyway.
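A sketch of how a leading slash could be detected and stripped before reparsing (parse_signature is a hypothetical helper for illustration, not part of Argument Clinic):

```python
import ast

def parse_signature(sig):
    # Hypothetical helper: a leading "/" marks the whole signature as
    # positional-only; strip it so ast.parse() accepts the rest.
    positional_only = sig.startswith('/')
    if positional_only:
        sig = sig[1:]
    tree = ast.parse('def _' + sig + ': pass')
    return positional_only, [a.arg for a in tree.body[0].args.args]

print(parse_signature('/(arg1, arg2)'))  # (True, ['arg1', 'arg2'])
```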

Ronald



Re: [Python-Dev] How to fix the incorrect shared library extension on linux for 3.2 and newer?

2013-04-10 Thread Ronald Oussoren

On 4 Apr, 2013, at 14:46, Julian Taylor  wrote:

> The values on macos for these variables still look wrong in 3.3.1rc1:
> 
> ./configure --prefix=/Users/jtaylor/tmp/py3.3.1 --enable-shared
> on macosx-10.8-x86_64
> 
> sys.version_info(major=3, minor=3, micro=1, releaselevel='candidate', 
> serial=1)
> SO .so
> EXT_SUFFIX .so
> SHLIB_SUFFIX 0
> 
> 
> the only correct one here is EXT_SUFFIX, SHLIB_SUFFIX should be .dylib 
> (libpython is a .dylib) and .SO possibly too given for what it was used in 
> the past.

SO is explicitly defined as being the same as EXT_SUFFIX (in Makefile.pre.in 
for 3.3), and is gone in the default branch. 

I'm not sure what SHLIB_SUFFIX is supposed to be, because it isn't used other 
than to calculate the suffix to use for extensions, and that shouldn't change on 
OSX for backward compatibility reasons. And if it were changed I'd much rather 
see it changed to something like '.pyext' instead of '.dylib'. But that ship 
has long sailed: the very limited advantages of using a unique filename suffix 
for Python extensions aren't worth the very real breakage of actually changing 
it :-)

Oh, and at least setup.py assumes that 
sysconfig.get_config_var('EXT_SUFFIX').endswith(sysconfig.get_config_var('SHLIB_SUFFIX')).

BTW. This is a problem for a lot of the information you can retrieve with 
sysconfig.get_config_var(): a large subset of the information is only useful 
during the build/installation of Python itself, and as none of it is actually 
documented, using sysconfig.get_config_var() is somewhat of a black art.
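The values being discussed can be inspected directly; a small sketch (get_config_var() silently returns None for unknown keys, which is part of the problem described above):

```python
import sysconfig

# Print the variables discussed above; which keys exist, and what they
# contain, varies per platform and Python version (None means unknown).
for key in ('SO', 'EXT_SUFFIX', 'SHLIB_SUFFIX'):
    print(key, '=', repr(sysconfig.get_config_var(key)))
```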

> 
> 3.3.0 also returns wrong values
> SO .so
> EXT_SUFFIX None
> SHLIB_SUFFIX ""

Could you file an issue on the tracker about this? 

Ronald





Re: [Python-Dev] casefolding in pathlib (PEP 428)

2013-04-12 Thread Ronald Oussoren

On 12 Apr, 2013, at 10:39, Antoine Pitrou  wrote:
>> 
>> 
>> Perhaps it would be best if the code never called lower() or upper()
>> (not even indirectly via os.path.normcase()). Then any case-folding
>> and path-normalization bugs are the responsibility of the application,
>> and we won't have to worry about how to fix the stdlib without
>> breaking backwards compatibility if we ever figure out how to fix this
>> (which I somehow doubt we ever will anyway :-).
> 
> Ok, I've taken a look at the code. Right now lower() is used for two
> purposes:
> 
> 1. comparisons (__eq__ and __ne__)
> 2. globbing and matching
> 
> While (1) could be dropped, for (2) I think we want glob("*.py") to find
> "SETUP.PY" under Windows. Anything else will probably be surprising to
> users of that platform.

Globbing necessarily accesses the filesystem and could in theory do the
right thing, except for the minor detail that there is no easy way
to determine whether the names in a particular folder are compared
case-sensitively or not. 

> 
>> - On Linux, paths are really bytes; on Windows (at least NTFS), they
>> are really (16-bit) Unicode; on Mac, they are UTF-8 in a specific
>> normal form (except on some external filesystems).
> 
> pathlib is just relying on Python 3's sane handling of unicode paths
> (thanks to PEP 383). Bytes paths are never used internally.

At least for OSX the kernel will normalize names for you, at least for HFS+,
and therefore two names that don't compare equal with '==' can refer to the
same file (for example the NFKD and NFKC forms of Löwe). 

Isn't unicode fun :-)

Ronald

> 
>> - On Windows, short names are still supported, making the number of
>> ways to spell the path for any given file even larger.
> 
> They are still supported but I doubt they are still relied on (long
> filenames appeared in Windows 95!). I think in common situations we can
> ignore their existence. Specialized tools like Mercurial may have to
> know that they exist, in order to manage potential collisions (but
> Mercurial isn't really the target audience for pathlib, and I don't
> think they would be interested in such an abstraction).
> 
> Regards
> 
> Antoine.
> 
> 



Re: [Python-Dev] casefolding in pathlib (PEP 428)

2013-04-12 Thread Ronald Oussoren

On 12 Apr, 2013, at 15:00, Christian Heimes  wrote:

> Am 12.04.2013 14:43, schrieb Ronald Oussoren:
>> At least for OSX the kernel will normalize names for you, at least for HFS+,
>> and therefore two names that don't compare equal with '==' can refer to the
>> same file (for example the NFKD and NFKC forms of Löwe). 
>> 
>> Isn't unicode fun :-)
> 
> Seriously, the OSX kernel normalizes unicode forms? It's a cool feature
> and makes sense for the user's POV but ... WTF?

IIRC this happens only for HFS+ filesystems; it is possible to access files on
an NFS share where the filename encoding isn't UTF-8.

> 
> Perhaps we should use the platform's API for the job. Does OSX offer an
> API function to create a case folded and canonical form of a path?
> Windows has PathCchCanonicalizeEx().

This would have to be done on a per-path-element basis, because every directory
in a file's path could be on a separate filesystem with different conventions
(HFS+, HFS+ case-sensitive, NFS-mounted unix filesystem).

I have found sample code that can determine if a directory is on a 
case-sensitive filesystem (attached to 
<http://lists.apple.com/archives/darwin-dev/2007/Apr/msg00036.html>;
it doesn't work in a 64-bit binary, but I haven't checked yet why it doesn't 
work there). 

I don't know if there is a function to determine the filesystem encoding; I 
guess assuming that the special casing is only needed for HFS+ variants could 
work, but I'd have to test that.

Ronald



Re: [Python-Dev] casefolding in pathlib (PEP 428)

2013-04-12 Thread Ronald Oussoren

On 12 Apr, 2013, at 16:59, Antoine Pitrou  wrote:

> Le Fri, 12 Apr 2013 14:43:42 +0200,
> Ronald Oussoren  a écrit :
>> 
>> On 12 Apr, 2013, at 10:39, Antoine Pitrou  wrote:
>>>> 
>>>> 
>>>> Perhaps it would be best if the code never called lower() or
>>>> upper() (not even indirectly via os.path.normcase()). Then any
>>>> case-folding and path-normalization bugs are the responsibility of
>>>> the application, and we won't have to worry about how to fix the
>>>> stdlib without breaking backwards compatibility if we ever figure
>>>> out how to fix this (which I somehow doubt we ever will anyway :-).
>>> 
>>> Ok, I've taken a look at the code. Right now lower() is used for two
>>> purposes:
>>> 
>>> 1. comparisons (__eq__ and __ne__)
>>> 2. globbing and matching
>>> 
>>> While (1) could be dropped, for (2) I think we want glob("*.py") to
>>> find "SETUP.PY" under Windows. Anything else will probably be
>>> surprising to users of that platform.
>> 
>> Globbing necessarily accesses the filesystem and could in theory do
>> the right thing, except for the minor detail of there not being an
>> easy way to determine of the names in a particular folder are
>> compared case sensitive or not. 
> 
> It's also much less efficient, since you have to stat() every potential
> match. e.g. when encountering "SETUP.PY", you would have to stat() (or,
> rather, lstat()) both "setup.py" and "SETUP.PY" to check if they have
> the same st_ino.

I found a way to determine if names in a directory are stored case-sensitively,
at least on OSX. That way you'd only have to perform one call for the directory,
or one call per path element that contains wildcard characters for glob.glob.

That API is definitely platform-specific.
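A hedged sketch of one way to expose such a check from Python (the availability of 'PC_CASE_SENSITIVE' in os.pathconf_names is an assumption about the platform, and this is not necessarily the mechanism the sample code linked earlier uses):

```python
import os

def dir_is_case_sensitive(path):
    # Platform-specific sketch: on OSX, pathconf() can report whether the
    # filesystem holding `path` compares names case-sensitively; the key
    # is simply absent from os.pathconf_names elsewhere.
    if 'PC_CASE_SENSITIVE' not in getattr(os, 'pathconf_names', {}):
        return None  # unknown on this platform
    return bool(os.pathconf(path, 'PC_CASE_SENSITIVE'))

print(dir_is_case_sensitive('.'))
```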

> 
>> At least for OSX the kernel will normalize names for you, at least
>> for HFS+, and therefore two names that don't compare equal with '=='
>> can refer to the same file (for example the NFKD and NFKC forms of
>> Löwe). 
> 
> I don't think differently normalized filenames are as common on OS X as
> differently cased filenames are on Windows, right?

The problem is more that HFS+ stores names with decomposed characters,
which basically means that accents are stored separately from their base
characters. In most input the accented character will be one character,
and hence a naive comparison like this could fail to match:

    name = input()
    for fn in os.listdir('.'):
        if fn.lower() == name.lower():
            print("Found {} in the current directory".format(name))
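A comparison that accounts for the normalization difference could look like this; a sketch using the stdlib unicodedata module:

```python
import unicodedata

def same_name(a, b):
    # Normalize both names to the same form (HFS+ stores decomposed
    # names) before doing the case-insensitive comparison.
    return (unicodedata.normalize('NFD', a).lower()
            == unicodedata.normalize('NFD', b).lower())

composed = 'L\u00f6we'                               # 'Löwe', one code point for ö
decomposed = unicodedata.normalize('NFD', composed)  # 'o' + combining diaeresis
print(composed == decomposed)           # False: plain '==' fails
print(same_name(composed, decomposed))  # True
```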

Ronald

> 
> Regards
> 
> Antoine.
> 
> 



Re: [Python-Dev] libffi inclusion in python

2013-04-18 Thread Ronald Oussoren
On 18 apr. 2013, at 18:09, Ned Deily  wrote:

> In article 
> ,
> Benjamin Peterson  wrote:
>> 2013/4/18 Maciej Fijalkowski :
>>> libffi has bugs sometimes (like this
>>> http://bugs.python.org/issue17580). Now this is a thing that upstream
>>> fixes really quickly, but tracking down issues on bugs.python.org is
>>> annoying (they never get commited as quickly as the upstream). is
>>> there a good reason why cpython has it's own copy of libffi? I
>>> understand historical reasons, but PyPy gets along relying on the
>>> system library, so maybe we can kill the inclusion.
>> 
>> IIRC, it had (has?) some custom windows patches, which no one knows
>> whether they're relevant or not.
> 
> The cpython copy also has custom OS X patches.  I've never looked at 
> them so I don't have a feel for how much work would be involved in 
> migrating to current upstream.  If it's just a matter of supporting 
> universal builds, it could be done with some Makefile hacking to do a 
> lipo dance.  Ronald, any additional thoughts?

It is probably just a matter of supporting universal builds, but I haven't 
checked the state of upstream libffi in the last couple of years. 

The libffi_osx tree is a fork from upstream that started around the time of OSX 
10.4, and possibly earlier. As Thomas mentioned, the upstream maintainers weren't 
very responsive in the past; at the time I hacked on libffi (IIRC for 
Darwin/i386 support) it wasn't even clear how the maintainers could be reached. 

Stripping libffi from python's source tree would be fine by me, but would 
require testing with upstream libffi. AFAIK system libffi on osx wouldn't be 
good enough, it doesn't work properly with clang. 

Ronald


> 
> http://bugs.python.org/issue15194
> 
> Currently, for the OS X installer builds, we build a number of 
> third-party libs that are either missing from OS X (like lzma) or are 
> too out-of-date on the oldest systems we support.  It would be useful to 
> generalize the third-party lib support and move it out of the installer 
> build process so that all builds could take advantage of the libs if 
> needed.  libffi could be added to those.  Of course, that wouldn't help 
> for Windows builds.
> 
> -- 
> Ned Deily,
> n...@acm.org
> 


Re: [Python-Dev] libffi inclusion in python

2013-04-19 Thread Ronald Oussoren

On 19 Apr, 2013, at 10:13, Dirkjan Ochtman  wrote:

> On Thu, Apr 18, 2013 at 11:17 PM, Ronald Oussoren
>  wrote:
>> Stripping libffi from python's source tree would be fine by me, but would 
>> require testing with upstream libffi. AFAIK system libffi on osx wouldn't be 
>> goog enough, it doesn't work properly with clang.
> 
> If you mean http://bugs.python.org/issue17136, I think that has been
> fixed in libffi upstream?

Apparently, but not in /usr/lib/libffi.dylib (at least not when I checked it 
last time, which was at least one minor release of 10.8 ago). That means that 
the binary installers will still have to use a bundled copy of libffi, but as 
Ned wrote that's not really a problem because the script that builds those 
installers already includes some other libraries that are not available on OSX 
or are too old.

That said, I haven't tested yet if upstream libffi actually works although 
others seem to have.

Ronald



Re: [Python-Dev] [Python-checkins] cpython (3.3): backported rev 79713 from 3.4, test_recursion_limit skipped for -O0

2013-04-23 Thread Ronald Oussoren

On 24 Apr, 2013, at 1:32, "lukasz.langa"  wrote:

> http://hg.python.org/cpython/rev/9755036c81d0
> changeset:   83510:9755036c81d0
> branch:  3.3
> parent:  83508:44d764238f0d
> user:Łukasz Langa 
> date:Wed Apr 24 01:29:26 2013 +0200
> summary:
>  backported rev 79713 from 3.4, test_recursion_limit skipped for -O0
> 
> files:
>  Lib/test/test_threading.py |  3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> 
> diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py
> --- a/Lib/test/test_threading.py
> +++ b/Lib/test/test_threading.py
> @@ -754,7 +754,8 @@
> lock = threading.Lock()
> self.assertRaises(RuntimeError, lock.release)
> 
> -@unittest.skipUnless(sys.platform == 'darwin', 'test macosx problem')
> +@unittest.skipUnless(sys.platform == 'darwin' and 
> test.support.python_is_optimized(),
> + 'test macosx problem')

Wouldn't it be better to just fix the issue? thread_pthread already sets an 
explicit stack size on OSX, but that value is apparently too small.

Ronald



Re: [Python-Dev] [Python-checkins] cpython (3.3): backported rev 79713 from 3.4, test_recursion_limit skipped for -O0

2013-04-23 Thread Ronald Oussoren

On 24 Apr, 2013, at 8:14, Ronald Oussoren  wrote:

> 
> On 24 Apr, 2013, at 1:32, "lukasz.langa"  wrote:
> 
>> http://hg.python.org/cpython/rev/9755036c81d0
>> changeset:   83510:9755036c81d0
>> branch:  3.3
>> parent:  83508:44d764238f0d
>> user:Łukasz Langa 
>> date:Wed Apr 24 01:29:26 2013 +0200
>> summary:
>> backported rev 79713 from 3.4, test_recursion_limit skipped for -O0
>> 
>> files:
>> Lib/test/test_threading.py |  3 ++-
>> 1 files changed, 2 insertions(+), 1 deletions(-)
>> 
>> 
>> diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py
>> --- a/Lib/test/test_threading.py
>> +++ b/Lib/test/test_threading.py
>> @@ -754,7 +754,8 @@
>>lock = threading.Lock()
>>self.assertRaises(RuntimeError, lock.release)
>> 
>> -@unittest.skipUnless(sys.platform == 'darwin', 'test macosx problem')
>> +@unittest.skipUnless(sys.platform == 'darwin' and 
>> test.support.python_is_optimized(),
>> + 'test macosx problem')
> 
> Wouldn't it be better to just fix the issue? thread_pthread already sets an 
> explicit stack size on OSX, but that value is appearently too small.

In particular, this patch appears to fix the crash that's the reason for 
disabling the test:

diff --git a/Python/thread_pthread.h b/Python/thread_pthread.h
--- a/Python/thread_pthread.h
+++ b/Python/thread_pthread.h
@@ -28,7 +28,7 @@
  */
 #if defined(__APPLE__) && defined(THREAD_STACK_SIZE) && THREAD_STACK_SIZE == 0
 #undef  THREAD_STACK_SIZE
-#define THREAD_STACK_SIZE   0x50
+#define THREAD_STACK_SIZE   0x55
 #endif
 #if defined(__FreeBSD__) && defined(THREAD_STACK_SIZE) && THREAD_STACK_SIZE == 0
 #undef  THREAD_STACK_SIZE

Without this patch test_recursion_limit fails due to a crash, with the patch 
the test passes (debug build, x86_64, OSX 10.8.3).

Ronald



Re: [Python-Dev] Issue 11406: adding os.scandir(), a directory iterator returning stat-like info

2013-05-10 Thread Ronald Oussoren

On 10 May, 2013, at 14:16, Antoine Pitrou  wrote:

> Le Fri, 10 May 2013 13:46:30 +0200,
> Christian Heimes  a écrit :
>> 
>> Hence I'm +1 on the general idea but -1 on something stat like. IMHO
>> os.scandir() should yield four objects:
>> 
>> * name
>> * inode
>> * file type or DT_UNKNOWN
>> * stat_result or None
>> 
>> stat_result shall only be returned when the operating systems
>> provides a full stat result as returned by os.stat().
> 
> But what if some systems return more than the file type and less than a
> full stat result? The general problem is POSIX's terrible inertia.
> I feel that a stat result with some None fields would be an acceptable
> compromise here.

But how do you detect that the st_mode field on systems with a d_type is 
incomplete, as opposed to a system that can return a full st_mode from its 
readdir equivalent where the permission bits happen to be zero? One 
option would be to add a file type field to stat_result; IIRC this was 
mentioned in some revisions of the extended stat_result proposal over on 
python-ideas.

Ronald

> 
> Regards
> 
> Antoine.
> 
> 



Re: [Python-Dev] Issue 11406: adding os.scandir(), a directory iterator returning stat-like info

2013-05-10 Thread Ronald Oussoren

On 10 May, 2013, at 15:54, Antoine Pitrou  wrote:

> Le Fri, 10 May 2013 15:46:21 +0200,
> Christian Heimes  a écrit :
> 
>> Am 10.05.2013 14:16, schrieb Antoine Pitrou:
>>> But what if some systems return more than the file type and less
>>> than a full stat result? The general problem is POSIX's terrible
>>> inertia. I feel that a stat result with some None fields would be
>>> an acceptable compromise here.
>> 
>> POSIX only defines the d_ino and d_name members of struct dirent.
>> Linux, BSD and probably some other platforms also happen to provide
>> d_type. The other members of struct dirent (d_reclen, d_namlen)
>> aren't useful in Python space by themselves.
>> 
>> d_type and st_mode aren't compatible in any way. As you know st_mode
>> also contains POSIX permission information. The file type is encoded
>> with a different set of bits, too. Future file types aren't mapped to
>> S_IF* constants for st_mode.
> 
> Thank you and Ronald for clarifying. This does make the API design a
> bit bothersome. We want to expose as much information as possible in a
> cross-platform way and with a flexible granularity, but doing so might
> require a gazillion of namedtuple fields (platonically, as much as one
> field per stat bit).

One field per stat bit is overkill; file permissions are well known enough
to keep them as a single item. 

Most if not all uses of the st_mode field can be covered by adding just
"filetype" and "permissions" fields. That would also make it possible to
use stat_result in os.scandir() without losing information (it would 
have filetype != None, and permissions and st_mode equal to None, on systems 
with d_type).

> 
>> For d_ino you also need the device number from the directory because
>> the inode is only unique within a device.
> 
> But hopefully you've already stat'ed the directory ;)

Why? There's no need to stat the directory when implementing os.walk using
os.scandir (for systems that return filetype information in the API used
by os.scandir).  Anyway, setting st_ino in the result of os.scandir is
harmless, even though using st_ino is uncommon.
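The os.walk-over-scandir idea described here is roughly what the stdlib ended up doing once os.scandir() was added in Python 3.5; a simplified sketch (error handling, symlink options, and topdown ordering omitted):

```python
import os

def walk(top):
    # Simplified sketch: one scandir() pass per directory, using the
    # file type cached in each entry instead of a stat() call per name.
    dirs, files = [], []
    for entry in os.scandir(top):
        if entry.is_dir(follow_symlinks=False):
            dirs.append(entry.name)
        else:
            files.append(entry.name)
    yield top, dirs, files
    for name in dirs:
        yield from walk(os.path.join(top, name))
```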

Getting st_dev from the directory isn't good anyway, for example when
using bind mounts to mount a single file into a different directory (which 
is a convenient way to make a configuration file available in a chroot 
environment)

Ronald

> 
> Regards
> 
> Antoine.
> 
> 



Re: [Python-Dev] Issue 11406: adding os.scandir(), a directory iterator returning stat-like info

2013-05-10 Thread Ronald Oussoren

On 10 May, 2013, at 16:30, MRAB  wrote:
>> 
> [snip]
> In the python-ideas list there's a thread "PEP: Extended stat_result"
> about adding methods to stat_result.
> 
> Using that, you wouldn't necessarily have to look at st.st_mode. The method 
> could perform an additional os.stat() if the field was None. For
> example:
> 
> # Build lists of files and directories in path
> files = []
> dirs = []
> for name, st in os.scandir(path):
> if st.is_dir():
> dirs.append(name)
> else:
> files.append(name)
> 
> That looks much nicer.

I'd prefer a filetype field, with 'st.filetype == "dir"' instead of
'st.is_dir()'. The actual type of the filetype values is less important; an enum
type would also work, although bootstrapping that type could be interesting.
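For illustration only (the names here are hypothetical, and the stdlib enum module only landed later, in Python 3.4), such a filetype enum could look like:

```python
import enum
import stat

class FileType(enum.Enum):
    """Illustrative only: a possible filetype enum for stat-like results."""
    dir = "dir"
    file = "file"
    symlink = "symlink"
    other = "other"

def filetype_from_mode(st_mode):
    # Map an st_mode value to the illustrative FileType members.
    if stat.S_ISDIR(st_mode):
        return FileType.dir
    if stat.S_ISREG(st_mode):
        return FileType.file
    if stat.S_ISLNK(st_mode):
        return FileType.symlink
    return FileType.other
```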

Ronald




Re: [Python-Dev] Segmentation fault on 3.4 with --pydebug

2013-05-30 Thread Ronald Oussoren

On 30 May, 2013, at 13:08, Łukasz Langa  wrote:

> This happens after Benjamin's changes in 83937. Anybody else seeing this?
> 
> Intel i5 2.4 GHz, Mac OS X 10.8.3, clang
> 
> $ hg up default
> $ make distclean
> $ MACOSX_DEPLOYMENT_TARGET=10.8 ./configure --with-pydebug
> $ make
> $ ./python.exe -Wd -m test.regrtest test_exceptions
> [1/1] test_exceptions
> Fatal Python error: Segmentation fault
> 
> Current thread 0x7fff74254180:
>   File 
> "/Users/ambv/Documents/Projekty/Python/cpython/py34/Lib/test/test_exceptions.py",
>  line 453 in f
>   File 
> "/Users/ambv/Documents/Projekty/Python/cpython/py34/Lib/test/test_exceptions.py",
>  line 453 in f
>   File 
> "/Users/ambv/Documents/Projekty/Python/cpython/py34/Lib/test/test_exceptions.py",
>  line 453 in f
>   ... (repeated a 100 times)
> Command terminated abnormally.
> 
> 
> 
> Everything runs fine without --with-pydebug (or before 83937 with 
> --with-pydebug).

Issue #18075 contains a patch. I probably won't have time to commit until
Sunday, but feel free to apply the patch yourself :-)

Ronald



Re: [Python-Dev] Putting the Mac Build in the Apple App Store

2013-06-04 Thread Ronald Oussoren

On 4 Jun, 2013, at 6:44, Raymond Hettinger  wrote:

> Does anyone know what we would need to do to get Python in the Apple 
> application store as a free App?
> 
> The default security settings on OS X 10.8 block the installation of the DMG 
> (or any software downloaded outside the app store).   A number of my students 
> are having difficulty getting around it, so being in the app store will help.
> 
> If we were in the app store, installation and upgrade would be a piece of 
> cake.

A problem with the app store is that the Python installation would then have to
be an app (for example IDLE.app), and that the application must be sandboxed.
The latter is a showstopper, as scripts run with the interpreter would be
sandboxed as well and hence couldn't access most of the system.

A better solution for the problem with OSX 10.8's security settings is to sign
the installer with a developer ID. It can then be opened by double clicking
because the app is provided by an "identified developer".  A problem with
signing the installer is that this requires changes to the installer: we're
currently using an ancient installer format that cannot be signed. That should
be changed some time in the future anyway, and signing the installer could be a
good reason to work on that.

BTW. There is a workaround that makes it possible to install without signing 
the installer: right-click on the installer and select "open" (instead of 
double clicking the installer). The system will then give a scary warning, but 
will allow installation anyway.

Ronald
> 
> 
> Raymond
> 
> 



Re: [Python-Dev] Validating SSL By Default (aka Including a Cert Bundle in CPython)

2013-06-04 Thread Ronald Oussoren

On 3 Jun, 2013, at 7:58, Benjamin Peterson  wrote:

> 2013/6/2 Donald Stufft :
>> As of right now, as far as I can tell, Python does not validate HTTPS
>> certificates by default. As far as I can tell this is because there is no
>> guaranteed certificates available.
>> 
>> So I would like to propose that CPython adopt the Mozilla SSL certificate
>> list and include it in core, and switch over the API's so that they verify
>> HTTPS by default.
> 
> +1
> 
>> 
>> Ideally this would take the shape of attempting to locate the system
>> certificate store if possible, and if that doesn't work falling back to the
>> bundled certificates. That way the various Linux distros can easily have
>> their copies of Python depend soley on their built in certs, but Windows,
>> OSX, Source compiles etc will all still have a fallback value.
> 
> My preference would be actually be for the included certificates file
> to be used by default. This would provide a consistent experience
> across platforms. We could provide options to look for system cert
> repositories if desired.

I'd prefer to use the system CA list when it's available. I've had to hunt
down the CA list for a number of applications when adding a custom CA for
internal use, and that's not fun; using the system list is much friendlier to
users.
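As a hedged aside, later Python releases grew ssl.create_default_context(), which tries to load the platform's default trust store and verifies certificates by default; a quick check:

```python
import ssl

# Later Pythons added ssl.create_default_context(), which loads the
# platform's default CA certificates and enables verification by default.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on
```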

Ronald



Re: [Python-Dev] End of the mystery "@README.txt Mercurial bug"

2013-06-26 Thread Ronald Oussoren

On 26 Jun, 2013, at 14:18, Eric V. Smith  wrote:

> On 6/26/2013 6:43 AM, a.cava...@cavallinux.eu wrote:
>> .. or having hg "purging" unwanted build artifact (probably cleaning up
>> the .hgignore file first)
> 
> How would that work? How could hg purge the .bak, .orig, .rej, .old,
> etc. files?
> 
 find $(srcdir)/* ...
 
 to avoid this problem. It won't expand the .hg top-level directory.
>>> 
>>>   Or find \( -type d -name .hg -prune \) -o ...
> 
> I'm torn. Yours is more obvious, but we'd likely need to add .svn, .git,
> etc. Maybe find $(srcdir)/[a-zA-Z]* ... would be good enough to ignore
> all dot directories/files?

Is the find command in the distclean target still needed? The comment for the 
distclean target says it is used to clean up the tree for distribution, but 
that's easier to accomplish by using a clean checkout.

The target is still useful to get a clean tree when you're building with srcdir 
== builddir, but you don't need the find command for that.

Ronald



Re: [Python-Dev] End of the mystery "@README.txt Mercurial bug"

2013-06-26 Thread Ronald Oussoren

On 26 Jun, 2013, at 15:39, Barry Warsaw  wrote:

> On Jun 26, 2013, at 09:04 AM, Eric V. Smith wrote:
> 
>> I run 'make distclean' fairly often, but maybe it's just out of habit.
>> If I'm adding/deleting modules, I want to make sure there are no build
>> artifacts. And since I have modified files, a clean checkout won't help
>> (easily, at least).
> 
> As do I.  I think it still makes sense for us to include a working distclean,
> especially since it's a very common target for make-based builds.

Sure, but is it necessary to run the find command for removing backup files in 
make distclean?

When the find command is removed you'd still end up with a tree that's clean 
enough to perform a build from scratch, although the tree won't be perfectly 
clean. 

BTW. I usually build in a separate directory, that makes cleaning up even 
easier :-)

Ronald
> 
> -Barry
> 



[Python-Dev] Hooking into super() attribute resolution

2013-07-02 Thread Ronald Oussoren
Hi,

Below is a very preliminary draft PEP for adding a special method that can be
used to hook into the attribute resolution process of the super object.

The primary use case for this special method is classes that perform
custom logic in their __getattribute__ method, where the default behavior of
super (peeking in the class __dict__) is not appropriate.  The primary reason I
wrote this proposal is PyObjC: it dynamically looks up methods in its
__getattribute__ and caches the result in the class __dict__; because of this
super() will often not work correctly, and therefore I'm currently shipping a
custom subclass of super() that basically contains an in-line implementation of
the hook that would be used by PyObjC.

I have a partial implementation of the hook system in issue 18181  and a PyObjC 
patch that uses it. The implementation currently does not contain tests, and 
I'm sure that I'll find edge cases that I haven't thought about yet when I add 
tests. 
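As an illustration of what the hook enables (the names are hypothetical and this is not part of the proposal or its C implementation), the intended lookup order can be emulated in pure Python with a super-like proxy that consults a per-class hook before falling back to the class __dict__:

```python
class HookedSuper:
    """Illustration only: emulate the proposed lookup in pure Python.

    Walks the MRO past `cls` and asks each class's (hypothetical)
    __getattribute_super__ hook for the attribute; classes without the
    hook get the default behavior of peeking in their __dict__.
    """

    def __init__(self, cls, obj):
        self._cls = cls
        self._obj = obj

    def __getattr__(self, name):
        mro = type(self._obj).__mro__
        for klass in mro[mro.index(self._cls) + 1:]:
            if "__getattribute_super__" in vars(klass):
                # getattr() on the class unwraps the staticmethod
                hook = getattr(klass, "__getattribute_super__")
                try:
                    value = hook(klass, name, self._obj, type(self._obj))
                except AttributeError:
                    continue
            elif name in vars(klass):
                value = vars(klass)[name]
            else:
                continue
            if hasattr(value, "__get__"):   # bind descriptors (e.g. functions)
                return value.__get__(self._obj, klass)
            return value
        raise AttributeError(name)


class Dynamic:
    # Hypothetical hook: resolves methods dynamically instead of via __dict__.
    @staticmethod
    def __getattribute_super__(cls, name, obj, owner):
        if name == "greet":
            return lambda self: "dynamic greet"
        raise AttributeError(name)


class Child(Dynamic):
    def greet(self):
        return "child + " + HookedSuper(Child, self).greet()
```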

Ronald




PEP: TODO
Title: Hooking into super attribute resolution
Version: $Revision$
Last-Modified: $Date$
Author: Ronald Oussoren 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Jun-2013
Post-History: 2-Jul-2013


Abstract
========

In current python releases the attribute resolution of the `super class`_
peeks in the ``__dict__`` attribute of classes on the MRO to look
for attributes. This PEP introduces a hook that classes can use
to override that behavior for specific classes.


Rationale
=========

Peeking in the class ``__dict__`` works for regular classes, but can
cause problems when a class dynamically looks up attributes in a
``__getattribute__`` method.

The new hook makes it possible to introduce the same customization for
attribute lookup through the `super class`_.


The superclass attribute lookup hook
====================================

In C code
---------

A new slot ``tp_getattro_super`` is added to the ``PyTypeObject`` struct. The
``tp_getattro`` slot for super will call this slot when it is not ``NULL``,
otherwise it will peek in the class ``tp_dict``.

The slot has the following prototype::

PyObject* (*getattrosuperfunc)(PyTypeObject* tp, PyObject* self, PyObject* 
name);

The function should perform attribute lookup for *name*, but only looking in
type *tp* (which will be one of the types on the MRO for *self*) and without 
looking
in the instance *__dict__*.

The function returns ``NULL`` and raises an exception when the attribute cannot
be found. Exceptions other than ``AttributeError`` will cause super's attribute
resolution to fail.


In Python code
--------------

A Python class can contain a definition for a method ``__getattribute_super__`` 
with
the following prototype::

   def __getattribute_super__(self, cls, name): pass

The method should perform attribute lookup for *name* on instance *self* while
only looking at *cls* (it should not look in super classes or the instance
*__dict__*).


Alternative proposals
---------------------

Reuse ``tp_getattro``
.....................

It would be nice to avoid adding a new slot, thus keeping the API simpler and 
easier
to understand.  A comment on `Issue 18181`_ asked about reusing the 
``tp_getattro`` slot,
that is super could call the ``tp_getattro`` slot of all methods along the MRO.

AFAIK that won't work because ``tp_getattro`` will look in the instance 
``__dict__`` before
it tries to resolve attributes using classes in the MRO. This would mean that 
using
``tp_getattro`` instead of peeking the class dictionaries changes the semantics 
of
the `super class`_.


Open Issues
===========

* The names of the new slot and magic method are far from settled.

* I'm not too happy with the prototype for the new hook.

* Should ``__getattribute_super__`` be a class method instead?


References
==========

* `Issue 18181`_ contains a prototype implementation

  The prototype uses different names than this proposal.


Copyright
=========

This document has been placed in the public domain.

.. _`Issue 18181`: http://bugs.python.org/issue18181

.. _`super class`: http://docs.python.org/3/library/functions.html?highlight=super#super


Re: [Python-Dev] PEP 446: Add new parameters to configure the inherance of files and for non-blocking sockets

2013-07-04 Thread Ronald Oussoren

On 4 Jul, 2013, at 13:19, Victor Stinner  wrote:

> 2013/7/4 Victor Stinner :
>> Add a new optional *cloexec* on functions creating file descriptors:
>> 
>> * ``io.FileIO``
>> * ``io.open()``
>> * ``open()``
> 
> The PEP 433 proposes adding an "e" mode to open in alternatives. I
> didn't keep this idea because the fopen() function of the GNU libc
> library has no mode for the O_NONBLOCK flag. IMO it is not interesting
> to mention it in the PEP 466.

I don't understand your reasoning: what has GNU libc to do with adding an "e"
mode to io.open?

BTW. I have no particular fondness for an "e" flag, adding a cloexec flag
would be fine and I'm just curious.

Ronald



Re: [Python-Dev] lament for the demise of unbound methods

2013-07-04 Thread Ronald Oussoren

On 4 Jul, 2013, at 13:21, Chris Withers  wrote:

> Hi All,
> 
> In Python 2, I can figure out whether I have a method or a function, and, 
> more importantly, for an unbound method, I can figure out what class the 
> method belongs to:
> 
> >>> class MyClass(object):
> ...   def method(self): pass
> ...
> >>> MyClass.method
> <unbound method MyClass.method>
> >>> MyClass.method.im_class
> <class '__main__.MyClass'>
> 
> There doesn't appear to be any way in Python 3 to do this, which is a little 
> surprising and frustrating...
> 
> What am I missing here?

You can find the fully qualified name of the method with the qualname attribute:

>>> class A:
...def method(self): pass
... 
>>> A.method.__qualname__
'A.method'
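A hedged aside: __qualname__ also gives you the name of the defining class (though not the class object itself, unlike im_class), by splitting on the last dot:

```python
class MyClass:
    def method(self):
        pass

# __qualname__ is "MyClass.method": everything before the last dot names
# the defining scope, a rough stand-in for Python 2's im_class.
defining = MyClass.method.__qualname__.rsplit(".", 1)[0]
```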

Ronald
> 
> Chris
> 
> -- 
> Simplistix - Content Management, Batch Processing & Python Consulting
>- http://www.simplistix.co.uk



Re: [Python-Dev] Hooking into super() attribute resolution

2013-07-06 Thread Ronald Oussoren
I've updated the implementation in issue 18181 
<http://bugs.python.org/issue18181> while adding some tests, and have updated 
the proposal as well. 

The proposal has some open issues at the moment, most important of which is the 
actual signature for the new special method; in particular I haven't been able 
to decide if this should be an instance-, class- or static method. It is a 
static method in the proposal and prototype, but I'm not convinced that that is 
the right solution.

Ronald




PEP: TODO
Title: Hooking into super attribute resolution
Version: $Revision$
Last-Modified: $Date$
Author: Ronald Oussoren 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Jun-2013
Post-History: 2-Jul-2013, ?


Abstract
========

In current python releases the attribute resolution of the `super class`_
peeks in the ``__dict__`` attribute of classes on the MRO to look
for attributes. This PEP introduces a hook that classes can use
to override that behavior for specific classes.


Rationale
=========

Peeking in the class ``__dict__`` works for regular classes, but can
cause problems when a class dynamically looks up attributes in a
``__getattribute__`` method.

The new hook makes it possible to introduce the same customization for
attribute lookup through the `super class`_.


The superclass attribute lookup hook
====================================

In C code
---------

A new slot ``tp_getattro_super`` is added to the ``PyTypeObject`` struct. The
``tp_getattro`` slot for super will call this slot when it is not ``NULL``,
and will raise an exception when it is not set (which shouldn't happen because
the method is implemented for :class:`object`).

The slot has the following prototype::

PyObject* (*getattrosuperfunc)(PyTypeObject* cls, PyObject* name,
PyObject* object, PyObject* owner);

The function should perform attribute lookup on *object* for *name*, but only
looking in type *tp* (which will be one of the types on the MRO for *self*)
and without looking in the instance *__dict__*.

The function returns ``NULL`` and raises an exception when the attribute cannot
be found. Exceptions other than ``AttributeError`` will cause super's attribute
resolution to fail.

The implementation of the slot for the :class:`object` type is
``PyObject_GenericGetAttrSuper``, which peeks in the ``tp_dict`` for *cls*.

Note that *owner* and *object* will be the same object when using a
class-mode super.


In Python code
--------------

A Python class can contain a definition for a static method
``__getattribute_super__`` with the following prototype::

   def __getattribute_super__(cls, name, object, owner): pass

The method should perform attribute lookup for *name* on instance *self* while
only looking at *cls* (it should not look in super classes or the instance
*__dict__*).

XXX: I haven't got a clue at the moment if the method should be an
instance-, class- or staticmethod. The prototype uses a staticmethod.

XXX: My prototype automagically makes this a static method, just like __new__ is
made into a static method. That's more convenient, but also (too?) magical.

XXX: Should this raise AttributeError or return a magic value to signal that
an attribute cannot be found (such as NotImplemented, used in the comparison
operators)? I'm currently using an exception, a magical return value would
be slightly more efficient because the exception machinery is not invoked.


Alternative proposals
---------------------

Reuse ``tp_getattro``
.....................

It would be nice to avoid adding a new slot, thus keeping the API simpler and
easier to understand.  A comment on `Issue 18181`_ asked about reusing the
``tp_getattro`` slot, that is super could call the ``tp_getattro`` slot of all
methods along the MRO.

AFAIK that won't work because ``tp_getattro`` will look in the instance
``__dict__`` before it tries to resolve attributes using classes in the MRO.
This would mean that using ``tp_getattro`` instead of peeking the class
dictionaries changes the semantics of the `super class`_.


Open Issues
===========

* The names of the new slot and magic method are far from settled.

* I'm not too happy with the prototype for the new hook.

* Should ``__getattribute_super__`` be a class method instead?

  -> Yes? The method looks up a named attribute name of an object in
 a specific class. Is also likely needed to deal with @classmethod
 and super(Class, Class)

* Should ``__getattribute_super__`` be defined on object?

  -> Yes: makes it easier to delegate to the default implementation

* This doesn't necessarily work for class method super class
   (e.g. super(object, object))...


References
==========

* `Issue 18181`_ contains a prototype implementation


Copyright
=========

This document has been placed in the public domain.

.. _`Issue 18181`: http://bugs.python.org/issue18181

.. _`super class`: http://docs.python.org/3/library/functions.html?highlight=super#super

Re: [Python-Dev] [Python-checkins] cpython (3.3): Issue #17860: explicitly mention that std* streams are opened in binary mode by

2013-07-06 Thread Ronald Oussoren

On 6 Jul, 2013, at 13:59, R. David Murray  wrote:

> On Sat, 06 Jul 2013 10:25:19 +0200, ronald.oussoren 
>  wrote:
>> http://hg.python.org/cpython/rev/a2c2ffa1a41c
>> changeset:   84453:a2c2ffa1a41c
>> branch:  3.3
>> parent:  84449:df79735b21c1
>> user:Ronald Oussoren 
>> date:Sat Jul 06 10:23:59 2013 +0200
>> summary:
>>  Issue #17860: explicitly mention that std* streams are opened in binary 
>> mode by default.
>> 
>> The documentation does mention that the streams are opened in text mode
>> when univeral_newlines is true, but not that that they are opened in
>> binary mode when that argument is false and that seems to confuse at
>> least some users.
>> 
>> files:
>>  Doc/library/subprocess.rst |  6 --
>>  1 files changed, 4 insertions(+), 2 deletions(-)
>> 
>> 
>> diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst
>> --- a/Doc/library/subprocess.rst
>> +++ b/Doc/library/subprocess.rst
>> @@ -293,7 +293,8 @@
>>If *universal_newlines* is ``True``, the file objects *stdin*, *stdout* 
>> and
>>*stderr* will be opened as text streams in :term:`universal newlines` mode
>>using the encoding returned by :func:`locale.getpreferredencoding(False)
>> -   `.  For *stdin*, line ending characters
>> +   `, otherwise these streams will be opened
>> +   as binary streams.  For *stdin*, line ending characters
>>``'\n'`` in the input will be converted to the default line separator
>>:data:`os.linesep`.  For *stdout* and *stderr*, all line endings in the
>>output will be converted to ``'\n'``.  For more information see the
> 
> IMO, either the default should be mentioned first, or the default
> should be mentioned in a parenthetical.  Otherwise it sounds like
> newline translation is being done in both modes.  Logically that makes
> no sense, so the above construction will likely lead to, at a minimum,
> an interruption in the flow for the reader, and at worse even more
> confusion than not mentioning it at all.

You've got a point there. Converting the text (", otherwise ...") to a
parenthetical seems to be the cleanest fix; creating a separate sentence for the
``False`` case introduces duplication unless I restructure the text.

Ronald

> 
> --David



Re: [Python-Dev] PEP 446: Add new parameters to configure the inherance of files and for non-blocking sockets

2013-07-06 Thread Ronald Oussoren

On 6 Jul, 2013, at 14:04, Victor Stinner  wrote:

> 2013/7/6 Charles-François Natali :
>>> I've read your "Rejected Alternatives" more closely and Ulrich
>>> Drepper's article, though I think the article also supports adding
>>> a blocking (default True) parameter to open() and os.open(). If you
>>> try to change that default on a platform where it doesn't work, an
>>> exception should be raised.
>> 
>> Contrarily to close-on-exec, non-blocking only applies to a limited
>> type of files (e.g. it doesn't work for regular files, which represent
>> 90% of open() use cases).
> 
> What do you mean by "does not work"? On Linux, O_NONBLOCK flag can be
> set on regular files, sockets, pipes, etc.

I guess he means that O_NONBLOCK doesn't actually do anything for regular
files: regular files are always ready for I/O as far as select et al. are
concerned, and I/O will block when data has to be read from disk (or, in
the case of a network filesystem, from another machine).
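A POSIX-only sketch of the difference (fcntl is unavailable on Windows): on a pipe, O_NONBLOCK makes an empty read raise instead of block, which is exactly what never happens for regular files:

```python
import fcntl
import os

def set_nonblocking(fd):
    """Set O_NONBLOCK on a file descriptor (POSIX-only sketch)."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

r, w = os.pipe()
set_nonblocking(r)
try:
    os.read(r, 1)            # pipe is empty: the read would block...
    would_block = False
except BlockingIOError:      # ...so non-blocking mode raises instead
    would_block = True

os.write(w, b"x")            # once data is available the read succeeds
data = os.read(r, 1)
```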

Ronald



Re: [Python-Dev] [Python-checkins] cpython (3.3): Issue #17860: explicitly mention that std* streams are opened in binary mode by

2013-07-06 Thread Ronald Oussoren

On 6 Jul, 2013, at 14:09, Ronald Oussoren  wrote:

> 
> On 6 Jul, 2013, at 13:59, R. David Murray  wrote:
>> 
>> IMO, either the default should be mentioned first, or the default
>> should be mentioned in a parenthetical.  Otherwise it sounds like
>> newline translation is being done in both modes.  Logically that makes
>> no sense, so the above construction will likely lead to, at a minimum,
>> an interruption in the flow for the reader, and at worse even more
>> confusion than not mentioning it at all.
> 
> You've got a point there. Converting the next text (", otherwise ...") to a 
> parententical
> seems to be the cleanest fix, creating a separate sentence for the ``False`` 
> case introduces
> duplication unless I restructure the text.

I didn't like the parenthetical after all. Would this work for you?:

 
-   If *universal_newlines* is ``True``, the file objects *stdin*, *stdout* and
-   *stderr* will be opened as text streams in :term:`universal newlines` mode
+   If *universal_newlines* is ``False`` the file objects *stdin*, *stdout* and
+   *stderr* will be opened as binary streams, and no line ending conversion is 
done.
+
+   If *universal_newlines* is ``True``, these file objects
+   will be opened as text streams in :term:`universal newlines` mode
using the encoding returned by :func:`locale.getpreferredencoding(False)
-   `, otherwise these streams will be opened
-   as binary streams.  For *stdin*, line ending characters
+   `.  For *stdin*, line ending characters
``'\n'`` in the input will be converted to the default line separator
:data:`os.linesep`.  For *stdout* and *stderr*, all line endings in the
output will be converted to ``'\n'``.  For more information see the

That is, a new paragraph is added before the existing one to explain the
behavior of "not universal_newlines".

Ronald


> 
> Ronald
> 
>> 
>> --David


Re: [Python-Dev] Rough idea for adding introspection information for builtins

2013-07-06 Thread Ronald Oussoren

On 6 Jul, 2013, at 19:33, Larry Hastings  wrote:
> 
> Once builtins have introspection information, pydoc can do a better job, and 
> Argument Clinic can stop generating its redundant prototype line.

Not entirely on topic, but close enough: pydoc currently doesn't use the 
__signature__ information at all. Adding such support would be easy enough, see 
#17053 for an implementation ;-)

Ronald


Re: [Python-Dev] Rough idea for adding introspection information for builtins

2013-07-06 Thread Ronald Oussoren

On 7 Jul, 2013, at 4:48, Larry Hastings  wrote:

> 
> If we combine that with the admittedly-new "/" indicating "all previous 
> parameters are positional-only",

Signature objects use a name in angled brackets to indicate that a parameter is 
positional only, for example "input(<prompt>)". That might be an alternative to 
adding a "/" in the argument list in pydoc's output.

Ronald


Re: [Python-Dev] Rough idea for adding introspection information for builtins

2013-07-07 Thread Ronald Oussoren

On 7 Jul, 2013, at 13:35, Larry Hastings  wrote:
> 
> On 07/07/2013 07:25 AM, Ronald Oussoren wrote:
>> Signature objects use a name in angled brackets to indicate that a parameter 
>> is positional only, for example "input(<prompt>)". That might be an 
>> alternative to adding a "/" in the argument list in pydoc's output.
>> 
> 
> I wasn't aware that Signature objects currently had any support whatsoever 
> for positional-only parameters.  Yes, in theory they do, but in practice they 
> have never seen one, because positional-only parameters only occur in 
> builtins and Signature objects have no metadata for builtins.  (The very 
> problem Argument Clinic eventually hopes to solve!)
> 
> Can you cite an example of this, so I may examine it?

I have a branch of PyObjC that uses this: 
<https://bitbucket.org/ronaldoussoren/pyobjc-3.0-unstable/overview>. That 
branch isn't quite stable yet, but does add a __signature__ slot to 
objc.selector and objc.function (basically methods of Cocoa classes and 
automatically wrapped global functions), both of which only have positional-only 
arguments. With the patch for pydoc/inspect I mentioned earlier I can then 
generate somewhat useful documentation for Cocoa classes using pydoc.

A word of warning though: the PyObjC source code isn't the most approachable, 
the code that generates the Signature object is actually in python 
(callable_signature in pyobjc-core/Lib/objc/_callable_docstr.py)

Ronald

> 
> 
> /arry



Re: [Python-Dev] Rough idea for adding introspection information for builtins

2013-07-07 Thread Ronald Oussoren

On 7 Jul, 2013, at 19:20, Larry Hastings  wrote:

> On 07/07/2013 01:42 PM, Ronald Oussoren wrote:
>> On 7 Jul, 2013, at 13:35, Larry Hastings 
>>  wrote:
>> 
>>> On 07/07/2013 07:25 AM, Ronald Oussoren wrote:
>>> 
>>>> Signature objects use a name in angled brackets to indicate that a 
>>>> parameter is positional only, for example "input(<prompt>)". That might be 
>>>> an alternative to adding a "/" in the argument list in pydoc's output.
>>>> 
>>>> 
>>> I wasn't aware that Signature objects currently had any support whatsoever 
>>> for positional-only parameters.  Yes, in theory they do, but in practice 
>>> they have never seen one, because positional-only parameters only occur in 
>>> builtins and Signature objects have no metadata for builtins.  (The very 
>>> problem Argument Clinic eventually hopes to solve!)
>>> 
>>> Can you cite an example of this, so I may examine it?
>>> 
>> I have a branch of PyObjC that uses this: 
>> <https://bitbucket.org/ronaldoussoren/pyobjc-3.0-unstable/overview>
>> . That branch isn't quite stable yet, but does add a __signature__ slot to 
>> objc.selector and objc.function (basicly methods of Cocoa classes and 
>> automaticly wrapped global functions), both of which only have 
>> positional-only arguments. With the patch for pydoc/inspect I mentioned 
>> earlier I can then generate somewhat useful documentation for Cocoa classes 
>> using pydoc.
>> 
>> A word of warning though: the PyObjC source code isn't the most 
>> approachable, the code that generates the Signature object is actually in 
>> python (callable_signature in pyobjc-core/Lib/objc/_callable_docstr.py)
>> 
> 
> Ah.  In other words, you have proposed it yourself in an external project.  I 
> thought you were saying this was something Python itself already did.

I wasn't clear enough in what I wrote.  The stdlib contains support for
positional-only arguments in Signature objects (see Lib/inspect.py, line 1472,
which says "_POSITIONAL_ONLY = _ParameterKind(0, name='POSITIONAL_ONLY')").
The __str__ of Parameter, amongst other things, says:

    if kind == _POSITIONAL_ONLY:
        if formatted is None:
            formatted = ''
        formatted = '<{}>'.format(formatted)

That is, it adds angled brackets around the names of positional-only 
parameters.  I pointed to PyObjC as an example of code that actually creates 
Signature objects with positional-only arguments, as far as I know the stdlib 
never does this because the stdlib can only create signatures for plain python 
functions and those cannot have such arguments.
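As a quick illustration (a sketch, not from the original thread — the parameter names "x" and "y" are invented), such a Signature object can be constructed by hand:

```python
import inspect

# Build a Signature whose parameters are positional-only, the way an
# external project could attach one to a callable's __signature__.
params = [
    inspect.Parameter("x", inspect.Parameter.POSITIONAL_ONLY),
    inspect.Parameter("y", inspect.Parameter.POSITIONAL_ONLY),
]
sig = inspect.Signature(params)
print(str(sig))  # the rendering of positional-only markers varies by version
```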

> In that case, I think I will stick with Guido's suggested syntax.  Consider 
> window.border in the curses module: eight positional-only parameters, each in 
> its own optional parameter group.  Adding sixteen angle-brackets to that 
> already unreadable morass will make it even worse.  But with "/" we add only 
> a single extra character, in an easy-to-find place (the end).

Using Guido's suggestion is fine by me, I agree that there is a clear risk of 
angle-bracket overload for functions with a lot of arguments. I do think that 
the __str__ for Signatures should be changed to match the convention. 

And to be clear: I'm looking forward to having Argument Clinic and 
__signature__ objects on built-in functions; "funcname(...)" in pydoc's output 
is somewhat annoying, especially for extensions where the author hasn't 
bothered to provide a docstring. That's one reason I wrote the __signature__ 
support in PyObjC in the first place (and the patch for pydoc to actually use 
the signature information).

Ronald

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Hooking into super() attribute resolution

2013-07-07 Thread Ronald Oussoren

On 7 Jul, 2013, at 17:17, Steve Dower  wrote:

> Could the same result be achieved by hooking the MRO that super uses and 
> returning a list of proxy objects?

What is the advantage over adding a hook to the class itself? That seems to be 
the right place to add such a hook, super already looks in the classes along 
the MRO and my proposal would add a formal interface for that instead of having 
super peek into the class __dict__. I have thought of using a custom mapping 
object for the tp_dict slot to intercept this, but that won't work because 
super assumes that tp_dict is an actual PyDictObject (and likely other parts of 
the interpreter do so as well).


> And then wouldn't you only really need a __getattribute__ that doesn't 
> recurse (__getlocalattribute__)? The end result may be conceptually simpler, 
> but you've thought through the edge cases better than I have. 

__getattribute_super__ already is a kind of __getlocalattribute__, the primary 
difference being that __getattribute_super__ is a staticmethod instead of an 
instance method. To be honest I'm not sure if a staticmethod is the right 
solution, I'm having a hard time determining whether this should be a class, 
instance or static method. 

Currently super(StartClass, x) basically does (assuming x is an instance):


def __getattribute__(self, name):
    mro = type(x).mro()
    idx = mro.index(StartClass) + 1   # start at the class *after* StartClass
    while idx < len(mro):
        dct = mro[idx].__dict__
        try:
            result = dct[name]
            # deal with descriptors here
            return result
        except KeyError:
            idx += 1
    return object.__getattribute__(self, name)

With my proposal 'dct' would no longer be needed and 'result = dct[name]' would 
become 'mro[idx].__getattribute_super__(mro[idx], name, x, StartClass)' (I may 
have the last argument for the call to __getattribute_super__ wrong, but that's 
the idea). Given that the first argument of __getattribute_super__ is the same 
as the class the method is looked up on, I guess the method should be a 
classmethod instead of a staticmethod. Changing that would be easy enough.
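The current behaviour can be checked against a manual MRO walk; this small self-contained sketch (classes A and B are invented for illustration) mirrors the pseudocode above:

```python
# Verify that super() walks the MRO starting after the given class and
# peeks in each class __dict__, as in the pseudocode above.
class A:
    def m(self):
        return "A"

class B(A):
    def m(self):
        return "B"

b = B()
mro = type(b).__mro__
idx = mro.index(B) + 1            # start after B, like super(B, b)
found = None
while idx < len(mro):
    if "m" in mro[idx].__dict__:
        # bind the descriptor, as super would
        found = mro[idx].__dict__["m"].__get__(b, type(b))
        break
    idx += 1

print(found(), super(B, b).m())   # both resolve to A's implementation
```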

I'm still interested in feedback on the basic idea, I'd love to hear that my 
proposal isn't necessary because there is already a way to get the behavior I'm 
looking for, although that's not going to happen ;-).

Ronald


> 
> (Apologies for the HTML top-post)

I don't mind.

PS. Does anyone know if the pep editors are away (conferences, holidays, ...)? 
I could just check in my proposal in the peps repository, but as this is my 
first PEP I'd prefer to follow the documented procedure and have someone that 
knows what he's doing look at the metadata before checking in.

> 
> Sent from my Windows Phone
> From: Ronald Oussoren
> Sent: ‎7/‎6/‎2013 0:47
> To: Ronald Oussoren
> Cc: python-dev@python.org Dev
> Subject: Re: [Python-Dev] Hooking into super() attribute resolution
> 
> I've updated the implementation in issue 18181 
> <http://bugs.python.org/issue18181> while adding some tests, and have updated 
> the proposal as well. 
> 
> The proposal has some open issues at the moment, most important of which is 
> the actual signature for the new special method; in particular I haven't been 
> able to decide if this should be an instance-, class- or static method. It is 
> a static method in the proposal and prototype, but I'm not convinced that 
> that is the right solution.
> 
> Ronald
> 
> 
> 
> 
> PEP: TODO
> Title: Hooking into super attribute resolution
> Version: $Revision$
> Last-Modified: $Date$
> Author: Ronald Oussoren 
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 12-Jun-2013
> Post-History: 2-Jul-2013, ?
> 
> 
> Abstract
> ========
> 
> In current python releases the attribute resolution of the `super class`_
> peeks in the ``__dict__`` attribute of classes on the MRO to look
> for attributes. This PEP introduces a hook that classes can use
> to override that behavior for specific classes.
> 
> 
> Rationale
> =========
> 
> Peeking in the class ``__dict__`` works for regular classes, but can
> cause problems when a class dynamically looks up attributes in a
> ``__getattribute__`` method.
> 
> The new hook makes it possible to introduce the same customization for
> attribute lookup through the `super class`_.
> 
> 
> The superclass attribute lookup hook
> ====================================
> 
> In C code
> ---------
> 
> A new slot ``tp_getattro_super`` is added to the ``PyTypeObject`` struct. The
> ``tp_getattro`` slot for super will call this slot when it is not ``NULL``,
> and will raise an exception when it is not set (which shouldn't happen because

Re: [Python-Dev] Hooking into super() attribute resolution

2013-07-08 Thread Ronald Oussoren

On 8 Jul, 2013, at 17:19, Steve Dower  wrote:

> The only real advantage is a simpler signature and more easily explained use 
> (assuming the person you're explaining it to is familiar with metaclasses, so 
> most of the hard explaining has been done).

The signature is as complex as it is to be able to call descr.__get__ with the 
correct arguments. I ended up with the current signature when I added 
__getattribute_super__ to object and removed the tp_dict peeking code from 
super's tp_getattro.

A way to get a simpler interface again would be a method that returns an 
attribute *without* performing calls to descr.__get__. That could then be used 
for both __getattribute__ and super.__getattribute__, instead of peeking in a 
class' dictionary. I must admit that I haven't thought about the ramifications 
of this (both functionally and performance-wise).  This might end up being 
easier to explain: both normal attribute resolution and super's resolution 
would end up using the same mechanism, with the differences being that super 
doesn't begin resolution at the start of the mro and ignores the instance 
__dict__.  The disadvantage is introducing a new way to affect attribute 
resolution (do I use "__getattribute__" or this new method?). 

The new interface would be something like:

@classmethod
def __getlocalname__(cls, object, name):
    pass

Or as you mentioned later as a __getlocalname__ method on the metaclass. The 
"object" argument wouldn't be necessary to reproduce current functionality, and 
isn't necessary for my use case either, but a hook for attribute resolution on 
an instance that doesn't have access to that instance feels wrong.

> 
> I'm still not sure that this isn't simply a bug in super. If the superclass's 
> metaclass provides a __getattr__ then it should probably use it and abandon 
> its own MRO traversal.

I'd have to think about this, but on first glance this would mean a change in 
the semantics that a metaclass' __getattr__ currently has.

> 
> I still haven't thought the edge cases through, and it seems like there'd be 
> some with that change, so that's where __getattribute_super__ comes in - 
> super can call it without abandoning its MRO traversal.
> 
> AFAICT, the difference between that and __getlocalattribute__ is that the 
> latter would be implemented on a metaclass while the former takes extra 
> parameters. I think this functionality is advanced enough that requiring a 
> metaclass isn't unreasonable.

I'm not necessarily opposed to a solution that requires using a metaclass, I 
already have classes with custom metaclasses in PyObjC and this wouldn't 
add that much complexity to that :-)

Ronald



Re: [Python-Dev] Hooking into super() attribute resolution

2013-07-09 Thread Ronald Oussoren

On 9 Jul, 2013, at 1:21, Steve Dower  wrote:
>> 
> 
> Except that if it's on a metaclass, the 'instance' it has access to is cls. 
> The descriptor side of things is more interesting, but I see no reason why 
> super can't do that itself, since it knows the actual instance to call 
> __get__ with. (Presumably it already does this with the __dict__ lookup, 
> since that won't call __get__ either.)
> 
> Explaining the new method is easiest if the default implementation is 
> (literally):
> 
> def __getlocalname__(self, name):
>     try:
>         return self.__dict__[name]
>     except KeyError:
>         raise AttributeError(name)
> 
> which does not do any descriptor resolution (and is only a small step from 
> simply replacing __dict__ with a custom object, which is basically where we 
> started). The only change I've really suggested is making it an instance 
> method that can be implemented on a metaclass if you want it for class 
> members.

I like this idea and will experiment with implementing this later this week.  
The only thing I'm not sure about is how to indicate that the name could not be 
found; raising an exception could end up being too expensive if the 
__getlocalname__ hook gets used in object.__getattribute__ as well. I guess 
I'll have to run benchmarks to determine if this really is a problem.

Ronald


Re: [Python-Dev] Hooking into super() attribute resolution

2013-07-15 Thread Ronald Oussoren

On 9 Jul, 2013, at 1:21, Steve Dower  wrote:

> 
> Except that if it's on a metaclass, the 'instance' it has access to is cls. 
> The descriptor side of things is more interesting, but I see no reason why 
> super can't do that itself, since it knows the actual instance to call 
> __get__ with. (Presumably it already does this with the __dict__ lookup, 
> since that won't call __get__ either.)
> 
> Explaining the new method is easiest if the default implementation is 
> (literally):
> 
> def __getlocalname__(self, name):
>     try:
>         return self.__dict__[name]
>     except KeyError:
>         raise AttributeError(name)
> 
> which does not do any descriptor resolution (and is only a small step from 
> simply replacing __dict__ with a custom object, which is basically where we 
> started). The only change I've really suggested is making it an instance 
> method that can be implemented on a metaclass if you want it for class 
> members.

I've documented this (with a different name) in the current PEP draft and am 
working on an implementation.

Using a lookup method on the metaclass has a nice side-effect: the same method 
could be used by object.__getattribute__ (aka PyObject_GenericGetAttr) as well, 
which could simplify my primary use case for this new API. 

There is a problem with that though: the type attribute cache in 
Object/typeobject.c, using that cache isn't valid if _PyType_Lookup calls a 
method instead of peeking in tp_dict (the cache cannot be in the default 
__getlocalname__ implementation because a primary use case for the cache appears 
to be avoiding walking the MRO [1]). I haven't decided yet what to do about 
this, a number of options:

* Don't use __getlocalname__ in _PyType_Lookup

* Use __getlocalname__ as a fallback after peeking in tp_dict (that is, use it 
like __getattr__ instead of __getattribute__) and only cache the attribute when 
it is fetched from tp_dict.

* Don't add a default __getlocalname__ and disable the attribute cache for 
types with a metatype that does have this method (this might be non-trivial 
because a metatype might grow a __getlocalname__ slot after instances of the 
metatype have been created; that would be ugly code but is possible)

Ronald

[1] "appears" because I haven't found documentation for the cache yet


[Python-Dev] PEP 447: hooking into super() attribute resolution

2013-07-15 Thread Ronald Oussoren
Hi,

I've posted a new update of my proposal to add a way to override the attribute 
resolution process used by super(). I've rewritten the PEP and implementation 
based on feedback by Steve Dower.

In the current edition of the proposal the hook is a method on the type, 
defined in a metaclass for types. A simple demo:

class meta (type):
    def __locallookup__(cls, name):
        return type.__locallookup__(cls, name.upper())

class S (metaclass=meta):
    def m(self):
        return 42

    def M(self):
        return "fortytwo"

class SS (S):
    def m(self):
        return 21

    def M(self):
        return "twentyone"

o = SS()
print("self", o.m())
print("super", super(SS, o).m())

The last line currently prints "super 42" and would print "super fortytwo" with 
my proposal.
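Since the hook is only a proposal, present-day Python cannot run the demo as-is, but the intended lookup semantics can be emulated with an explicit MRO walk. In this sketch the helper name "proposed_super_getattr" is invented, and the default per-class lookup is inlined because type.__locallookup__ does not exist:

```python
# Emulate the proposed super() lookup semantics in pure Python.
class meta(type):
    def __locallookup__(cls, name):
        return cls.__dict__[name.upper()]       # may raise KeyError

class S(metaclass=meta):
    def m(self):
        return 42

    def M(self):
        return "fortytwo"

class SS(S):
    def m(self):
        return 21

    def M(self):
        return "twentyone"

def proposed_super_getattr(start, obj, name):
    """What super(start, obj).name would resolve to under the proposal."""
    mro = type(obj).__mro__
    for klass in mro[mro.index(start) + 1:]:
        lookup = getattr(type(klass), "__locallookup__", None)
        try:
            if lookup is not None:
                attr = lookup(klass, name)      # the proposed hook
            else:
                attr = klass.__dict__[name]     # current behaviour
        except KeyError:
            continue
        if hasattr(attr, "__get__"):
            return attr.__get__(obj, type(obj))  # invoke the descriptor
        return attr
    raise AttributeError(name)

o = SS()
print("super", proposed_super_getattr(SS, o, "m")())  # prints: super fortytwo
```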

A major open issue: the __locallookup__ method could also be used for normal 
attribute resolution, but that probably causes issues with attribute caching 
(see the PEP for details).  I haven't worked out yet if it is worthwhile to 
tweak the proposal to fix the caching issues (even though the proposal 
currently says that PyObject_GenericGetAttr will use the new method). Fixing 
the caching issue definitely would help in my primary use case by reducing code 
duplication, but might end up making the API unnecessarily complex.

Ronald



PEP: 447
Title: Hooking into super attribute resolution
Version: $Revision$
Last-Modified: $Date$
Author: Ronald Oussoren 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Jun-2013
Post-History: 2-Jul-2013, ?


Abstract
========

In current python releases the attribute resolution of the `super class`_
peeks in the ``__dict__`` attribute of classes on the MRO to look
for attributes. This PEP introduces a hook that classes can use
to override that behavior for specific classes.


Rationale
=========

Peeking in the class ``__dict__`` works for regular classes, but can
cause problems when a class dynamically looks up attributes in a
``__getattribute__`` method.

This new hook makes it possible to affect attribute lookup for both normal
attribute lookup and lookup through the `super class`_.


The superclass attribute lookup hook
====================================

Both ``super.__getattribute__`` and ``object.__getattribute__`` (or
`PyObject_GenericGetAttr`_ in C code) walk an object's MRO and peek in the
class' ``__dict__`` to look up attributes. A way to affect this lookup is
using a method on the meta class for the type, that by default looks up
the name in the class ``__dict__``.

In Python code
--------------

A meta type can define a method ``__locallookup__`` that is called during
attribute resolution by both ``super.__getattribute__`` and 
``object.__getattribute__``::

class MetaType (type):

    def __locallookup__(cls, name):
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None

The example method above is pseudo code for the implementation of this method on
`type`_ (the actual implementation is in C, and doesn't suffer from the
recursion problem in this example).

The ``__locallookup__`` method has as its arguments a class and the name of the
attribute that is looked up. It should return the value of the attribute
without invoking descriptors, or raise `AttributeError`_ when the name cannot
be found.


In C code
---------

A new slot ``tp_locallookup`` is added to the ``PyTypeObject`` struct, this slot
corresponds to the ``__locallookup__`` method on `type`_.

The slot has the following prototype::

PyObject* (*locallookupfunc)(PyTypeObject* cls, PyObject* name);

This method should look up *name* in the namespace of *cls*, without looking at
superclasses, and should not invoke descriptors. The method returns ``NULL``
without setting an exception when the *name* cannot be found, and returns a new
reference otherwise (not a borrowed reference).


Usage of this hook
------------------

The new method will be defined for `type`_, and will peek in the class 
dictionary::

static PyObject*
type_locallookup(PyTypeObject* cls, PyObject* name)
{
    PyObject* res;
    if (!cls->tp_dict) {
        return NULL;
    }

    res = PyDict_GetItem(cls->tp_dict, name);
    Py_XINCREF(res);
    return res;
}

The new method will be used by both `PyObject_GenericGetAttr`_ and
``super.__getattribute__`` instead of peeking in a type's ``tp_dict``.


Alternative proposals
---------------------

``__getattribute_super__``
..........................

An earlier version of this PEP used the following static method on classes::

def __getattribute_super__(cls, name, object, owner): pass

This method performed name lookup as well as invoking descriptors and was
necessarily limited to working onl

Re: [Python-Dev] PEP 447: hooking into super() attribute resolution

2013-07-15 Thread Ronald Oussoren

On 15 Jul, 2013, at 18:49, Guido van Rossum  wrote:
> 
> 
>> A major open issue: the __locallookup__ method could also be used for normal 
>> attribute resolution, but that probably causes issues with attribute caching 
>> (see the PEP for details).  I haven't worked out yet if it is worthwhile to 
>> tweak the proposal to fix the caching issues (even though the proposal 
>> currently says that PyObject_GenericGetAttr will use the new method). Fixing 
>> the caching issue definitely would help in my primary use case by reducing 
>> code duplication, but might end up making the API unnecessarily complex.
> 
> Hm. It looks like the functionality you actually want to hook into is
> in _PyType_Lookup().

That's right. I didn't want to get too technical, but forgot to consider
who is reading this :-)

> 
> I think that in this case the PEP's acceptance will be highly
> correlated with your ability to produce an actual patch that (a)
> implements exactly the functionality you want (never mind whether it
> matches the exact API currently proposed), and (b) doesn't slow down
> classes that don't provide this hook.

I'd only really need the functional change to super(), but I am working on
a patch that also changes _PyType_Lookup. I think I can avoid a slowdown
by making tp_locallookup optional and only "punishing" those classes
that use the new slot.  A minor complication is that I'll have to change
the interface of _PyType_Lookup: it currently returns a borrowed reference
and will return a new reference. That's just careful bookkeeping though.

> 
> Other than that, I think that it's a laudable attempt at generalizing,
> and I hope you solve the implementation conundrum.

I was pleasantly surprised by how the changed API was cleaner and applicable
to _PyType_Lookup as well. I guess that means I'm on the right path.

Ronald



Re: [Python-Dev] [Python-checkins] cpython: Issue #18393: Remove use of deprecated API on OSX

2013-07-15 Thread Ronald Oussoren

On 15 Jul, 2013, at 18:43, Zachary Ware  wrote:

> On Mon, Jul 15, 2013 at 11:32 AM, ronald.oussoren
>  wrote:
>> http://hg.python.org/cpython/rev/ccbaf6762b54
>> changeset:   84634:ccbaf6762b54
>> user:Ronald Oussoren 
>> date:Mon Jul 15 18:32:09 2013 +0200
>> summary:
>>  Issue #18393: Remove use of deprecated API on OSX
>> 
>> The "Gestalt" function on OSX is deprecated (starting with OSX 10.8),
>> remove its usage from the stdlib. The patch removes a number of private
> 
> I believe this means that Lib/test/leakers/test_gestalt.py can be
> removed as well.

Interesting... test_gestalt.py cannot have worked in Py3k at all. I've removed 
the file.

Thanks,

   Ronald



Re: [Python-Dev] Why does PEP 8 advise against explicit relative imports?

2013-07-16 Thread Ronald Oussoren

On 16 Jul, 2013, at 14:02, Thomas Wouters  wrote:

> On Tue, Jul 16, 2013 at 1:40 PM, Nick Coghlan  wrote:
> 
>> PEP 8 advises developers to use absolute imports rather than explicit
>> relative imports.
>> 
>> Why? Using absolute imports couples the internal implementation of a
>> package to its public name - you can't just change the top level
>> directory name any more, you have to go through and change all the
>> absolute imports as well. You also can't easily vendor a package that
>> uses absolute imports inside another project either, since all the
>> absolute imports will break.
>> 
> 
> The problem with relative imports (both explicit and implicit) is that it
> makes it less obvious when you are importing a module under the wrong name.
> If a package ends up in sys.path directly (for example, by executing
> something that lives inside it directly) then an explicit relative import
> directly in the package will fail, but an explicit relative import in a
> sub-package won't, and you can end up with the subtly confusing mess of
> multiple imports of the same .py file under different names.

That's only a problem for subpackages (that is, for example when
distutils.commands ends up being importable as commands); explicit
relative imports (``from .other import attr``) don't work in toplevel modules
or scripts because the leading dot means "the current package".

I tend to use explicit relative imports in my code when that increases
readability by reducing unnecessary duplication. That is, readability
tends to suffer when you have a number of lines like
"from package.module1 import name".

Ronald
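The rename-resilience point can be demonstrated with a throwaway package; everything in this sketch — the package name "pkg" and its modules — is invented for illustration:

```python
import os
import sys
import tempfile

# Build a disposable package on disk whose module uses an explicit
# relative import; renaming the "pkg" directory would break the absolute
# form "from pkg.module1 import helper" but not the relative one.
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, "pkg")
os.mkdir(pkgdir)
open(os.path.join(pkgdir, "__init__.py"), "w").close()
with open(os.path.join(pkgdir, "module1.py"), "w") as f:
    f.write("def helper():\n    return 'helped'\n")
with open(os.path.join(pkgdir, "module2.py"), "w") as f:
    # ".module1" means "module1 in the current package", whatever the
    # package happens to be called
    f.write("from .module1 import helper\n")

sys.path.insert(0, tmp)
from pkg.module2 import helper
print(helper())  # prints: helped
```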


Re: [Python-Dev] dict __contains__ raises TypeError on unhashable input

2013-07-20 Thread Ronald Oussoren

On 20 Jul, 2013, at 1:47, Ethan Furman  wrote:

> While working on issue #18508 I stumbled across this:
> 
> Traceback (most recent call last):
> ...
>  File "/usr/local/lib/python3.4/enum.py", line 417, in __new__
>if value in cls._value2member_map:
> TypeError: unhashable type: 'list'
> 
> I'll wrap it in a try-except block, but I must admit I was surprised the 
> answer wasn't False.  After all, if the input is unhashable then obviously 
> it's not in the dict; furthermore, if I were to compare the number 5 with a 
> set() I would get False, not a TypeMismatch error, and dict keys are 
> basically done by equality, the hash is just (?) a speed-up.

Not quite, there are some objects that compare equal without both of them being 
hashable:

>>> frozenset([1,2]) == set([1,2])
True
>>> dct = { frozenset([1,2]): 1 }
>>> frozenset([1,2]) in dct
True
>>> set([1,2]) in dct
Traceback (most recent call last):
  File "", line 1, in 
TypeError: unhashable type: 'set'

It would be strange if the last test would return False instead of raising an 
error.
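The try/except wrap mentioned in the quoted message can be as small as this sketch (the helper name "safe_contains" is invented):

```python
# Treat unhashable candidates as "not present" instead of letting the
# TypeError escape, as the quoted message suggests doing.
def safe_contains(mapping, key):
    try:
        return key in mapping
    except TypeError:        # unhashable key
        return False

dct = {frozenset([1, 2]): 1}
print(safe_contains(dct, frozenset([1, 2])))  # True
print(safe_contains(dct, set([1, 2])))        # False instead of TypeError
```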

Ronald


> 
> --
> ~Ethan~



Re: [Python-Dev] Building a Faster Python

2013-07-21 Thread Ronald Oussoren

On 22 Jul, 2013, at 3:01, Larry Hastings  wrote:

> On 07/21/2013 04:36 PM, Raymond Hettinger wrote:
>> Our current Mac OS X builds use GCC-4.2.
>> 
>> On Python2.7, I ran a comparison of gcc-4.2.1 builds
>> versus gcc-4.8.1 and found that the latter makes a much
>> faster Python.  PyBench2.0 shows the total running time
>> dropping from 5653ms to 4571ms.  The code is uniformly
>> better in just about every category.
>> 
> 
> I know that newer Microsoft compilers tend to drop support for older 
> operating systems, and often that means the binaries simply won't work on 
> those operating systems.  (Yes, technically it's the libraries that break the 
> older code.)
> 
> Are Apple's compilers the same?  Or do they tend to support all the old 
> versions of OS X?

The compilers included with Xcode 4 cannot generate code for PowerPC machines; 
other than that it should be possible to generate code for older machines with 
the latest Xcode version.  I have a Python installation on my machine that I 
use to generate app distributions for a 10.5 machine, and AFAIK this would also 
work for deploying to 10.4.

The hard part is usually convincing autotools-using projects to not use APIs 
that are available on the build machine but not on the deployment machine.

Ronald


Re: [Python-Dev] Building a Faster Python

2013-07-21 Thread Ronald Oussoren

On 22 Jul, 2013, at 1:36, Raymond Hettinger  wrote:

> Our current Mac OS X builds use GCC-4.2.
> 
> On Python2.7, I ran a comparison of gcc-4.2.1 builds
> versus gcc-4.8.1 and found that the latter makes a much
> faster Python.  PyBench2.0 shows the total running time
> dropping from 5653ms to 4571ms.  The code is uniformly
> better in just about every category.

Have you tried using clang from the latest devtools as well?

The OSX binary installers are built using the developer tools from
Apple, which happen to use gcc 4.2 on the machine used to build at
least the 32-bit binary installer (as that's the latest Xcode
that includes PPC support). 

FWIW I'd like to test with "clang -O4" as well (this performs
link-time optimization), I've seen some speedup with other projects 
and this might help with CPython's speed as well.

Ronald



Re: [Python-Dev] Building a Faster Python

2013-07-21 Thread Ronald Oussoren

On 22 Jul, 2013, at 1:46, Ben Hoyt  wrote:

> > PyBench2.0 shows the total running time dropping from 5653ms to 4571ms.
> 
> That's very cool -- a significant improvement. Is this the kind of change 
> that could go into 2.7.6 binaries?

I'd prefer not to do that (but I don't build the installers anymore).  The 
installers for OSX are built using the system compiler; using a different 
compiler makes it harder to build the installer.

I don't even know if upstream GCC could easily be used for the binary 
installers; does GCC 4.8 support building FAT binaries in its compiler driver?

Ronald



Re: [Python-Dev] cpython (2.7): Issue #18441: Make test.support.requires('gui') skip when it should.

2013-07-21 Thread Ronald Oussoren

On 22 Jul, 2013, at 7:35, Ned Deily  wrote:

> In article <51ecae41.5060...@udel.edu>, Terry Reedy  
> wrote:
>> 
> 
> This is exactly what Issue8716 was about.  The buildbot has no way of 
> knowing ahead of time whether a test will cause a crash or not.  Yes, Tk 
> should not crash but it does in some cases.  Speaking of #8716, that 
> reminds me that there is an open issue with it (documented in 
> Issue17496).  There is the start of a patch there to use a more general 
> approach to testing for a working Tk that might be applicable on all 
> platforms.

Issue17496 contains a patch that might fix the crashing problem
for OSX (and possibly other Unix-y platforms). I haven't been able to test 
this in an environment that's similar enough to the buildbots (that is, a 
system where the user running the test does not have GUI access).

That patch starts wish (the Tk windowing shell) in a subprocess, and skips tests
when that doesn't work (on OSX that would be because Tk crashed).

Ronald




Re: [Python-Dev] Building a Faster Python

2013-07-22 Thread Ronald Oussoren

On 22 Jul, 2013, at 9:32, Maciej Fijalkowski  wrote:

> On Mon, Jul 22, 2013 at 9:32 AM, Maciej Fijalkowski  wrote:
>> On Mon, Jul 22, 2013 at 8:15 AM, Antoine Pitrou  wrote:
>>> On Sun, 21 Jul 2013 16:36:35 -0700
>>> Raymond Hettinger  wrote:
 Our current Mac OS X builds use GCC-4.2.
 
 On Python2.7, I ran a comparison of gcc-4.2.1 builds
 versus gcc-4.8.1 and found that the latter makes a much
 faster Python.  PyBench2.0 shows the total running time
 dropping from 5653ms to 4571ms.  The code is uniformly
 better in just about every category.
>>> 
>>> You could try running the benchmarks suite to see what that gives:
>>> http://hg.python.org/benchmarks/
>>> 
>>> Regards
>>> 
>>> Antoine.
>> 
>> or pypy benchmark suite which is more comprehensive for python 2.7
>> (http://bitbucket.org/pypy/benchmarks)
> 
> Besides, is there any reason not to use clang by default on OS X?

The 32-bit installer contains binaries that work on PPC, that's why those are 
built using an older version of Xcode. I'd have to check if that version of 
Xcode supports clang, and if that version of clang is good enough.

The "intel" installer can, and should, be built with clang (and preferably with 
the most recent Xcode release to ensure that the latest supported compiler is 
used). 

Note that the CPython configure script, and distutils, already use clang by
default if you use a recent Xcode, but that's primarily because gcc is llvm-gcc 
when you use Xcode and llvm-gcc is broken (it miscompiles at least the unicode 
implementation in Python 3.3); this overrides the default behavior of configure 
(using gcc whenever it is available unless the user explicitly overrides it).

Ronald



Re: [Python-Dev] Building a Faster Python

2013-07-22 Thread Ronald Oussoren

On 22 Jul, 2013, at 17:08, David Malcolm  wrote:

> On Mon, 2013-07-22 at 09:32 +0200, Maciej Fijalkowski wrote:
>> On Mon, Jul 22, 2013 at 9:32 AM, Maciej Fijalkowski  wrote:
>>> On Mon, Jul 22, 2013 at 8:15 AM, Antoine Pitrou  wrote:
 On Sun, 21 Jul 2013 16:36:35 -0700
 Raymond Hettinger  wrote:
> Our current Mac OS X builds use GCC-4.2.
> 
> On Python2.7, I ran a comparison of gcc-4.2.1 builds
> versus gcc-4.8.1 and found that the latter makes a much
> faster Python.  PyBench2.0 shows the total running time
> dropping from 5653ms to 4571ms.  The code is uniformly
> better in just about every category.
 
 You could try running the benchmarks suite to see what that gives:
 http://hg.python.org/benchmarks/
 
 Regards
 
 Antoine.
>>> 
>>> or pypy benchmark suite which is more comprehensive for python 2.7
>>> (http://bitbucket.org/pypy/benchmarks)
>> 
>> Besides, is there any reason not to use clang by default on OS X?
> 
> How did this thread go from:
>  "for OS X, GCC 4.8.1 gives you significantly faster machine code
>   than the system GCC 4.2.1"
> to
>  "let's just use clang"
> ?

Because we use the system compiler for building the official binary packages.

I'm not looking forward to bootstrapping GCC multiple times[*] just to be able
to build a slightly faster Python.  And more so because you have to be very
careful when using an alternative compiler when building the installer; it is
very easy to end up with a build that others cannot use to build extensions
because they don't have /Users/ronald/Tools/Compiler/gcc-4.8/bin/gcc.

> 
> (I should declare that I've been hacking on GCC for the last few months,
> so I have an interest in this)

It would still be interesting to know which compiler would generate the
fastest code for CPython.  Apple tends to claim that clang generates better
code than GCC, but AFAIK they compare the latest clang with the latest
version of GCC that they used to ship, which is ancient by now.

Ronald

[*] multiple times due to fat binaries.



Re: [Python-Dev] [Python-checkins] cpython: Issue #18520: Add a new PyStructSequence_InitType2() function, same than

2013-07-22 Thread Ronald Oussoren

On 23 Jul, 2013, at 2:01, Benjamin Peterson  wrote:

> We've cheerfully broken the ABI before on minor releases, though if
> it's part of the stable ABI, we can't be cavaliar about that anymore.

It is not part of the stable ABI. Given that the implementation of 
PyStructSequence_InitType() in the patch just calls PyStructSequence_InitType2()
and ignores the return value, you could change the return value of ..InitType().

This may or may not break existing extensions using the function (depending on
platform ABI details; AFAIK this is not a problem on x86/x86_64), but reusing
extensions across Python feature releases is not supported anyway.  There are
no problems when compiling code: most C compilers won't even warn about ignored
return values unless you explicitly ask for it.

Ronald

> 
> 2013/7/22 Victor Stinner :
>> "Add a new PyStructSequence_InitType2()"
>> 
>> I added a new function because I guess that it would break the API (and ABI)
>> to change the return type of a function in a minor release.
>> 
>> Tell me if you have a better name than PyStructSequence_InitType2() ;-)
>> 
>> "Ex" suffix is usually used when parameters are added. It is not the case
>> here.
>> 
>> Victor
>> 
>> Le 22 juil. 2013 23:59, "victor.stinner"  a
>> écrit :
>>> 
>>> http://hg.python.org/cpython/rev/fc718c177ee6
>>> changeset:   84793:fc718c177ee6
>>> user:Victor Stinner 
>>> date:Mon Jul 22 22:24:54 2013 +0200
>>> summary:
>>>  Issue #18520: Add a new PyStructSequence_InitType2() function, same than
>>> PyStructSequence_InitType() except that it has a return value (0 on
>>> success,
>>> -1 on error).
>>> 
>>> * PyStructSequence_InitType2() now raises MemoryError on memory
>>> allocation failure
>>> * Fix also some calls to PyDict_SetItemString(): handle error
>>> 
>>> files:
>>>  Include/pythonrun.h|   2 +-
>>>  Include/structseq.h|   2 +
>>>  Misc/NEWS  |   4 +++
>>>  Modules/_lsprof.c  |  10 ---
>>>  Modules/grpmodule.c|  11 ++--
>>>  Modules/posixmodule.c  |  24 --
>>>  Modules/pwdmodule.c|   5 ++-
>>>  Modules/resource.c |   9 --
>>>  Modules/signalmodule.c |   7 +++--
>>>  Modules/spwdmodule.c   |   8 --
>>>  Modules/timemodule.c   |   5 ++-
>>>  Objects/floatobject.c  |   9 --
>>>  Objects/longobject.c   |   6 +++-
>>>  Objects/structseq.c|  37 +
>>>  Python/pythonrun.c |   3 +-
>>>  Python/sysmodule.c |  23 -
>>>  Python/thread.c|   6 +++-
>>>  17 files changed, 117 insertions(+), 54 deletions(-)
>>> 
>>> 
>>> diff --git a/Include/pythonrun.h b/Include/pythonrun.h
>>> --- a/Include/pythonrun.h
>>> +++ b/Include/pythonrun.h
>>> @@ -197,7 +197,7 @@
>>> PyAPI_FUNC(void) _PyExc_Init(PyObject * bltinmod);
>>> PyAPI_FUNC(void) _PyImportHooks_Init(void);
>>> PyAPI_FUNC(int) _PyFrame_Init(void);
>>> -PyAPI_FUNC(void) _PyFloat_Init(void);
>>> +PyAPI_FUNC(int) _PyFloat_Init(void);
>>> PyAPI_FUNC(int) PyByteArray_Init(void);
>>> PyAPI_FUNC(void) _PyRandom_Init(void);
>>> #endif
>>> diff --git a/Include/structseq.h b/Include/structseq.h
>>> --- a/Include/structseq.h
>>> +++ b/Include/structseq.h
>>> @@ -24,6 +24,8 @@
>>> #ifndef Py_LIMITED_API
>>> PyAPI_FUNC(void) PyStructSequence_InitType(PyTypeObject *type,
>>>PyStructSequence_Desc *desc);
>>> +PyAPI_FUNC(int) PyStructSequence_InitType2(PyTypeObject *type,
>>> +   PyStructSequence_Desc *desc);
>>> #endif
>>> PyAPI_FUNC(PyTypeObject*) PyStructSequence_NewType(PyStructSequence_Desc
>>> *desc);
>>> 
>>> diff --git a/Misc/NEWS b/Misc/NEWS
>>> --- a/Misc/NEWS
>>> +++ b/Misc/NEWS
>>> @@ -10,6 +10,10 @@
>>> Core and Builtins
>>> -
>>> 
>>> +- Issue #18520: Add a new PyStructSequence_InitType2() function, same
>>> than
>>> +  PyStructSequence_InitType() except that it has a return value (0 on
>>> success,
>>> +  -1 on error).
>>> +
>>> - Issue #15905: Fix theoretical buffer overflow in handling of
>>> sys.argv[0],
>>>   prefix and exec_prefix if the operation system does not obey
>>> MAXPATHLEN.
>>> 
>>> diff --git a/Modules/_lsprof.c b/Modules/_lsprof.c
>>> --- a/Modules/_lsprof.c
>>> +++ b/Modules/_lsprof.c
>>> @@ -884,10 +884,12 @@
>>> PyDict_SetItemString(d, "Profiler", (PyObject *)&PyProfiler_Type);
>>> 
>>> if (!initialized) {
>>> -PyStructSequence_InitType(&StatsEntryType,
>>> -  &profiler_entry_desc);
>>> -PyStructSequence_InitType(&StatsSubEntryType,
>>> -  &profiler_subentry_desc);
>>> +if (PyStructSequence_InitType2(&StatsEntryType,
>>> +   &profiler_entry_desc) < 0)
>>> +return NULL;
>>> +if (PyStructSequence_InitType2(&StatsSubEntryType,
>>> +   &profiler_subentry_desc) < 0)
>>> +return NULL;
>>> }
>>> Py_INCREF((

Re: [Python-Dev] tuple index out of range

2013-07-23 Thread Ronald Oussoren

On 23 Jul, 2013, at 12:17, Nicholas Hart  wrote:

> Hi,
> 
> I am new to this list and to troubleshooting python.  I hope someone can help 
> me.  I am getting this tuple index out of range error while running a test 
> call to my python code.  Not sure what this error really means and was hoping 
> someone might shed some light on how to fix this.  Also was wondering why 
> much of my .py files are not getting compiled to .pyc upon first run.. is 
> this unusual or need I not worry?  
> 
> Running python 2.5.2 on fedora.

Nick,

This list is focussed on the development of Python, not on development with 
Python.  The python-list list is more appropriate for asking questions like 
yours.

Regards,

  Ronald

> 
> Thanks,
> Nick
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com



Re: [Python-Dev] cpython (3.3): return NULL here

2013-07-23 Thread Ronald Oussoren

On 23 Jul, 2013, at 17:36, Christian Heimes  wrote:

> Am 23.07.2013 17:10, schrieb Benjamin Peterson:
>>> PyErr_SetFromErrno() already and always returns NULL. Or do you prefer
>>> to return NULL explicitly?
>> 
>> It might always return NULL, but the compiler sees (PyObject *)NULL
>> when this function returns dl_funcptr.
> 
> Oh, you are right. I must have missed the compiler warning. How about we
> turn type return and type assignment warnings into fatal errors?

That's probably possible with a '-Werror=' argument. But please consider
issue 18211 before unconditionally adding such a flag, as that issue mentions
that new compiler flags also get used when compiling third-party extensions.

I guess there needs to be (yet) another CFLAGS_xxx variable in the Makefile that
gets added to $(CFLAGS) during the build of Python itself, but is ignored by
distutils and sysconfig.
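A sketch of what such a split could look like (the variable and target names here are illustrative, not the actual Makefile contents):

```make
# Flags used only while compiling the interpreter itself; they are not
# recorded in the CFLAGS that distutils/sysconfig hand to third-party
# extension builds.
CFLAGS_NODIST=  -Werror=return-type
PY_CORE_CFLAGS= $(CFLAGS) $(CFLAGS_NODIST)

Objects/typeobject.o: $(srcdir)/Objects/typeobject.c
	$(CC) -c $(PY_CORE_CFLAGS) -o $@ $<
```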

Ronald

> 
> Christian
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com



Re: [Python-Dev] [Python-checkins] cpython (3.3): return NULL here

2013-07-24 Thread Ronald Oussoren

On 24 Jul, 2013, at 8:43, Gregory P. Smith  wrote:

> 
> On Tue, Jul 23, 2013 at 8:46 AM, Ronald Oussoren  
> wrote:
> 
> On 23 Jul, 2013, at 17:36, Christian Heimes  wrote:
> 
> > Am 23.07.2013 17:10, schrieb Benjamin Peterson:
> >>> PyErr_SetFromErrno() already and always returns NULL. Or do you prefer
> >>> to return NULL explicitly?
> >>
> >> It might always return NULL, but the compiler sees (PyObject *)NULL
> >> when this function returns dl_funcptr.
> >
> > Oh, you are right. I must have missed the compiler warning. How about we
> > turn type return and type assignment warnings into fatal errors?
> 
> That's probably possible with a '-Werror=' argument. But please consider
> issue 18211 before unconditionally adding such a flag, as that issue mentions
> new compiler flags also get used when compiling 3th-party extensions.
> 
> I guess there needs to be (yet) another CFLAGS_xxx variable in the Makefile 
> that
> gets added to $(CFLAGS) during the build of Python itself, but is ignored by
> distutils and sysconfig.
> 
> It seems fair to turn those on in 3.4 and require that third party extensions 
> clean up their code when porting from 3.3 to 3.4.

In this case it's "just" code cleanup; the issue I filed (see above) is for
another -Werror flag that causes compile errors with some valid C99 code that
isn't valid C89.  That's good for CPython itself because its source code is
explicitly C89, but is not good when building third-party extensions.

A proper fix requires tweaking the configure script, Makefile and distutils,
and that's not really a fun prospect ;-)

Ronald

> 
> 
> Ronald
> 
> >
> > Christian
> >
> > ___
> > Python-Dev mailing list
> > Python-Dev@python.org
> > http://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe: 
> > http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com
> 
> ___
> Python-checkins mailing list
> python-check...@python.org
> http://mail.python.org/mailman/listinfo/python-checkins
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com



Re: [Python-Dev] Daemon creation code in the standard library

2013-07-26 Thread Ronald Oussoren

On 25 Jul, 2013, at 4:18, Ben Finney  wrote:

> Ben Finney  writes:
> 
>> Work continues on the PEP 3143-compatible ‘python-daemon’, porting it to
>> Python 3 and aiming for inclusion in the standard library.

At first glance the library appears to close all open files, with an option
to exclude some specific file descriptors (that is, you need to pass a list
of files that shouldn't be closed). 

That makes it a lot harder to do some initialization before daemonizing.
I prefer to perform at least some initialization early in program startup to
be able to give sensible error messages. I've had too many initscripts that
claimed to have started a daemon successfully, only to have that daemon stop
right away because it noticed a problem right after it detached itself.

Ronald

> 
> At PyPI http://pypi.python.org/pypi/python-daemon/>, and
> development co-ordinated at Alioth
> https://alioth.debian.org/projects/python-daemon/>.
> 
>> Interested parties are invited to join us on the discussion forums
> 
> The correct link for the ‘python-daemon-devel’ forum is
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/python-daemon-devel>.
> For announcements only, we have 
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/python-daemon-announce>.
> 
> -- 
> \“This sentence contradicts itself — no actually it doesn't.” |
>  `\   —Douglas Hofstadter |
> _o__)  |
> Ben Finney
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com



Re: [Python-Dev] Official github mirror for CPython?

2013-07-26 Thread Ronald Oussoren

On 26 Jul, 2013, at 9:50, Antoine Pitrou  wrote:

> Le Fri, 26 Jul 2013 09:31:50 +1000,
> Nick Coghlan  a écrit :
>> 
>> To be honest, if people are going to spend time tinkering with our VCS
>> infrastructure, one of the most interesting things we could do is
>> explore what would be involved in setting up RhodeCode on
>> hg.python.org :)
>> 
>> (For those that haven't seen it, RhodeCode seems broadly comparable to
>> BitBucket feature wise, but because of the way it is licensed, the
>> source code is freely available to all, and running your own instance
>> is free-as-in-beer for non-profits and open source projects).
> 
> By "freely available", do you mean actual open source / free software?

It appears to be GPLv3, with a for-pay enterprise edition. The latter is
free-as-in-beer for non-profits and open source projects.

Ronald

(See 
)



[Python-Dev] PEP 446: cloexec flag on os.open

2013-07-29 Thread Ronald Oussoren
Victor,

PEP 446 mentions that a cloexec flag gets added to os.open. This API already 
has a way to specify this: the O_CLOEXEC bit in the flags argument. A new 
cloexec parameter is nicely consistent with the other APIs, but introduces a 
second way to set that flag.

What will the following calls do?:

os.open(path, os.O_RDONLY|os.O_CLOEXEC, cloexec=False)
os.open(path, os.O_RDONLY, cloexec=True)

The PEP doesn't specify this, but the implementation for PEP 446 in issue 17036 
basically ignores the cloexec argument in the first call and adds O_CLOEXEC in 
the second call. That can lead to confusing behavior when the flags argument to 
os.open is passed from elsewhere (e.g. a wrapper around os.open that passes in 
arguments and just overrides the cloexec argument).

It might be better to just drop the cloexec flag to os.open and make 
os.O_CLOEXEC an alias for os.O_NOINHERIT on Windows.
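For reference, the flags-only spelling that already exists today (a sketch; ``os.O_CLOEXEC`` is new in Python 3.3 and only available on platforms that support it, so this is POSIX-only):

```python
import fcntl
import os
import tempfile

# Request close-on-exec through the flags argument alone -- the existing
# spelling that a separate cloexec parameter would duplicate.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(path, os.O_RDONLY | os.O_CLOEXEC)
fd_flags = fcntl.fcntl(fd, fcntl.F_GETFD)
print(bool(fd_flags & fcntl.FD_CLOEXEC))  # True: fd won't survive exec()

os.close(fd)
os.unlink(path)
```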

Ronald


[Python-Dev] PEP 447: add type.__locallookup__

2013-07-29 Thread Ronald Oussoren
Hi,

This PEP proposes to add a __locallookup__ slot to type objects,
which is used by _PyType_Lookup and super_getattro instead of peeking
in the tp_dict of classes.  The PEP text explains why this is needed.

Differences with the previous version:

* Better explanation of why this is a useful addition

* type.__locallookup__ is no longer optional.

* I've added benchmarking results using pybench.
  (using the patch attached to issue 18181)

Ronald




PEP: 447
Title: Add __locallookup__ method to metaclass
Version: $Revision$
Last-Modified: $Date$
Author: Ronald Oussoren 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 12-Jun-2013
Post-History: 2-Jul-2013, 15-Jul-2013, 29-Jul-2013


Abstract


Currently ``object.__getattribute__`` and ``super.__getattribute__`` peek
in the ``__dict__`` of classes on the MRO for a class when looking for
an attribute. This PEP adds a ``__locallookup__`` method to the
metaclass that can be used to override this behavior.

Rationale
=

It is currently not possible to influence how the `super class`_ looks
up attributes (that is, ``super.__getattribute__`` unconditionally
peeks in the class ``__dict__``), and that can be problematic for
dynamic classes that can grow new methods on demand.

The ``__locallookup__`` method makes it possible to dynamically add
attributes even when looking them up using the `super class`_.

The new method affects ``object.__getattribute__`` (and
`PyObject_GenericGetAttr`_) as well for consistency.

Background
--

The current behavior of ``super.__getattribute__`` causes problems for
classes that are dynamic proxies for other (non-Python) classes or types,
an example of which is `PyObjC`_. PyObjC creates a Python class for every
class in the Objective-C runtime, and looks up methods in the Objective-C
runtime when they are used. This works fine for normal access, but doesn't
work for access with ``super`` objects. Because of this PyObjC currently
includes a custom ``super`` that must be used with its classes.

The API in this PEP makes it possible to remove the custom ``super`` and
simplifies the implementation because the custom lookup behavior can be
added in a central location.


The superclass attribute lookup hook


Both ``super.__getattribute__`` and ``object.__getattribute__`` (or
`PyObject_GenericGetAttr`_ in C code) walk an object's MRO and peek in the
class' ``__dict__`` to look up attributes. A way to affect this lookup is
using a method on the meta class for the type, that by default looks up
the name in the class ``__dict__``.

In Python code
--

A meta type can define a method ``__locallookup__`` that is called during
attribute resolution by both ``super.__getattribute__`` and 
``object.__getattribute__``::

    class MetaType(type):
        def __locallookup__(cls, name):
            try:
                return cls.__dict__[name]
            except KeyError:
                raise AttributeError(name) from None

The ``__locallookup__`` method has as its arguments a class and the name of
the attribute that is looked up. It should return the value of the attribute
without invoking descriptors, or raise `AttributeError`_ when the name cannot
be found.

The `type`_ class provides a default implementation for ``__locallookup__``
that looks up the name in the class dictionary.

Example usage
.

The code below implements a silly metaclass that redirects attribute lookup
to uppercase versions of names::

    class UpperCaseAccess (type):
        def __locallookup__(cls, name):
            return cls.__dict__[name.upper()]

    class SillyObject (metaclass=UpperCaseAccess):
        def m(self):
            return 42

        def M(self):
            return "fortytwo"

    obj = SillyObject()
    assert obj.m() == "fortytwo"
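The MRO walk that consumes this hook can be simulated in plain Python (a sketch only: CPython does not call ``__locallookup__`` today, and the ``type_lookup`` helper below is a hypothetical stand-in for ``_PyType_Lookup``):

```python
class MetaType(type):
    # Default-style hook: a local (non-inherited, descriptor-free) lookup.
    def __locallookup__(cls, name):
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None

def type_lookup(cls, name):
    # Simulated _PyType_Lookup: walk the MRO and delegate the per-class
    # lookup to the metaclass hook instead of peeking in tp_dict.
    for c in cls.__mro__:
        try:
            return type(c).__locallookup__(c, name)
        except AttributeError:
            continue
    raise AttributeError(name)

class Base(metaclass=MetaType):
    def spam(self):
        return "base"

class Child(Base):
    pass

# The attribute is found on Base during the walk, via the hook.
assert type_lookup(Child, "spam") is Base.__dict__["spam"]
```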


In C code
-

A new slot ``tp_locallookup`` is added to the ``PyTypeObject`` struct; this slot
corresponds to the ``__locallookup__`` method on `type`_.

The slot has the following prototype::

    PyObject* (*locallookupfunc)(PyTypeObject* cls, PyObject* name);

This method should look up *name* in the namespace of *cls*, without looking
at superclasses, and should not invoke descriptors. The method returns ``NULL``
without setting an exception when the *name* cannot be found, and returns a new
reference otherwise (not a borrowed reference).

Use of this hook by the interpreter
---

The new method is required for metatypes and as such is defined on `type`_.
Both ``super.__getattribute__`` and
``object.__getattribute__``/`PyObject_GenericGetAttr`_ (through
``_PyType_Lookup``) use this ``__locallookup__`` method when walking
the MRO.

Other changes to the implementation
---

The change for `PyObject_GenericGetAttr`_ will be done by changing the private 
function
``_PyType_Lookup``. Thi

Re: [Python-Dev] PEP 447: add type.__locallookup__

2013-07-29 Thread Ronald Oussoren

On 29 Jul, 2013, at 14:58, Antoine Pitrou  wrote:

> 
> Hi,
> 
> Le Mon, 29 Jul 2013 14:49:18 +0200,
> Ronald Oussoren  a écrit :
>> Hi,
>> 
>> This PEP proposed to add a __locallookup__ slot to type objects,
>> which is used by _PyType_Lookup and super_getattro instead of peeking
>> in the tp_dict of classes.  The PEP text explains why this is needed.
>> 
>> Differences with the previous version:
>> 
>> * Better explanation of why this is a useful addition
>> 
>> * type.__locallookup__ is no longer optional.
>> 
>> * I've added benchmarking results using pybench.
>>  (using the patch attached to issue 18181)
> 
> Could you please run the whole benchmark suite?
> http://hg.python.org/benchmarks/

Sure.

Ronald



Re: [Python-Dev] PEP 447: add type.__locallookup__

2013-07-29 Thread Ronald Oussoren

On 29 Jul, 2013, at 15:07, Ronald Oussoren  wrote:

> 
> On 29 Jul, 2013, at 14:58, Antoine Pitrou  wrote:
> 
>> 
>> Hi,
>> 
>> Le Mon, 29 Jul 2013 14:49:18 +0200,
>> Ronald Oussoren  a écrit :
>>> Hi,
>>> 
>>> This PEP proposed to add a __locallookup__ slot to type objects,
>>> which is used by _PyType_Lookup and super_getattro instead of peeking
>>> in the tp_dict of classes.  The PEP text explains why this is needed.
>>> 
>>> Differences with the previous version:
>>> 
>>> * Better explanation of why this is a useful addition
>>> 
>>> * type.__locallookup__ is no longer optional.
>>> 
>>> * I've added benchmarking results using pybench.
>>> (using the patch attached to issue 18181)
>> 
>> Could you please run the whole benchmark suite?
>> http://hg.python.org/benchmarks/
> 
> Sure.

That's harder than I had expected: when I use "make_perf3.sh" to create
a Python 3 compatible version of the benchmark suite and then run the suite,
it craps out because it cannot find "spitfire", which isn't translated
(nor are several other benchmarks).

I'll have to investigate why the suite doesn't work.

Ronald

> 
> Ronald
> 
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com



[Python-Dev] to rename or not...

2013-07-30 Thread Ronald Oussoren
Hi,

The relevant issue for this question is 

Plistlib contains object serialization logic that is in spirit similar to json 
and pickle, but has a different output format and different limitations. The 
functions in the module don't reflect that, though: they are "readPlist" and 
"writePlist" instead of "load" and "dump".

While working on the issue I noticed something uglier than that: plistlib in 
py3k inherits a design decision that was necessary in py2 and feels decidedly 
odd in py3k. It represents binary data as objects of type plistlib.Data instead 
of bytes. Those objects have an attribute with the actual data. The distinction 
was necessary in Python 2 to make it possible to keep binary data and strings 
apart, but is no longer necessary in Python 3.

Because of this I'd like to introduce a new API in plistlib that fixes both 
problems. In particular:

* Add 'load', 'loads', 'dump' and 'dumps'; those use "bytes" for binary data 
by default

* Keep and deprecate "readPlist", "writePlist" and their string 
equivalents; those still use Data objects (and call the new API to do the 
actual work).

I'd like some feedback on this change. On the one hand the new APIs make it 
possible to clean up the API of plistlib, on the other hand this is a big API 
change.

Ronald

P.S. The issue itself is about adding support for binary plist files, I got a 
bit carried away while testing and refactoring the original patch :-(
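The load/dump-style API sketched in this message later landed in Python 3.4; a minimal round-trip showing binary data surviving as plain bytes, with no Data wrapper:

```python
import plistlib

# dumps/loads round-trip: bytes values are serialized as <data> elements
# in the XML plist and come back as bytes, no plistlib.Data involved.
record = {"name": "example", "blob": b"\x00\x01\x02"}
raw = plistlib.dumps(record)      # XML plist, returned as bytes
assert plistlib.loads(raw) == record
```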


Re: [Python-Dev] PEP 447: add type.__locallookup__

2013-07-30 Thread Ronald Oussoren

On 29 Jul, 2013, at 14:49, Ronald Oussoren  wrote:

> Hi,
> 
> This PEP proposed to add a __locallookup__ slot to type objects,
> which is used by _PyType_Lookup and super_getattro instead of peeking
> in the tp_dict of classes.  The PEP text explains why this is needed.
> 
> Differences with the previous version:
> 
> * Better explanation of why this is a useful addition
> 
> * type.__locallookup__ is no longer optional.
> 
> * I've added benchmarking results using pybench.
>  (using the patch attached to issue 18181)
> 
> Ronald

And something I forgot to ask: is anyone willing to be the BDFL-Delegate for
PEP 447?

Ronald

