[Python-Dev] PEP 442 clarification for type hierarchies

2013-08-04 Thread Stefan Behnel
Hi,

I'm currently catching up on PEP 442, which managed to fly completely below
my radar so far. It's a really helpful change that could end up fixing a
major usability problem that Cython was suffering from: user provided
deallocation code now has a safe execution environment (well, at least in
Py3.4+). That makes Cython a prime candidate for testing this, and I've
just started to migrate the implementation.

One thing that I found to be missing from the PEP is inheritance handling.
The current implementation doesn't seem to care about base types at all, so
it appears to be the responsibility of the type to call its super type
finalisation function. Is that really intended? Couldn't the super type
call chain be made a part of the protocol?

Another bit is the exception handling. According to the documentation,
tp_finalize() is supposed to first save the current exception state, then
do the cleanup, then call WriteUnraisable() if necessary, then restore the
exception state.

http://docs.python.org/3.4/c-api/typeobj.html#PyTypeObject.tp_finalize
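
For reference, the documented pattern boils down to something like this (a
minimal sketch along the lines of the typeobj docs; the function name is
made up):

static void
local_finalize(PyObject *self)
{
    PyObject *error_type, *error_value, *error_traceback;

    /* save the current exception state, if any */
    PyErr_Fetch(&error_type, &error_value, &error_traceback);

    /* ... run the actual cleanup here; if it raises, report the error
       via PyErr_WriteUnraisable(self) rather than leaking it ... */

    /* restore the saved exception state */
    PyErr_Restore(error_type, error_value, error_traceback);
}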

Is there a reason why this is left to the user implementation, rather than
doing it generically right in PyObject_CallFinalizer() ? That would also
make it more efficient to call through the super type hierarchy, I guess. I
don't see a need to repeat this exception state swapping at each level.

So, essentially, I'm wondering whether PyObject_CallFinalizer() couldn't
just set up the execution environment and then call all finalisers of the
type hierarchy in bottom-up order.
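
Purely to illustrate what I mean, something along these lines (hypothetical
code, not what CPython currently does; flag checks are omitted for brevity):

static void
call_finalizers_bottom_up(PyObject *self)
{
    PyObject *error_type, *error_value, *error_traceback;
    PyTypeObject *tp;

    /* set up the execution environment once */
    PyErr_Fetch(&error_type, &error_value, &error_traceback);

    /* walk from the most derived type towards the base */
    for (tp = Py_TYPE(self); tp != NULL; tp = tp->tp_base) {
        /* skip slots merely inherited from the base type, so that
           each finaliser runs only once */
        if (tp->tp_finalize == NULL)
            continue;
        if (tp->tp_base != NULL && tp->tp_finalize == tp->tp_base->tp_finalize)
            continue;
        tp->tp_finalize(self);
        if (PyErr_Occurred())
            PyErr_WriteUnraisable(self);
    }

    PyErr_Restore(error_type, error_value, error_traceback);
}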

Stefan



Re: [Python-Dev] Allowing to run certain regression tests in subprocesses

2013-08-04 Thread R. David Murray
On Sat, 03 Aug 2013 19:04:21 -0700, Eli Bendersky  wrote:
> On Sat, Aug 3, 2013 at 6:57 PM, R. David Murray  wrote:
> > On Sat, 03 Aug 2013 16:47:37 -0700, Eli Bendersky  wrote:
> >> On Sat, Aug 3, 2013 at 4:36 PM, Eli Bendersky  wrote:
> >> > Hi All,
> >> >
> >> > Today the issue of cross-test global env dependencies showed its ugly
> >> > head again for me. I recall a previous discussion
> >> > (http://mail.python.org/pipermail/python-dev/2013-January/123409.html)
> >> > but there were many more over the years.
> >> >
> >> > The core problem is that some tests modify the global env
> >> > (particularly importing modules) and this sometimes has adverse
> >> > effects on other tests, because test.regrtest runs all tests in a
> >> > single process. In the discussion linked above, the particular culprit
> >> > test__all__ was judged as a candidate to be moved to a subprocess.
> >> >
> >> > I want to propose adding a capability to our test harness to run
> >> > specific tests in subprocesses. Each test will have some simple way of
> >> > asking to be run in a subprocess, and regrtest will concur (even when
> >> > running -j1). test__all__ can go there, and it can help solve other
> >> > problems.
> >> >
> >> > My particular case is trying to write a test for
> >> > http://bugs.python.org/issue14988 - wherein I have to simulate a
> >> > situation of non-existent pyexpat. It's not hard to write a test for
> >> > it, but when run in tandem with other tests (where C extensions loaded
> >> > pyexpat) it becomes seemingly impossible to set up. This should not be
> >> > the case - there's nothing wrong with wanting to simulate this case,
> >> > and there's nothing wrong in Python and the stdlib - it's purely an
> >> > artifact of the way our regression suite works.
> >> >
> >> > Thoughts?
> >> >
> >> > Eli
> >>
> >> FWIW the problem is also discussed here:
> >> http://bugs.python.org/issue1674555, w.r.t. test_site
> >
> > Can't you just launch a subprocess from the test itself using 
> > script_helpers?
> >
> 
> I can, but such launching will be necessarily duplicated across all
> tests that need this functionality (test_site, test___all__, etc).
> Since regrtest already has functionality for launching whole
> test-suites in subprocesses, it makes sense to reuse it, no?

In the case of test_site and test___all___ we are talking about running
the entire test file in a subprocess.  It sounds like you are only
talking about running one individual test function in a subprocess,
for which using script_helpers seems the more natural solution.

--David


Re: [Python-Dev] Allowing to run certain regression tests in subprocesses

2013-08-04 Thread Eli Bendersky
On Sun, Aug 4, 2013 at 5:44 AM, R. David Murray  wrote:
> On Sat, 03 Aug 2013 19:04:21 -0700, Eli Bendersky  wrote:
>> On Sat, Aug 3, 2013 at 6:57 PM, R. David Murray  
>> wrote:
>> > On Sat, 03 Aug 2013 16:47:37 -0700, Eli Bendersky  wrote:
>> >> On Sat, Aug 3, 2013 at 4:36 PM, Eli Bendersky  wrote:
>> >> > Hi All,
>> >> >
>> >> > Today the issue of cross-test global env dependencies showed its ugly
>> >> > head again for me. I recall a previous discussion
>> >> > (http://mail.python.org/pipermail/python-dev/2013-January/123409.html)
>> >> > but there were many more over the years.
>> >> >
>> >> > The core problem is that some tests modify the global env
>> >> > (particularly importing modules) and this sometimes has adverse
>> >> > effects on other tests, because test.regrtest runs all tests in a
>> >> > single process. In the discussion linked above, the particular culprit
>> >> > test__all__ was judged as a candidate to be moved to a subprocess.
>> >> >
>> >> > I want to propose adding a capability to our test harness to run
>> >> > specific tests in subprocesses. Each test will have some simple way of
>> >> > asking to be run in a subprocess, and regrtest will concur (even when
>> >> > running -j1). test__all__ can go there, and it can help solve other
>> >> > problems.
>> >> >
>> >> > My particular case is trying to write a test for
>> >> > http://bugs.python.org/issue14988 - wherein I have to simulate a
>> >> > situation of non-existent pyexpat. It's not hard to write a test for
>> >> > it, but when run in tandem with other tests (where C extensions loaded
>> >> > pyexpat) it becomes seemingly impossible to set up. This should not be
>> >> > the case - there's nothing wrong with wanting to simulate this case,
>> >> > and there's nothing wrong in Python and the stdlib - it's purely an
>> >> > artifact of the way our regression suite works.
>> >> >
>> >> > Thoughts?
>> >> >
>> >> > Eli
>> >>
>> >> FWIW the problem is also discussed here:
>> >> http://bugs.python.org/issue1674555, w.r.t. test_site
>> >
>> > Can't you just launch a subprocess from the test itself using 
>> > script_helpers?
>> >
>>
>> I can, but such launching will be necessarily duplicated across all
>> tests that need this functionality (test_site, test___all__, etc).
>> Since regrtest already has functionality for launching whole
>> test-suites in subprocesses, it makes sense to reuse it, no?
>
> In the case of test_site and test___all___ we are talking about running
> the entire test file in a subprocess.  It sounds like you are only
> talking about running one individual test function in a subprocess,
> for which using script_helpers seems the more natural solution.

I was actually planning to split this into a separate test file to
make the process separation more apparent. And regardless, the
question sent to the list is about the generic approach, not my
particular problem. Issues of folks struggling with inter-test
dependencies through global state modification come up very often.

Eli


Re: [Python-Dev] PEP 442 clarification for type hierarchies

2013-08-04 Thread Stefan Behnel
Stefan Behnel, 04.08.2013 09:23:
> I'm currently catching up on PEP 442, which managed to fly completely below
> my radar so far. It's a really helpful change that could end up fixing a
> major usability problem that Cython was suffering from: user provided
> deallocation code now has a safe execution environment (well, at least in
> Py3.4+). That makes Cython a prime candidate for testing this, and I've
> just started to migrate the implementation.
> 
> One thing that I found to be missing from the PEP is inheritance handling.
> The current implementation doesn't seem to care about base types at all, so
> it appears to be the responsibility of the type to call its super type
> finalisation function. Is that really intended? Couldn't the super type
> call chain be made a part of the protocol?
> 
> Another bit is the exception handling. According to the documentation,
> tp_finalize() is supposed to first save the current exception state, then
> do the cleanup, then call WriteUnraisable() if necessary, then restore the
> exception state.
> 
> http://docs.python.org/3.4/c-api/typeobj.html#PyTypeObject.tp_finalize
> 
> Is there a reason why this is left to the user implementation, rather than
> doing it generically right in PyObject_CallFinalizer() ? That would also
> make it more efficient to call through the super type hierarchy, I guess. I
> don't see a need to repeat this exception state swapping at each level.
> 
> So, essentially, I'm wondering whether PyObject_CallFinalizer() couldn't
> just set up the execution environment and then call all finalisers of the
> type hierarchy in bottom-up order.

I continued my implementation and found that calling up the base type
hierarchy is essentially the same code as calling up the hierarchy for
tp_dealloc(), so that was easy to adapt to in Cython and is also more
efficient than a generic loop (because it can usually benefit from
inlining). So I'm personally ok with leaving the super type calling code to
the user side, even though manual implementers may not be entirely happy.

I think it should get explicitly documented how subtypes should deal with a
tp_finalize() in (one of the) super types. It's not entirely trivial
because the tp_finalize slot is not guaranteed to be filled for a super
type IIUC, as opposed to tp_dealloc. I assume the usual inheritance invariant
would still hold, though, namely that PyType_Ready() copies the slot down from
the base type.
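
For concreteness, the kind of chaining I mean looks roughly like this (the
type names are made up; a Py_TPFLAGS_HAVE_FINALIZE check may additionally be
needed for foreign base types):

static void
MyType_finalize(PyObject *self)
{
    /* ... finalise this level first ... */

    /* then chain explicitly to the statically known base type,
       guarding against an empty tp_finalize slot */
    if (MyType_Type.tp_base != NULL &&
            MyType_Type.tp_base->tp_finalize != NULL)
        MyType_Type.tp_base->tp_finalize(self);
}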

For reference, my initial implementation in Cython is here:

https://github.com/cython/cython/commit/6fdb49bd84192089c7e742d46594b59ad6431b31

I'm currently running Cython's test suite against it to see whether anything
broke along the way. I'll report back as soon as I've got everything working.

Stefan




Re: [Python-Dev] Allowing to run certain regression tests in subprocesses

2013-08-04 Thread Eli Bendersky
On Sat, Aug 3, 2013 at 7:08 PM, Nick Coghlan  wrote:
>
> On 4 Aug 2013 12:03, "Eli Bendersky"  wrote:
>>
>> On Sat, Aug 3, 2013 at 6:59 PM, Nick Coghlan  wrote:
>> >
>> > On 4 Aug 2013 09:43, "Eli Bendersky"  wrote:
>> >>
>> >> Hi All,
>> >>
>> >> Today the issue of cross-test global env dependencies showed its ugly
>> >> head again for me. I recall a previous discussion
>> >> (http://mail.python.org/pipermail/python-dev/2013-January/123409.html)
>> >> but there were many more over the years.
>> >>
>> >> The core problem is that some tests modify the global env
>> >> (particularly importing modules) and this sometimes has adverse
>> >> effects on other tests, because test.regrtest runs all tests in a
>> >> single process. In the discussion linked above, the particular culprit
>> >> test__all__ was judged as a candidate to be moved to a subprocess.
>> >>
>> >> I want to propose adding a capability to our test harness to run
>> >> specific tests in subprocesses. Each test will have some simple way of
>> >> asking to be run in a subprocess, and regrtest will concur (even when
>> >> running -j1). test__all__ can go there, and it can help solve other
>> >> problems.
>> >>
>> >> My particular case is trying to write a test for
>> >> http://bugs.python.org/issue14988 - wherein I have to simulate a
>> >> situation of non-existent pyexpat. It's not hard to write a test for
>> >> it, but when run in tandem with other tests (where C extensions loaded
>> >> pyexpat) it becomes seemingly impossible to set up. This should not be
>> >> the case - there's nothing wrong with wanting to simulate this case,
>> >> and there's nothing wrong in Python and the stdlib - it's purely an
>> >> artifact of the way our regression suite works.
>> >
>> > I'm not actively opposed to the suggested idea, but is there a specific
>> > reason "test.support.import_fresh_module" doesn't work for this test?
>>
>> I'm not an expert on this topic, but I believe there's a problem
>> unloading code that was loaded by C extensions. import_fresh_module is
>> thus powerless here (which also appears to be the case empirically).
>
> Sure, it's just unusual to have a case where "importing is blocked by adding
> None to sys.modules" differs from "not actually available", so I'd like to
> understand the situation better.

I must admit I'm confused by the behavior of import_fresh_module too.

Snippet #1 raises the expected ImportError:

sys.modules['pyexpat'] = None
import _elementtree

However, snippet #2 succeeds importing:

ET = import_fresh_module('_elementtree', blocked=['pyexpat'])
print(ET)

I believe this happens because import_fresh_module does an import of
the 'name' it's given before even looking at the blocked list. Then,
it assigns None to sys.modules for the blocked names and re-imports
the module. So in essence, this is somewhat equivalent to snippet #3:

import sys
import importlib

modname = '_elementtree'
__import__(modname)
del sys.modules[modname]
# iterate over a copy, since entries are deleted while iterating
for m in list(sys.modules):
    if modname == m or m.startswith(modname + '.'):
        del sys.modules[m]
sys.modules['pyexpat'] = None
ET = importlib.import_module(modname)
print(ET)

Which also succeeds.

I fear I'm not familiar enough with the logic of importing to
understand what's going on, but it has been my impression that this
problem is occasionally encountered with import_fresh_module and C
code that imports stuff (the import of pyexpat is done by C code in
this case).

CC'ing Brett.

Eli


Re: [Python-Dev] [RELEASED] Python 3.4.0a1

2013-08-04 Thread Eli Bendersky
On Sat, Aug 3, 2013 at 11:48 PM, Larry Hastings  wrote:
> On 08/03/2013 11:22 PM, Larry Hastings wrote:
>
> * PEP 435, a standardized "enum" module
> * PEP 442, improved semantics for object finalization
> * PEP 443, adding single-dispatch generic functions to the standard library
> * PEP 445, a new C API for implementing custom memory allocators
>
>
> Whoops, looks like I missed a couple here.  I was in a hurry and just went
> off what I could find in Misc/NEWS.  I'll have a more complete list in the
> release schedule PEP in a minute, and in the announcements for alpha 2.
>
> If you want to make sure your PEP is mentioned next time, by all means email
> me and rattle my cage.
>

Larry, if there are other things you're going to add, update the web
page http://www.python.org/download/releases/3.4.0/ as well - it's the
one being linked in the inter-webs now.

Eli


Re: [Python-Dev] Allowing to run certain regression tests in subprocesses

2013-08-04 Thread Nick Coghlan
On 4 August 2013 23:40, Eli Bendersky  wrote:
> On Sat, Aug 3, 2013 at 7:08 PM, Nick Coghlan  wrote:
>> Sure, it's just unusual to have a case where "importing is blocked by adding
>> None to sys.modules" differs from "not actually available", so I'd like to
>> understand the situation better.
>
> I must admit I'm confused by the behavior of import_fresh_module too.
>
> Snippet #1 raises the expected ImportError:
>
> sys.modules['pyexpat'] = None
> import _elementtree
>
> However, snippet #2 succeeds importing:
>
> ET = import_fresh_module('_elementtree', blocked=['pyexpat'])
> print(ET)

/me goes and looks

That function was much simpler when it was first created :P

Still, I'm fairly confident the complexity of that dance isn't
relevant to the problem you're seeing.

> I believe this happens because import_fresh_module does an import of
> the 'name' it's given before even looking at the blocked list. Then,
> it assigns None to sys.modules for the blocked names and re-imports
> the module. So in essence, this is somewhat equivalent to snippet #3:
>
> import sys
> import importlib
>
> modname = '_elementtree'
> __import__(modname)
> del sys.modules[modname]
> # iterate over a copy, since entries are deleted while iterating
> for m in list(sys.modules):
>     if modname == m or m.startswith(modname + '.'):
>         del sys.modules[m]
> sys.modules['pyexpat'] = None
> ET = importlib.import_module(modname)
> print(ET)
>
> Which also succeeds.
>
> I fear I'm not familiar enough with the logic of importing to
> understand what's going on, but it has been my impression that this
> problem is occasionally encountered with import_fresh_module and C
> code that imports stuff (the import of pyexpat is done by C code in
> this case).

I had missed it was a C module doing the import. Looking into the
_elementtree.c source, the problem in this case is the fact that a
shared library that doesn't use PEP 3121-style per-module state is
only loaded and initialised once, so reimporting it gets the same
module back (from the extension loading cache), even if the Python-level
reference has been removed from sys.modules. Non-PEP 3121 C
extension modules thus don't work properly with
test.support.import_fresh_module (as there's an extra level of caching
involved that *can't* be cleared from Python, because it would break
things).

To fix this, _elementtree would need to move the pyexpat C API pointer
to per-module state, rather than using a static variable (see
http://docs.python.org/3/c-api/module.html#initializing-c-modules); a
rough sketch of that move follows the two points below. With per-module
state defined, the import machinery should rerun the init function when
the fresh import happens, thus creating a new copy of the module.
However, this isn't an entirely trivial change for _elementtree, since:

1. Getting from the XMLParser instance back to the module where it was
defined in order to retrieve the capsule pointer via
PyModule_GetState() isn't entirely trivial in C. You'd likely do it
once in the init method, store the result in an XMLParser attribute,
and then tweak the EXPAT()-using functions to include an appropriate
local variable definition at the start of the method implementation.

2. expat_set_error would need to be updated to accept the pyexpat
capsule pointer as a function parameter
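
Something like this is what I have in mind for the module-state part
(illustrative only, not the actual _elementtree.c code; it assumes pyexpat.h
is included for struct PyExpat_CAPI and PyExpat_CAPSULE_NAME):

typedef struct {
    struct PyExpat_CAPI *expat_capi;   /* formerly a static variable */
} elementtree_state;

#define ET_STATE(mod) ((elementtree_state *)PyModule_GetState(mod))

static struct PyModuleDef elementtreemodule = {
    PyModuleDef_HEAD_INIT,
    "_elementtree",
    NULL,                          /* m_doc */
    sizeof(elementtree_state),     /* m_size > 0 enables per-module state */
    NULL, NULL, NULL, NULL, NULL   /* methods, reload, traverse, clear, free */
};

PyMODINIT_FUNC
PyInit__elementtree(void)
{
    PyObject *m = PyModule_Create(&elementtreemodule);
    if (m == NULL)
        return NULL;
    /* import the pyexpat C API into this module's own state rather
       than into a process-wide static */
    ET_STATE(m)->expat_capi = PyCapsule_Import(PyExpat_CAPSULE_NAME, 0);
    if (ET_STATE(m)->expat_capi == NULL) {
        Py_DECREF(m);
        return NULL;
    }
    return m;
}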

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Python-Dev] PEP 442 clarification for type hierarchies

2013-08-04 Thread Stefan Behnel
Stefan Behnel, 04.08.2013 15:24:
> Stefan Behnel, 04.08.2013 09:23:
>> I'm currently catching up on PEP 442, which managed to fly completely below
>> my radar so far. It's a really helpful change that could end up fixing a
>> major usability problem that Cython was suffering from: user provided
>> deallocation code now has a safe execution environment (well, at least in
>> Py3.4+). That makes Cython a prime candidate for testing this, and I've
>> just started to migrate the implementation.
>>
>> One thing that I found to be missing from the PEP is inheritance handling.
>> The current implementation doesn't seem to care about base types at all, so
>> it appears to be the responsibility of the type to call its super type
>> finalisation function. Is that really intended? Couldn't the super type
>> call chain be made a part of the protocol?
>>
>> Another bit is the exception handling. According to the documentation,
>> tp_finalize() is supposed to first save the current exception state, then
>> do the cleanup, then call WriteUnraisable() if necessary, then restore the
>> exception state.
>>
>> http://docs.python.org/3.4/c-api/typeobj.html#PyTypeObject.tp_finalize
>>
>> Is there a reason why this is left to the user implementation, rather than
>> doing it generically right in PyObject_CallFinalizer() ? That would also
>> make it more efficient to call through the super type hierarchy, I guess. I
>> don't see a need to repeat this exception state swapping at each level.
>>
>> So, essentially, I'm wondering whether PyObject_CallFinalizer() couldn't
>> just set up the execution environment and then call all finalisers of the
>> type hierarchy in bottom-up order.
> 
> I continued my implementation and found that calling up the base type
> hierarchy is essentially the same code as calling up the hierarchy for
> tp_dealloc(), so that was easy to adapt to in Cython and is also more
> efficient than a generic loop (because it can usually benefit from
> inlining). So I'm personally ok with leaving the super type calling code to
> the user side, even though manual implementers may not be entirely happy.
> 
> I think it should get explicitly documented how subtypes should deal with a
> tp_finalize() in (one of the) super types. It's not entirely trivial
> because the tp_finalize slot is not guaranteed to be filled for a super
> type IIUC, as opposed to tp_dealloc. I assume the recursive invariant that
> PyType_Ready() copies it would still hold, though.

Hmm, it seems to me by now that the only safe way of handling this is to
let each tp_dealloc() level in the hierarchy call tp_finalize() through
PyObject_CallFinalizerFromDealloc(), instead of calling up the base type
chain from within tp_finalize(). Otherwise, it's a bit fragile for arbitrary
tp_dealloc() functions in base types and subtypes. However, that seems like a
rather cumbersome and inefficient design. It also somewhat counters the advantage
of having a finalisation step before deallocation, if the finalisers are
only called after (partially) cleaning up the subtypes.
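
Concretely, every tp_dealloc() level would then have to start out along these
lines (a sketch of the documented PyObject_CallFinalizerFromDealloc() usage;
the type name is made up):

static void
MyType_dealloc(PyObject *self)
{
    /* run tp_finalize (at most once per object); if the finaliser
       resurrects the object, deallocation must be aborted */
    if (PyObject_CallFinalizerFromDealloc(self) < 0)
        return;

    /* ... the usual teardown: untrack from the GC, clear weakrefs,
       DECREF the members, then chain to the base type's tp_dealloc ... */
}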

ISTM that this feature hasn't been fully thought out...

Stefan




Re: [Python-Dev] [RELEASED] Python 3.4.0a1

2013-08-04 Thread Larry Hastings

On 08/04/2013 07:01 AM, Eli Bendersky wrote:

Larry, if there are other things you're going to add, update the web
page http://www.python.org/download/releases/3.4.0/ as well - it's the
one being linked in the inter-webs now.


Good thinking!  I'll do that today.


//arry/


Re: [Python-Dev] Allowing to run certain regression tests in subprocesses

2013-08-04 Thread Eli Bendersky
On Sun, Aug 4, 2013 at 7:26 AM, Nick Coghlan  wrote:
> On 4 August 2013 23:40, Eli Bendersky  wrote:
>> On Sat, Aug 3, 2013 at 7:08 PM, Nick Coghlan  wrote:
>>> Sure, it's just unusual to have a case where "importing is blocked by adding
>>> None to sys.modules" differs from "not actually available", so I'd like to
>>> understand the situation better.
>>
>> I must admit I'm confused by the behavior of import_fresh_module too.
>>
>> Snippet #1 raises the expected ImportError:
>>
>> sys.modules['pyexpat'] = None
>> import _elementtree
>>
>> However, snippet #2 succeeds importing:
>>
>> ET = import_fresh_module('_elementtree', blocked=['pyexpat'])
>> print(ET)
>
> /me goes and looks
>
> That function was much simpler when it was first created :P
>
> Still, I'm fairly confident the complexity of that dance isn't
> relevant to the problem you're seeing.
>
>> I believe this happens because import_fresh_module does an import of
>> the 'name' it's given before even looking at the blocked list. Then,
>> it assigns None to sys.modules for the blocked names and re-imports
>> the module. So in essence, this is somewhat equivalent to snippet #3:
>>
>> import sys
>> import importlib
>>
>> modname = '_elementtree'
>> __import__(modname)
>> del sys.modules[modname]
>> # iterate over a copy, since entries are deleted while iterating
>> for m in list(sys.modules):
>>     if modname == m or m.startswith(modname + '.'):
>>         del sys.modules[m]
>> sys.modules['pyexpat'] = None
>> ET = importlib.import_module(modname)
>> print(ET)
>>
>> Which also succeeds.
>>
>> I fear I'm not familiar enough with the logic of importing to
>> understand what's going on, but it has been my impression that this
>> problem is occasionally encountered with import_fresh_module and C
>> code that imports stuff (the import of pyexpat is done by C code in
>> this case).
>
> I had missed it was a C module doing the import. Looking into the
> _elementtree.c source, the problem in this case is the fact that a
> shared library that doesn't use PEP 3121 style per-module state is
> only loaded and initialised once, so reimporting it gets the same
> module back (from the extension loading cache), even if the Python
> level reference has been removed from sys.modules. Non PEP 3121 C
> extension modules thus don't work properly with
> test.support.import_fresh_module (as there's an extra level of caching
> involved that *can't* be cleared from Python, because it would break
> things).
>
> To fix this, _elementree would need to move the pyexpat C API pointer
> to per-module state, rather than using a static variable (see
> http://docs.python.org/3/c-api/module.html#initializing-c-modules).
> With per-module state defined, the import machine should rerun the
> init function when the fresh import happens, thus creating a new copy
> of the module. However, this isn't an entirely trivial change for
> _elementree, since:
>
> 1. Getting from the XMLParser instance back to the module where it was
> defined in order to retrieve the capsule pointer via
> PyModule_GetState() isn't entirely trivial in C. You'd likely do it
> once in the init method, store the result in an XMLParser attribute,
> and then tweak the EXPAT() using functions to include an appropriate
> local variable definition at the start of the method implementation.
>
> 2. expat_set_error would need to be updated to accept the pyexpat
> capsule pointer as a function parameter
>

Thanks Nick; I suspected something of the sort was going on here, but
you've provided some interesting leads to look at. I'll probably open an
issue to track this at some point.

Eli