[Python-Dev] Does Zip Importer have to be Special?

2014-07-24 Thread Phil Thompson
I have an importer for use in applications that embed an interpreter 
that does a similar job to the Zip importer (except that the storage is 
a C data structure rather than a .zip file). Just like the Zip importer 
I need to import my importer and add it to sys.path_hooks. However, the 
earliest opportunity I have to do this is after the Py_Initialize() call 
returns - and that is too late, because parts of the standard library 
have already been imported by then.


My current workaround is to include a modified version of _bootstrap.py 
as a frozen module that has the necessary steps added to the end of its 
_install() function.


The Zip importer doesn't have this problem because it gets special 
treatment - the call to its equivalent code is hard-coded and happens 
exactly when needed.


What would help is a table of functions to be called at the point where 
_PyImportZip_Init() is currently called. By default the only entry in 
the table would be _PyImportZip_Init. There would be a way of modifying 
the table, either in the way PyImport_FrozenModules is handled or the way 
Inittab is handled.


...or if there is a better solution that I have missed that doesn't 
require a modified _bootstrap.py.
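For concreteness, the shape of the hook involved can be sketched in pure Python. The real importer is implemented in C against an embedded data structure; the _EMBEDDED_SOURCES dict and the "<embedded>" path entry below are purely illustrative stand-ins:

```python
import importlib.abc
import importlib.util
import sys

# Stand-in for the C data structure: module name -> source code.
_EMBEDDED_SOURCES = {"embedded_mod": "VALUE = 42\n"}

class EmbeddedLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        exec(_EMBEDDED_SOURCES[module.__name__], module.__dict__)

class EmbeddedFinder(importlib.abc.PathEntryFinder):
    def find_spec(self, fullname, target=None):
        if fullname in _EMBEDDED_SOURCES:
            return importlib.util.spec_from_loader(fullname, EmbeddedLoader())
        return None

def embedded_hook(path_entry):
    # Claim only the synthetic sys.path entry added below.
    if path_entry == "<embedded>":
        return EmbeddedFinder()
    raise ImportError

sys.path_hooks.insert(0, embedded_hook)
sys.path.insert(0, "<embedded>")

import embedded_mod
print(embedded_mod.VALUE)  # 42
```

The point, of course, is that when embedding, this registration cannot happen early enough for the modules imported during Py_Initialize() itself - zipimport avoids the problem only because it is wired in by hand.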


Thanks,
Phil
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Does Zip Importer have to be Special?

2014-07-24 Thread Phil Thompson

On 24/07/2014 6:48 pm, Brett Cannon wrote:
On Thu Jul 24 2014 at 1:07:12 PM, Phil Thompson wrote:


I have an importer for use in applications that embed an interpreter
that does a similar job to the Zip importer (except that the storage is
a C data structure rather than a .zip file). Just like the Zip importer
I need to import my importer and add it to sys.path_hooks. However the
earliest opportunity I have to do this is after the Py_Initialize() call
returns - but this is too late because some parts of the standard
library have already needed to be imported.

My current workaround is to include a modified version of _bootstrap.py
as a frozen module that has the necessary steps added to the end of its
_install() function.

The Zip importer doesn't have this problem because it gets special
treatment - the call to its equivalent code is hard-coded and happens
exactly when needed.

What would help is a table of functions that were called where
_PyImportZip_Init() is currently called. By default the only entry in
the table would be _PyImportZip_Init. There would be a way of modifying
the table, either like how PyImport_FrozenModules is handled or how
Inittab is handled.

...or if there is a better solution that I have missed that doesn't
require a modified _bootstrap.py.



Basically you want a way to specify arguments into
importlib._bootstrap._install() so that sys.path_hooks and sys.meta_path
were configurable instead of hard-coded (it could also be done just past
importlib being installed, but that's a minor detail). Either way there is
technically no reason not to allow for it, just lack of motivation since
this would only come up for people who embed the interpreter AND have a
custom importer which affects loading the stdlib as well (any reason you
can't freeze the stdlib as a solution?).


Not really. I'd lose the compression my importer implements.

(Are there any problems with freezing packages rather than simple 
modules?)



We could go the route of some static array that people could modify.
Another option would be to allow for the specification of a single
function which is called just prior to importing the rest of the stdlib.

The problem with all of this is you are essentially asking for a hook to
let you have code have access to the interpreter state before it is fully
initialized. Zipimport and the various bits of code that get loaded during
startup are special since they are coded to avoid touching anything that
isn't ready to be used. So if we expose something that allows access prior
to full initialization it would have to be documented as having no
guarantees of interpreter state, etc. so we are not held to some API that
makes future improvements difficult.

IOW allowing for easy patching of Python is probably the best option I can
think of. Would tweaking importlib._bootstrap._install() to accept
specified values for sys.meta_path and sys.path_hooks be enough so that you
can change the call site for those functions?


My importer runs under PathFinder so it needs sys.path as well (and 
doesn't need sys.meta_path).


Phil


Re: [Python-Dev] Does Zip Importer have to be Special?

2014-07-25 Thread Phil Thompson

On 24/07/2014 9:42 pm, Nick Coghlan wrote:

On 25 Jul 2014 03:51, "Brett Cannon"  wrote:

The problem with all of this is you are essentially asking for a hook to
let you have code have access to the interpreter state before it is fully
initialized. Zipimport and the various bits of code that get loaded during
startup are special since they are coded to avoid touching anything that
isn't ready to be used. So if we expose something that allows access prior
to full initialization it would have to be documented as having no
guarantees of interpreter state, etc. so we are not held to some API that
makes future improvements difficult.

Note that this is *exactly* the problem PEP 432 is designed to handle:
separating the configuration of the core interpreter from the configuration
of the operating system interfaces, so the latter can run relatively
normally (at least compared to today).


The implementation of PEP 432 would be great.


As you say, though, it's a niche problem compared to something like
packaging, which is why it got bumped down my personal priority list. I
haven't even got back to the first preparatory step I identified, which is
to separate out our main functions to a separate "Programs" directory so
it's easier to distinguish "embeds Python" sections of the code from the
more typical "is part of Python" and "extends Python" code.


Is there any way for somebody you don't trust :) to be able to help move 
it forward?


Phil


Re: [Python-Dev] Does Zip Importer have to be Special?

2014-07-25 Thread Phil Thompson

On 24/07/2014 7:26 pm, Brett Cannon wrote:
On Thu Jul 24 2014 at 2:12:20 PM, Phil Thompson wrote:


On 24/07/2014 6:48 pm, Brett Cannon wrote:
> IOW allowing for easy patching of Python is probably the best option I
> can think of. Would tweaking importlib._bootstrap._install() to accept
> specified values for sys.meta_path and sys.path_hooks be enough so that
> you can change the call site for those functions?

My importer runs under PathFinder so it needs sys.path as well (and
doesn't need sys.meta_path).


sys.path can be set via PYTHONPATH, etc. so that shouldn't be as much of an
issue.


I prefer to have Py_IgnoreEnvironmentFlag set.

Also I'm not clear at what point I would import my custom importer?

Phil


Re: [Python-Dev] Exposing the Android platform existence to Python modules

2014-08-02 Thread Phil Thompson

On 02/08/2014 4:34 am, Guido van Rossum wrote:

Or SL4A? (https://github.com/damonkohler/sl4a)


On Fri, Aug 1, 2014 at 8:06 PM, Steven D'Aprano wrote:



On Sat, Aug 02, 2014 at 05:53:45AM +0400, Akira Li wrote:

> Python uses os.name, sys.platform, and various functions from `platform`
> module to provide version info:
[...]
> If Android is posixy enough (would `posix` module work on Android?)
> then os.name could be left 'posix'.

Does anyone know what kivy does when running under Android?


I don't think either does anything.

As the OP said, porting Python to Android is mainly about dealing with a 
C stdlib that is limited in places. Therefore there might be the odd 
missing function or attribute in the Python stdlib - just the same as 
can happen with other platforms.


To me the issue is whether, for a particular value of sys.platform, the 
programmer can expect a particular Python stdlib API. If so then Android 
needs a different value for sys.platform.


On the other hand if the programmer should not expect to make such an 
assumption, and should instead allow for the absence of certain 
functions (but which ones?), then the existing value of 'linux' should 
be fine.


Another option I don't think I've seen suggested, given the recommended 
way of testing for Linux is to use sys.platform.startswith('linux'), is 
to use a value of 'linux-android'.
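As an aside, the startswith() convention means existing platform checks would keep working with such a value. A small illustration (is_linux_like is just a name for this sketch, and 'linux-android' is the hypothetical value suggested above):

```python
def is_linux_like(platform):
    # The recommended prefix test, which already copes with 'linux2'
    # vs 'linux' and would also accept a hypothetical 'linux-android'.
    return platform.startswith("linux")

print(is_linux_like("linux2"))         # True
print(is_linux_like("linux-android"))  # True
print(is_linux_like("darwin"))         # False
```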


Phil


Re: [Python-Dev] Exposing the Android platform existence to Python modules

2014-08-02 Thread Phil Thompson

On 02/08/2014 7:36 pm, Guido van Rossum wrote:
On Sat, Aug 2, 2014 at 12:53 AM, Phil Thompson wrote:

To me the issue is whether, for a particular value of sys.platform, the
programmer can expect a particular Python stdlib API. If so then Android
needs a different value for sys.platform.



sys.platform is for a broad indication of the OS kernel. It can be used to
distinguish Windows, Mac and Linux (and BSD, Solaris etc.). Since Android
is Linux it should have the same sys.platform as other Linux systems
('linux2'). If you want to know whether a specific syscall is there, check
for the presence of the method in the os module.


It's not just the os module - other modules contain code that would be 
affected, but there are plenty of other parts of the Python stdlib that 
aren't implemented on every platform. Using the approach you prefer, all 
that's needed is to update the documentation to say that certain things 
are not implemented on Android.
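The feature-detection approach being suggested can be sketched as follows (the function names are illustrative; os.fork and os.statvfs are simply examples of calls that are missing on some platforms):

```python
import os

def supports_fork():
    # Test for the API itself rather than inferring it from sys.platform;
    # with a limited libc the attribute is simply absent.
    return hasattr(os, "fork")

def total_disk_bytes(path):
    # statvfs is absent on Windows (and possibly on a limited libc).
    if not hasattr(os, "statvfs"):
        return None
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks
```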


Phil


Re: [Python-Dev] Exposing the Android platform existence to Python modules

2014-08-03 Thread Phil Thompson

On 03/08/2014 4:58 pm, Guido van Rossum wrote:
But *are* we going to support Android officially? What's the point? Do you
have a plan for getting Python apps to first-class status in the App Store
(um, Google Play)?


I do...

http://pyqt.sourceforge.net/Docs/pyqtdeploy/introduction.html

Phil


Re: [Python-Dev] Python 3.5.1 plans

2015-11-01 Thread Phil Thompson
On 1 Nov 2015, at 10:30 a.m., Chris Angelico  wrote:
> 
> PEP 478 [1] doesn't currently have any info on a planned 3.5.1 release
> (and actually, it has 3.5.0 Final listed as a future release). About
> when is it likely to happen? The one thing I'm hanging out for is an
> installer patch on Windows that detects XP and immediately aborts with
> a convenient error; if the number of emails to python-list is
> indicative, there are a lot of people out there getting confused.

That doesn't need a new version of Python, just a new installer.

Phil


Re: [Python-Dev] zipimport.c broken with implicit namespace packages

2016-01-03 Thread Phil Thompson
On 3 Jan 2016, at 3:41 am, Guido van Rossum  wrote:
> 
> On Sat, Jan 2, 2016 at 3:26 PM,  wrote:
> 
> --
> > "Brett" == Brett Cannon  writes:
> 
> > I opened
> > https://bugs.python.org/issue25711 to specifically try to
> > fix this issue once and for all and along the way modernize
> > zipimport by rewriting it from scratch to be more
> > maintainable
> 
>   Every time I read about implementing a custom loader:
> 
> https://docs.python.org/3/library/importlib.html
> 
>   I've wondered why python does not have some sort of virtual
> filesystem layer to deal with locating modules/packages/support
> files.   Virtual file systems seem like a good way to store data on a
> wide range of storage devices.
> 
> Yeah, but most devices already implement a *real* filesystem, so the only 
> time the VFS would come in handy would be for zipfiles, where we already have 
> a solution.

Just to point out that it would be nice to have an easier way to use something 
other than zipfiles. I have a need to exploit a different solution and have to 
patch the bootstrap code (because the zipfile support is handled as a special 
case). BTW the need is to create iOS and Android executables from frozen Python 
code.

Phil



Re: [Python-Dev] zipimport.c broken with implicit namespace packages

2016-01-04 Thread Phil Thompson
On 3 Jan 2016, at 5:33 pm, Brett Cannon  wrote:
> 
> 
> 
> On Sun, 3 Jan 2016 at 02:55 Phil Thompson  wrote:
> On 3 Jan 2016, at 3:41 am, Guido van Rossum  wrote:
> >
> > On Sat, Jan 2, 2016 at 3:26 PM,  wrote:
> >
> > --
> > >>>>> "Brett" == Brett Cannon  writes:
> >
> > > I opened
> > > https://bugs.python.org/issue25711 to specifically try to
> > > fix this issue once and for all and along the way modernize
> > > zipimport by rewriting it from scratch to be more
> > > maintainable
> >
> >   Every time I read about implementing a custom loader:
> >
> > https://docs.python.org/3/library/importlib.html
> >
> >   I've wondered why python does not have some sort of virtual
> > filesystem layer to deal with locating modules/packages/support
> > files.   Virtual file systems seem like a good way to store data on a
> > wide range of storage devices.
> >
> > Yeah, but most devices already implement a *real* filesystem, so the only 
> > time the VFS would come in handy would be for zipfiles, where we already 
> > have a solution.
> 
> Just to point out that it would be nice to have an easier way to use 
> something other than zipfiles. I have a need to exploit a different solution 
> and have to patch the bootstrap code (because the zipfile support is handled 
> as a special case). BTW the need is to create iOS and Android executables 
> from frozen Python code.
> 
> Not quite sure about how zip files are a special-case beyond just being put 
> in sys.meta_path automatically. You can get the same results with a .pth file 
> or a sitecustomize.py depending on how pervasive your need is. Otherwise feel 
> free to file an issue at bugs.python.org and we can talk over there about 
> what you specifically need and if it's reasonable to try and support. 

I've created http://bugs.python.org/issue26007 and hope it's clear enough what 
the issue is.

Thanks,
Phil


[Python-Dev] Experiences with Creating PEP 484 Stub Files

2016-02-09 Thread Phil Thompson
I've been adding support to the SIP wrapper generator for automatically 
generating PEP 484 compatible stub files so that future versions of PyQt can be 
shipped with them. By way of feedback I thought I'd share my experience, 
confusions and suggestions.

There are a number of things I'd like to express but cannot find a way to do 
so...

- objects that implement the buffer protocol
- type objects
- slice objects
- capsules
- sequences of fixed size (ie. specified in the same way as Tuple)
- distinguishing between instance and class attributes.

The documentation is incomplete - there is no mention of Set or Tuple for 
example.

I found the documentation confusing regarding Optional. Intuitively it seems to 
be the way to specify arguments with default values. However it is explained in 
terms of (for example) Union[str, None] and I (intuitively but incorrectly) 
read that as meaning "a str or None" as opposed to "a str or nothing".

bytes can be used as shorthand for bytes, bytearray and memoryview - but what 
about objects that really only support bytes? Shouldn't the shorthand be called 
something like AnyBytes?

Is there any recommended way to test the validity and completeness of stub 
files? What's the recommended way to parse them?
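For illustration, here is roughly how the expressible cases above come out in stub notation (hypothetical signatures; Any as a stand-in for "supports the buffer protocol" is a workaround, not a real spelling):

```python
from typing import Any, Tuple

# Buffer protocol: no precise spelling exists yet, so fall back to Any.
def write_data(data: Any) -> int: ...

# Type objects and slice objects: the builtins can be used directly.
def register(cls: type) -> None: ...
def cut(s: str, region: slice) -> str: ...

# A sequence of fixed size has no spelling of its own; a Tuple of the
# required arity is the nearest equivalent, e.g. a pair of coordinates:
def midpoint(p1: Tuple[float, float],
             p2: Tuple[float, float]) -> Tuple[float, float]: ...
```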

Phil


Re: [Python-Dev] Experiences with Creating PEP 484 Stub Files

2016-02-09 Thread Phil Thompson
On 9 Feb 2016, at 8:54 pm, Guido van Rossum  wrote:
> 
> [Just adding to Andrew's response]
> 
> On Tue, Feb 9, 2016 at 9:58 AM, Andrew Barnert via Python-Dev
>  wrote:
>> On Feb 9, 2016, at 03:44, Phil Thompson  wrote:
>>> 
>>> There are a number of things I'd like to express but cannot find a way to 
>>> do so...
>>> 
>>> - objects that implement the buffer protocol
>> 
>> That seems like it should be filed as a bug with the typing repo. Presumably 
>> this is just an empty type that registers bytes, bytearray, and memoryview, 
>> and third-party classes have to register with it manually?
> 
> Hm, there's no way to talk about these in regular Python code either,
> is there? I think that issue should be resolved first. Probably by
> adding something to collections.abc. And then we can add the
> corresponding name to typing.py. This will take time though (have to
> wait for 3.6) so I'd recommend 'Any' for now (and filing those bugs).

Ok.

>>> - type objects
> 
> You can use 'type' for this (i.e. the builtin). You can't specify any
> properties for types though; that's a feature request:
> https://github.com/python/typing/issues/107 -- but it may be a while
> before we address it (it's not entirely clear how it should work, and
> we have many other pressing issues still).

Yes, I can use type.

>>> - slice objects
> 
>> Can't you just use the concrete types type and slice for these two? I don't 
>> think you need generic or abstract "any metaclass, whether inheriting from 
>> type or not" or "any class that meets the slice protocol", do you?
> 
> Can't you use 'slice' (i.e. the builtin)? Mypy supports that.

Yes, I can use slice.

>>> - capsules
>> 
>> That one seems reasonable. But maybe there should just be a types.Capsule 
>> Type or types.PyCapsule, and then you can just check that the same as any 
>> other concrete type?
>> 
>> But how often do you need to verify that something is a capsule, without 
>> knowing that it's the *right* capsule? At runtime, you'd usually use 
>> PyCapsule_IsValid, not PyCapsule_CheckExact, right? So should the type 
>> checker be tracking the name too?
>> 
>>> - sequences of fixed size (ie. specified in the same way as Tuple)
> 
> That's kind of a poor data structure. :-( Why can't you use Tuple here?

Because allowing any sequence is more flexible than only allowing a tuple.

>> How would you disambiguate between a sequence of one int and a sequence of 0 
>> or more ints if they're both spelled "Sequence[int]"? That isn't a problem 
>> for Tuple, because it's assumed to be heterogeneous, so Tuple[int] can only 
>> be a 1-tuple. (This was actually discussed in some depth. I thought it would 
>> be a problem, because some types--including tuple itself--are sometimes used 
>> as homogenous arbitrary-length containers and sometimes as heterogeneous 
>> fixed-length containers, but Guido and others had some good answers for 
>> that, even if I can't remember what they were.)
> 
> We solved that by allowing Tuple[int, ...] to spell a homogeneous
> tuple of integers.
> 
>>> - distinguishing between instance and class attributes.
>> 
>> Where? Are you building a protocol that checks the data members of a type 
>> for conformance or something? If so, why is an object that has "spam" and 
>> "eggs" as instance attributes but "cheese" as a class attribute not usable 
>> as an object conforming to the protocol with all three attributes? (Also, 
>> does @property count as a class or instance attribute? What about an 
>> arbitrary data descriptor? Or a non-data descriptor?)
> 
> It's a known mypy bug. :-( It's somewhat convoluted to fix.
> https://github.com/JukkaL/mypy/issues/1097
> 
> Some things Andrew snipped:
> 
>> The documentation is incomplete - there is no mention of Set or Tuple for 
>> example.
> 
> Tuple is here: https://docs.python.org/3/library/typing.html#typing.Tuple

Yes, I missed that.

> collections.Set maps to typing.AbstractSet
> (https://docs.python.org/3/library/typing.html#typing.AbstractSet;
> present twice in the docs somehow :-( ). typing.Set (corresponding to
> builtins.set) is indeed missing, I've a note of that:
> http://bugs.python.org/issue26322.
> 
>> I found the documentation confusing regarding Optional. Intuitively it seems 
>> to be the way to specify arguments with default values. However it is 
>> explained in terms

Re: [Python-Dev] Experiences with Creating PEP 484 Stub Files

2016-02-10 Thread Phil Thompson

> On 9 Feb 2016, at 11:48 pm, Guido van Rossum  wrote:
> 
> [Phil]
 I found the documentation confusing regarding Optional. Intuitively it 
 seems to be the way to specify arguments with default values. However it 
 is explained in terms of (for example) Union[str, None] and I (intuitively 
 but incorrectly) read that as meaning "a str or None" as opposed to "a str 
 or nothing".
> [me]
>>> But it *does* mean 'str or None'. The *type* of an argument doesn't
>>> have any bearing on whether it may be omitted from the argument list
>>> by the caller -- these are orthogonal concepts (though sadly the word
>>> optional might apply to both). It's possible (though unusual) to have
>>> an optional argument that must be a str when given; it's also possible
>>> to have a mandatory argument that may be a str or None.
> [Phil]
>> In the case of Python wrappers around a C++ library then *every* optional 
>> argument will have to have a specific type when given.
> 
> IIUC you're saying that every argument that may be omitted must still
> have a definite type other than None. Right? In that case just don't
> use Optional[]. If a signature has the form
> 
> def foo(a: str = 'xyz') -> str: ...
> 
> then this means that str may be omitted or it may be a str -- you
> cannot call foo(a=None).
> 
> You can even (in a stub file) write this as:
> 
> def foo(a: str = ...) -> str: ...
> 
> (literal '...' i.e. ellipsis) if you don't want to commit to a
> specific default value (it makes no difference to mypy).
> 
>> So you are saying that a mandatory argument that may be a str or None would 
>> be specified as Union[str, None]?
> 
> Or as Optional[str], which means the same.
> 
>> But the docs say that that is the underlying implementation of Option[str] - 
>> which (to me) means an optional argument that should be a string when given.
> 
> (Assuming you meant Option*al*.) There seems to be an utter confusion
> of the two uses of the term "optional" here. An "optional argument"
> (outside PEP 484) is one that has a default value. The "Optional[T]"
> notation in PEP 484 means "Union[T, None]". They mean different
> things.
> 
>>> Can you help improve the wording in the docs (preferably by filing an 
>>> issue)?
>> 
>> When I eventually understand what it means...

I understand now. The documentation, as it stands, is correct and consistent 
but (to me) the meaning of Optional is completely counter-intuitive. What you 
suggest with str = ... is exactly what I need. Adding a section to the docs 
describing that should clear up the confusion.
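The distinction can be summarised in code (the function names are illustrative):

```python
from typing import Optional

# An *optional argument* with a non-Optional type: the caller may omit
# it, but if it is passed it must be a str - None is not allowed.
def greet(name: str = "world") -> str:
    return "hello " + name

# A *mandatory argument* with an Optional type: it must be passed, but
# it may legitimately be None.
def label(text: Optional[str]) -> str:
    return text if text is not None else "<none>"

print(greet())      # hello world
print(label(None))  # <none>
```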

Thanks,
Phil


Re: [Python-Dev] Experiences with Creating PEP 484 Stub Files

2016-02-10 Thread Phil Thompson
On 10 Feb 2016, at 5:52 pm, Guido van Rossum  wrote:
> 
> On Wed, Feb 10, 2016 at 1:11 AM, Phil Thompson
>  wrote:
>> I understand now. The documentation, as it stands, is correct and consistent 
>> but (to me) the meaning of Optional is completely counter-intuitive. What 
>> you suggest with str = ... is exactly what I need. Adding a section to the 
>> docs describing that should clear up the confusion.
> 
> I tried to add some clarity to the docs with this paragraph:
> 
>   Note that this is not the same concept as an optional argument,
>   which is one that has a default.  An optional argument with a
>   default needn't use the ``Optional`` qualifier on its type
>   annotation (although it is inferred if the default is ``None``).
>   A mandatory argument may still have an ``Optional`` type if an
>   explicit value of ``None`` is allowed.
> 
> Should be live on docs.python.org with the next push (I don't recall
> the delay, at most a day IIRC).

That should do it, thanks. A followup question...

Is...

def foo(bar: str = Optional[str])

...valid? In other words, bar can be omitted, but if specified must be a str or 
None?

Thanks,
Phil


[Python-Dev] _PyUnicode_CheckConsistency() too strict?

2014-02-03 Thread Phil Thompson
_PyUnicode_CheckConsistency() checks that the contents of the string 
match the _KIND of the string. However it does this in a very strict 
manner, ie. it requires that the contents *exactly* match the _KIND rather 
than just detecting an inconsistency between the contents and the _KIND.


For example, a string created with a maxchar of 255 (ie. a Latin-1 
string) must contain at least one character in the range 128-255 
otherwise you get an assertion failure.


As it stands, when converting Latin-1 strings in my C extension module 
I must first check each character and specify a maxchar of 127 if the 
string happens to only contain ASCII characters.


What is the reasoning behind the checks being so strict?
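For reference, the two representations are visible from pure Python: CPython stores an all-ASCII string in a smaller object than a same-length Latin-1 one, which is presumably part of why the exact kind matters (the sizes themselves are CPython implementation details):

```python
import sys

ascii_s = "a" * 10      # maxchar <= 127: compact ASCII representation
latin1_s = "\xe9" * 10  # maxchar in 128-255: 1-byte (Latin-1) kind

# The ASCII object header is smaller, so the Latin-1 string of the same
# length occupies more memory.
print(sys.getsizeof(ascii_s) < sys.getsizeof(latin1_s))  # True
```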

Phil


Re: [Python-Dev] _PyUnicode_CheckConsistency() too strict?

2014-02-03 Thread Phil Thompson

On 03-02-2014 3:35 pm, Victor Stinner wrote:

2014-02-03 Phil Thompson :
For example, a string created with a maxchar of 255 (ie. a Latin-1 string)
must contain at least one character in the range 128-255 otherwise you get
an assertion failure.


Yes, it's the specification of the PEP 393.

As it stands, when converting Latin-1 strings in my C extension module I
must first check each character and specify a maxchar of 127 if the string
happens to only contain ASCII characters.


Use PyUnicode_FromKindAndData(PyUnicode_1BYTE_KIND, latin1_str,
length) which computes the kind for you.


What is the reasoning behind the checks being so strict?


Different Python functions rely on the exact kind to compare strings.
For example, if you search a latin1 substring in an ASCII string, the
search returns immediately instead of searching in the string. A latin1
string cannot be found in an ASCII string.

The main reason is the PEP 393 itself: a string must be compact so as
not to waste memory.

Victor


Are you saying that code will fail if a particular Latin-1 string just 
happens not to contain any character greater than 127?


I would be very surprised if that was the case. If it isn't the case 
then I think that particular check shouldn't be made.


Phil


Re: [Python-Dev] _PyUnicode_CheckConsistency() too strict?

2014-02-03 Thread Phil Thompson

On 03-02-2014 4:04 pm, Victor Stinner wrote:

2014-02-03 Phil Thompson :
Are you saying that code will fail if a particular Latin-1 string just
happens not to contain any character greater than 127?


PyUnicode_FromKindAndData(PyUnicode_1BYTE_KIND, latin1_str, length)
accepts latin1 and ASCII strings. It computes the maximum code point
and then uses an ASCII or latin1 unicode string.

Victor


That doesn't answer my original question; it just works around the 
use case I presented.


To restate...

Why is a Latin-1 string considered inconsistent just because it doesn't 
happen to contain any characters in the range 128-255?


Phil


Re: [Python-Dev] _PyUnicode_CheckConsistency() too strict?

2014-02-03 Thread Phil Thompson

On 03-02-2014 4:38 pm, Paul Moore wrote:
On 3 February 2014 16:10, Phil Thompson wrote:
That doesn't answer my original question, that just works around the use
case I presented.

To restate...

Why is a Latin-1 string considered inconsistent just because it doesn't
happen to contain any characters in the range 128-255?


Butting in here (sorry) but I thought what Victor was trying to say is
that being able to say that a string marked as Latin1 "kind"
definitely has characters >127 allows the code to optimise some tests
(for example, two strings cannot be equal if their kinds differ).


So there *is* code that will fail if a particular Latin-1 string just 
happens not to contain any character greater than 127?



Obviously, requiring this kind of constraint makes it somewhat harder
for user code to construct string objects that conform to the spec.
That's why the PyUnicode_FromKindAndData function has the convenience
feature of doing the check and setting the kind correctly for you -
you should use that rather than trying to get the details right
yourself.

Paul.


I see now...

The docs for PyUnicode_FromKindAndData() say...

"Create a new Unicode object *with* the given kind"

...and so I didn't think is was useful to me. If they said...

"Create a new Unicode object *from* the given kind"

...then I might have got it.

Thanks - I'm happy now.

Phil


Re: [Python-Dev] _PyUnicode_CheckConsistency() too strict?

2014-02-03 Thread Phil Thompson

On 03-02-2014 5:52 pm, Guido van Rossum wrote:

Can we provide a convenience API (or even a few lines of code one
could copy+paste) that determines if a particular 8-bit string
should have max-char equal to 127 or 255? I can easily imagine a
number of use cases where this would come in handy (e.g. a list of
strings produced by translation, or strings returned in Latin-1 by
some other non-Python C-level API) -- and let's not get into a debate
about whether UTF-8 wouldn't be better, I can also easily imagine
legacy APIs where that isn't (yet) an option.


For my particular use case PyUnicode_FromKindAndData() (once I'd 
interpreted the docs correctly) should have made such code unnecessary. 
However I've just discovered that it doesn't support surrogates in UCS2 
so I'm going to have to roll my own anyway.
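The "few lines of code one could copy+paste" Guido asks for amount to a scan of the raw bytes. A hedged pure-Python sketch (the function name is invented; PyUnicode_FromKindAndData() performs an equivalent scan internally):

```python
def maxchar_for_latin1(data: bytes) -> int:
    """Decide which max-char an 8-bit (Latin-1) buffer should carry:
    any byte >= 0x80 forces max-char 255, otherwise 127 (pure ASCII)."""
    return 255 if any(b >= 0x80 for b in data) else 127
```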


Phil


Re: [Python-Dev] Python 2.7 patch levels turning two digit

2014-06-21 Thread Phil Thompson

On 21/06/2014 10:37 pm, M.-A. Lemburg wrote:

That said, and I also included this in my answers to the questions
that Nick removed in his reply, I don't think that a lot of
code would be affected by this. I do believe that we can use
this potential breakage as a chance for improvement. See the last
question (listed here again)...

1. Is it a good strategy to ship Python releases for every
   single OpenSSL security release or is there a better way to
   handle these 3rd party issues ?


Isn't this only a packaging issue? There is no change to the Python API 
or implementation, so there is no need to change the version number. So 
just make new Windows packages.


The precedent is to add a dash and a package number. I can't remember 
what version this was applied to before - but I got a +1 from Guido for 
suggesting it :)


Phil


[Python-Dev] v3.8b1 Breaks PyQt on Windows (Issue 36085/os.add_dll_directory())

2019-06-22 Thread Phil Thompson
The implementation of issue 36085 breaks PyQt on Windows as it relies on 
PATH to find the Qt DLLs. The problem is that PyQt is built using the 
stable ABI and a single wheel is supposed to support all versions of 
Python starting with v3.5. On the assumption (perhaps naive) that using 
the stable ABI would avoid future compatibility issues, the existing 
PyPI wheels have long been tagged with cp38.


Was this issue considered at the time? What is the official view?
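One way an extension or its users can cope with the change is sketched below. This is an assumption about a possible workaround, not PyQt's actual fix; the helper name and the PATH fallback are made up:

```python
import os

def add_dll_search_path(path):
    """Make *path* searchable for a C extension's DLL dependencies.

    From Python v3.8 on Windows, PATH is no longer consulted when
    resolving extension-module DLL dependencies, so
    os.add_dll_directory() must be used instead.  On older Pythons
    (or on non-Windows platforms, where the function does not exist)
    fall back to prepending the directory to PATH.
    """
    if hasattr(os, "add_dll_directory"):   # Windows, Python >= 3.8 only
        return os.add_dll_directory(path)
    os.environ["PATH"] = path + os.pathsep + os.environ.get("PATH", "")
    return None

# e.g. add_dll_search_path(r"C:\Qt\5.13.0\bin") before the first import
```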

Thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YFNKFRJGNM25VUGDJ5PVCQM4WPLZU6J7/


[Python-Dev] Re: v3.8b1 Breaks PyQt on Windows (Issue 36085/os.add_dll_directory())

2019-06-23 Thread Phil Thompson

Carol,

I'm "happy" with Steve's position. Fundamentally I am at fault for 
assuming that a combination of the stable ABI and Python's deprecation 
policy meant that I could assume that a wheel for Python v3.x would 
continue to work for v3.x+1. For this particular change I don't see how 
a normal deprecation warning could have been implemented. The only 
alternative would have been to delay the implementation for v3.9 and 
have loud warnings in the v3.8 docs about the upcoming change.


Phil

On 23/06/2019 00:06, Carol Willing wrote:

Hi Phil,

Thanks for trying the beta. Please file this as an issue at
bugs.python.org. Doing so would be helpful for folks who can look into
the issue.

Thanks,

Carol

On 6/22/19 2:04 PM, Phil Thompson wrote:
The implementation of issue 36085 breaks PyQt on Windows as it relies 
on PATH to find the Qt DLLs. The problem is that PyQt is built using 
the stable ABI and a single wheel is supposed to support all versions 
of Python starting with v3.5. On the assumption (perhaps naive) that 
using the stable ABI would avoid future compatibility issues, the 
existing PyPI wheels have long been tagged with cp38.


Was this issue considered at the time? What is the official view?

Thanks,
Phil



[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-11 Thread Phil Thompson

On 11/04/2020 13:08, Ivan Pozdeev via Python-Dev wrote:

On 10.04.2020 20:20, Victor Stinner wrote:



Stable ABI
--

The idea is to build a C extension only once: the built binary will be
usable on multiple Python runtimes and different versions of the same
runtime (stable ABI).

The idea is not new but is an extension of the `PEP 384: Defining a
Stable ABI `__ implemented in
CPython 3.4 with its "limited C API". The limited API is not used by
default and is not widely used: PyQt is one of the few known users.


The idea here is that the default C API becomes the limited C API and so
all C extensions will benefit from the advantages of a stable ABI.


In my practice with helping maintain a C extension module, it's not a
problem to build the module separately for every minor release.

That's because there are only a few officially supported releases, and
they aren't released frequently.

Conversely, if you are using a "limited ABI", you are "limited" (pun
intended) to what it has and can't take advantage of any new features
until the next major Python version -- i.e. for potentially several
years!

So I don't see any "advantages of a stable ABI" atm that matter in
practice while I do see _dis_advantages. So this area can perhaps be
excluded from the PEP or at least given low priority.
Unless, of course, you have some other, more real upcoming "advantages" 
in mind.


PyQt uses the stable ABI because it dramatically reduces the number of 
wheels that need to be created for a full release.


PyQt consists of 6 different PyPI packages. Wheels are provided for 4 
different platforms. Currently Python v3.5 to v3.8 are supported.


With the stable ABI that's 24 wheels for a full release. No additional 
wheels are needed when Python v3.9 is supported.


Without the stable ABI it would be 96 wheels. 24 additional wheels would 
be needed when Python v3.9 is supported.
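The arithmetic behind those counts, made explicit (figures taken from the message above):

```python
# Wheel counts for a full PyQt release: 6 PyPI packages, 4 wheel
# platforms, 4 supported Python versions (v3.5 to v3.8).
packages, platforms, py_versions = 6, 4, 4

stable_abi_wheels = packages * platforms                 # one abi3 wheel per platform
per_version_wheels = packages * platforms * py_versions  # one wheel per Python version

print(stable_abi_wheels)    # 24
print(per_version_wheels)   # 96
```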


Phil


[Python-Dev] Binary Compatibility Issue with Python v2.6.5 and v3.1.2

2010-04-20 Thread Phil Thompson
When I build my C++ extension on Windows (specifically PyQt with MinGW)
against Python v2.6.5 it fails to run under v2.6.4. The same problem exists
when building against v3.1.2 and running under v3.1.1.

The error message is...

ImportError: DLL load failed: The specified procedure could not be found.

...though I don't know what the procedure is.

When built against v2.6.4 it runs fine under all v2.6.x. When built under
v3.1.1 it runs fine under all v3.1.x.

I had always assumed that an extension built with vX.Y.Z would always run
under vX.Y.Z-1.

Am I wrong in that assumption, or is this a bug in the latest versions?

Thanks,
Phil


Re: [Python-Dev] Binary Compatibility Issue with Python v2.6.5 and v3.1.2

2010-04-20 Thread Phil Thompson
On Tue, 20 Apr 2010 21:50:51 +0900, David Cournapeau 
wrote:
> On Tue, Apr 20, 2010 at 9:19 PM, Phil Thompson
>  wrote:
>> When I build my C++ extension on Windows (specifically PyQt with MinGW)
>> against Python v2.6.5 it fails to run under v2.6.4. The same problem
>> exists
>> when building against v3.1.2 and running under v3.1.1.
>>
>> The error message is...
>>
>> ImportError: DLL load failed: The specified procedure could not be
found.
>>
>> ...though I don't know what the procedure is.
>>
>> When built against v2.6.4 it runs fine under all v2.6.x. When built
under
>> v3.1.1 it runs fine under all v3.1.x.
>>
>> I had always assumed that an extension built with vX.Y.Z would always
run
>> under vX.Y.Z-1.
> 
> I don't know how well it is handled in python, but this is extremely
> hard to do in general - you are asking about forward compatibility,
> not backward compatibility.

Yes, I know.

> Is there a reason why you need to do this ? The usual practice is to
> build against the *oldest* compatible version you can, so that it
> remains compatible with everything afterwards,

I'm happy to do that if that is the recommendation, although this is the
first time I've noticed this sort of problem (since v1.5).

Phil


Re: [Python-Dev] Binary Compatibility Issue with Python v2.6.5 and v3.1.2

2010-04-20 Thread Phil Thompson
On Tue, 20 Apr 2010 22:24:44 +0200, "Martin v. Löwis" 
wrote:
> Phil Thompson wrote:
>> When I build my C++ extension on Windows (specifically PyQt with MinGW)
>> against Python v2.6.5 it fails to run under v2.6.4. The same problem
>> exists
>> when building against v3.1.2 and running under v3.1.1.
>> 
>> The error message is...
>> 
>> ImportError: DLL load failed: The specified procedure could not be
found.
>> 
>> ...though I don't know what the procedure is.
>> 
>> When built against v2.6.4 it runs fine under all v2.6.x. When built
under
>> v3.1.1 it runs fine under all v3.1.x.
>> 
>> I had always assumed that an extension built with vX.Y.Z would always
run
>> under vX.Y.Z-1.
>> 
>> Am I wrong in that assumption, or is this a bug in the latest versions?
> 
> You are not wrong in that assumption, but it still might not be a bug in
> the latest version. It could also be a bug in MingW or PyQt.
> 
> Before we can judge on that, we need to understand what exactly happened.
> 
> As a starting point for further research, try the sxstrace utility of
> your Vista installation.

What Vista installation? XP I'm afraid...

Phil


Re: [Python-Dev] Encouraging developers

2007-03-05 Thread Phil Thompson
On Monday 05 March 2007 6:46 pm, A.M. Kuchling wrote:
> >From :
>
>   4. The patch mafia. I like everyone on python-dev that I meet,
>   but somehow it is annoyingly difficult to get a patch into
>   Python. Like threading, and the stdlib, this is a mixed
>   blessing: you certainly don't want every Joe Schmoe checking
>   in whatever crud he wants. However, the barrier is high enough
>   that I no longer have much interest in spending the time to
>   shepherd a patch through. Yes, this is probably all my fault
>   -- but I still hate it!
>
> FWIW, I have a related perception that we aren't getting new core
> developers. These two problems are probably related: people don't get
> patches processed and don't become core developers, and we don't have
> enough core developers to process patches in a timely way.  And so
> we're stuck.
>
> Any ideas for fixing this problem?

1. Don't suggest to people that, in order to get their patch reviewed, they 
should review other patches. The level of knowledge required to put together 
a patch is much less than that required to know if a patch is the right one.

2. Publically identify the core developers and their areas of expertise and 
responsibility (ie. which parts of the source tree they "own").

3. Provide a forum (a python-patch mailing list) where patches can be 
proposed, reviewed and revised informally but quickly.

4. Acceptance by core developers that only half the "job" is developing the 
core - the other half is mentoring potential future core developers.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-05 Thread Phil Thompson
On Monday 05 March 2007 8:09 pm, Oleg Broytmann wrote:
> On Mon, Mar 05, 2007 at 07:30:20PM +0000, Phil Thompson wrote:
> > 1. Don't suggest to people that, in order to get their patch reviewed,
> > they should review other patches. The level of knowledge required to put
> > together a patch is much less than that required to know if a patch is
> > the right one.
>
>I am afraid this could lead to proliferation of low-quality patches. A
> patch must touch at least code, documentation and tests, be tested itself
> and must not break other tests. These requirements demand higher expertise.

I'm not sure what your point is. My point is that, if you want to encourage 
people to become core developers, they have to have a method of graduating 
through the ranks - learning (and being taught) as they go. To place a very 
high obstacle in their way right at the start is completely 
counter-productive.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-05 Thread Phil Thompson
On Monday 05 March 2007 9:38 pm, Thomas Wouters wrote:
> On 3/5/07, A.M. Kuchling <[EMAIL PROTECTED]> wrote:
> > >From  >
> > 4. The patch mafia. I like everyone on python-dev that I meet,
> > but somehow it is annoyingly difficult to get a patch into
> > Python. Like threading, and the stdlib, this is a mixed
> > blessing: you certainly don't want every Joe Schmoe checking
> > in whatever crud he wants. However, the barrier is high enough
> > that I no longer have much interest in spending the time to
> > shepherd a patch through. Yes, this is probably all my fault
> > -- but I still hate it!
> >
> > FWIW, I have a related perception that we aren't getting new core
> > developers. These two problems are probably related: people don't get
> > patches processed and don't become core developers, and we don't have
> > enough core developers to process patches in a timely way.  And so
> > we're stuck.
> >
> > Any ideas for fixing this problem?
>
> A better patch-tracker, better procedures for reviewing patches surrounding
> this new tracker, one or more proper dvcs's for people to work off of. I'm
> not sure about 'identifying core developers' as we're all volunteers, with
> dayjobs for the most part, and only a few people seem to care enough about
> python as a whole.

I don't think that that is true. I think a lot of people care, but many can't 
do anything about it because the barrier to entry is too great.

> Putting the burden of patch review on the developers 
> that say they can cover it might easily burn them out. (I see Martin handle
> a lot of patches, for instance, and I would love to help him, but I just
> can't find the time to review the patches on subjects I know much about,
> let alone the rest of the patches.)
>
> While submitting patches is good, there's a lot more to it than just
> submitting the 5-line code change to fix a bug/feature, and reviewing
> takes a lot of time and effort.

So there is something wrong there as well.

> I don't think it's unreasonable to ask for 
> help from the submitters like we do, or ask them to write tests and docs
> and such.

Of course it's not unreasonable. I would expect to be told that a patch must 
have tests and docs before it will be finally accepted. However, before I add 
those things to the patch I would like some timely feedback from those with 
more experience that my patch is going in the right direction. I want 
somebody to give it a quick look, not a full blown review. The process needs 
to keep people involved in it - at the moment submitting a patch is 
fire-and-forget.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread Phil Thompson
On Tuesday 06 March 2007 5:42 am, Martin v. Löwis wrote:
> Phil Thompson schrieb:
> > 1. Don't suggest to people that, in order to get their patch reviewed,
> > they should review other patches. The level of knowledge required to put
> > together a patch is much less than that required to know if a patch is
> > the right one.
>
> People don't *have* to review patches. They just can do that if they
> want expedite review of their code.
>
> > 2. Publically identify the core developers and their areas of expertise
> > and responsibility (ie. which parts of the source tree they "own").
>
> I doubt this will help. Much of the code isn't owned by anybody
> specifically. Those parts that are owned typically find their patches
> reviewed and committed quickly (e.g. the tar file module, maintained by
> Lars Gustäbel).

Doesn't your last sentence completely contradict your first sentence?

> > 4. Acceptance by core developers that only half the "job" is developing
> > the core - the other half is mentoring potential future core developers.
>
> So what do you do with core developers that don't do their job? Fire them?

Of course not, but this is a cultural issue not a technical one. The first 
step in changing a culture is to change the expectations.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread Phil Thompson
On Tuesday 06 March 2007 5:49 am, Martin v. Löwis wrote:
> Phil Thompson schrieb:
> > I'm not sure what your point is. My point is that, if you want to
> > encourage people to become core developers, they have to have a method of
> > graduating through the ranks - learning (and being taught) as they go. To
> > place a very high obstacle in their way right at the start is completely
> > counter-productive.
>
> And please be assured that no such obstacle is in the submitters way.
> Most patches are accepted without the submitter actually reviewing any
> other patches.

I'm glad to hear it - but I'm talking about the perception, not the fact. When 
occasionally submitters ask if their patch is going to be included, they will 
usually get a response suggesting they review other patches. That will only 
strengthen the perception.

This discussion started because the feeling was expressed that it was 
difficult to get patches accepted and that new core developers were not being 
found. I would love to contribute more to the development of Python - I owe 
it a lot - but from where I stand (which is most definitely not where you 
stand) I can't see how to do that in a productive and rewarding way.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread Phil Thompson
On Tuesday 06 March 2007 6:00 am, Martin v. Löwis wrote:
> Phil Thompson schrieb:
> >>> Any ideas for fixing this problem?
> >>
> >> A better patch-tracker, better procedures for reviewing patches
> >> surrounding this new tracker, one or more proper dvcs's for people to
> >> work off of. I'm not sure about 'identifying core developers' as we're
> >> all volunteers, with dayjobs for the most part, and only a few people
> >> seem to care enough about python as a whole.
> >
> > I don't think that that is true. I think a lot of people care, but many
> > can't do anything about it because the barrier to entry is too great.
>
> He was talking about the committers specifically who don't care about
> Python as-a-whole, and I think this is true. But I also believe that
> many contributors don't "care" about Python as-a-whole, in the sense
> that they are uninterested in learning about implementation details of
> libraries they will never use. What they do care about is the problems
> they have, and they do contribute patches for them.
>
> >> While submitting patches is good, there's a lot more to it than just
> >> submitting the 5-line code change to fix a bug/feature, and reviewing
> >> takes a lot of time and effort.
> >
> > So there is something wrong there as well.
> >
> >> I don't think it's unreasonable to ask for
> >> help from the submitters like we do, or ask them to write tests and docs
> >> and such.
> >
> > Of course it's not unreasonable. I would expect to be told that a patch
> > must have tests and docs before it will be finally accepted. However,
> > before I add those things to the patch I would like some timely feedback
> > from those with more experience that my patch is going in the right
> > direction.
>
> This cannot work. It is very difficult to review a patch to fix a
> presumed bug if there is no test case. You might not be able to
> reproduce the bug without a test case at all - how could you then
> know whether the patch actually fixes the bug?

Please read what I said again. Yes, a patch must be reviewed before 
submission. Yes, a patch when submitted must include docs and test cases. I'm 
talking about the less formal process leading up to that point. The less 
formal process has a much lower barrier to entry, requires much less effort 
by the "reviewer", is the period during which the majority of the teaching 
happens, and will result in a better quality final patch that will require 
less effort to be put in to the final, formal review.

> So I really think patches should be formally complete before being
> submitted. This is an area were anybody can review: you don't need
> to be an expert to see that no test cases are contributed to a
> certain patch.
>
> If you really want to learn and help, review a few patches, to see
> what kinds of problems you detect, and then post your findings to
> python-dev. People then will comment on whether they agree with your
> review, and what additional changes they like to see.

Do you think this actually happens in practice? There is no point sticking to 
a process, however sensible, if it doesn't get used.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread Phil Thompson
On Tuesday 06 March 2007 6:15 am, Raymond Hettinger wrote:
> [Phil Thompson]
>
> > I think a lot of people care, but many can't
> > do anything about it because the barrier to entry is too great.
>
> Do you mean commit privileges?  ISTM, those tend to be
> handed out readily to people who assert that they have good use for them.
> Ask the Georg-bot how readily he was accepted and coached.  IMO,
> his acceptance was a model that all open source projects should aspire to.
>
> If you meant something else like knowing how to make a meaningful patch,
> then you've got a point.  It takes a while to learn your way around the
> source tree and to learn the inter-relationships between the moving parts.
> That is just the nature of the beast.

I meant the second. While that may be the nature of the beast, it doesn't mean 
that the situation can't be improved.

> [MvL]
>
> >> While submitting patches is good, there's a lot more to it than just
> >> submitting the 5-line code change to fix a bug/feature, and reviewing
> >> takes a lot of time and effort.
>
> [Phil]
>
> > So there is something wrong there as well.
>
> I have no idea what you're getting at. Martin's comment seems
> accurate to me.  Unless it is a simple typo/doc fix, it takes
> some time to assess whether the bug is real (some things are bugs
> only in the eye of the submitter) and whether the given fix is the
> right thing to do.
>
> Of course, automatic acceptance of patches would be a crummy idea.
> There has been no shortage of patches complete with docs and tests
> that were simply not the right thing to do.

My point is simply that the effort required to review patches seems to be a 
problem. Perhaps the reasons for that need to be looked at and the process 
changed so that it is more effective. At the moment people just seem to be 
saying "that's the way it is because that's the way it's got to be".

> [Phil]
>
> > The process needs
> > to keep people involved in it - at the moment submitting a patch is
> > fire-and-forget.
>
> Such is the nature of a system of volunteers.  If we had full-time people,
> it could be a different story.  IMO, given an 18 month release cycle,
> it is perfectly acceptable for a patch to sit for a while until someone
> with the relevant expertise can address it.  Even with tests and docs,
> patch acceptance is far from automatic.  That being said, I think history
> has shown that important bugs get addressed and put into bug fix releases
> without much loss of time.  When Py2.5.1 goes out, I expect that all known,
> important bugs will have been addressed and that's not bad.

Then perhaps getting a full-time person should be taken seriously.

Phil


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread Phil Thompson
On Tuesday 06 March 2007 1:42 pm, Jeremy Hylton wrote:
> On 3/6/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
> > Raymond Hettinger schrieb:
> > > [Phil Thompson]
> > >
> > >> I think a lot of people care, but many can't
> > >> do anything about it because the barrier to entry is too great.
> > >
> > > Do you mean commit privileges?  ISTM, those tend to be
> > > handed out readily to people who assert that they have good use for
> > > them. Ask the Georg-bot how readily he was accepted and coached.  IMO,
> > > his acceptance was a model that all open source projects should aspire
> > > to.
> >
> > Indeed. For me, it wasn't "hard" to get tracker rights. I reviewed some
> > patches, commented on bugs, posted suggestions to python-dev etc. When I
> > asked about tracker rights on python-dev, they were given to me.
> > Then, it wasn't "hard" to get commit rights. I contributed some stuff,
> > and after a while I asked about commit rights on python-dev, and they
> > were given to me on condition that I still let a core dev review intended
> > changes.
> >
> > As far as I recall, there has been nearly no one who asked for commit
> > rights recently, so why complain that the entry barrier is too great?
> > Surely you cannot expect python-dev to go out and say "would you like to
> > have commit privileges?"...
>
> You can ask whether we should have a plan for increasing the number of
> developers, actively seeking out new developers, and mentoring people
> who express interest.  Would the code be better if we had more good
> developers working on it?  Would we get more bugs fixed and patches
> closed?  If so, it wouldn't hurt to have some deliberate strategy for
> bringing new developers in.  I can easily imagine someone spending a
> lot of time mentoring and a little time coding, but having a bigger
> impact than someone who only wrote code.

Thank you - that's exactly what I'm trying to say.

Phil


Re: [Python-Dev] C-API status of Python 3?

2008-03-02 Thread Phil Thompson
On Sunday 02 March 2008, Alex Martelli wrote:
> On Sun, Mar 2, 2008 at 10:39 AM, Gregory P. Smith <[EMAIL PROTECTED]> wrote:
> > On 3/2/08, Christian Heimes <[EMAIL PROTECTED]> wrote:
> > > Alex Martelli wrote:
> > > > Yep, but please do keep the PyUnicode for str and PyString for bytes
> > > > (as macros/synonnyms of PyStr and PyBytes if you want!-) to help the
> > > > task of porting existing extensions... the bytearray functions should
> > > > no doubt be PyBytearray, though.
> > >
> > > Yeah, we've already planed to keep PyUnicode as prefix for str type
> > > functions. It makes perfectly sense, not only from the historical point
> > > of view.
> > >
> > > But for PyString I planed to rename the prefix to PyBytes. In my
> > > opinion we are going to regret it, when we keep too many legacy names
> > > from 2.x. In order to make the migration process easier I can add a
> > > header file that provides PyString_* functions as aliases for PyBytes_*
> >
> > +1 on only doing this via a header that must be explicitly included by
> > modules wanting the compatibility names.
>
> OK, as long as it's also supplied (and presumably empty) for 2.6 -- my
> key concern is faciitating the maintenance of a single codebase for
> C-coded Python extensions that can be compiled for both 2.6 and 3.0.
> (I'm also thinking of SWIG and similar utilities, but those can
> probably best be tweaked to emit rather different C code for the two
> cases; still, that C code will also include some C snippets hand-coded
> by the extension author/maintainer, e.g. via SWIG typemaps &c, so
> easing the "single codebase" approach may help there too).
>
> I don't think we want to go the route of code translators/generators
> for C-coded Python extensions (the way we do for Python code via
> 2to3), and the fewer #if's and #define's C extension
> authors/maintainers are required to devise (in order to support both
> 2.6 and 3.0), the likelier it is that we'll see 3.0 support in popular
> C-coded Python extensions sooner rather than later.

Speaking for myself, this isn't going to make any difference as pre-2.6 
versions of Python still need to be supported.

More of a pain is if 2.6 introduces source level incompatibilities with 2.5 
(as 2.5 did with 2.4).

Phil


Re: [Python-Dev] Reminder: last alphas next Wednesday 07-May-2008

2008-05-02 Thread Phil Thompson
On Friday 02 May 2008, Nick Coghlan wrote:
> Jeroen Ruigrok van der Werven wrote:
> > -On [20080502 10:50], Steve Holden ([EMAIL PROTECTED]) wrote:
> >> Groan. Then everyone else realizes what a "great idea" this is, and we
> >> see ~/Perl/, ~/Ruby/, ~/C# (that'll screw the Microsoft users, a
> >> directory with a comment marker in its name), ~/Lisp/ and the rest? I
> >> don't think people would thank us for that in the long term.
> >
> > I'm +1 on just using $HOME/.local, but otherwise $HOME/.python makes
> > sense too. $HOME/.python.d doesn't do it for me, too clunky (and hardly
> > used if I look at my .files in $HOME).
> >
> > But I agree with Steve that it should be a hidden directory.
>
> This sums up my opinion pretty well. Hidden by default, but easy to
> expose (e.g. via a local -> .local symlink) for the more experienced
> users that want it more easily accessible.

But you can't be serious about using such a generic word as "local" as the 
name??? At least include the letters "p" and "y" somewhere.

Phil


Re: [Python-Dev] Python 3.0.1

2009-01-30 Thread Phil Thompson
On Fri, 30 Jan 2009 07:03:03 -0500, Steve Holden 
wrote:
> Antoine Pitrou wrote:
>> Raymond Hettinger  rcn.com> writes:
>>> * If you're thinking that shelves have very few users and that
>>>   3.0.0 has had few adopters, doesn't that mitigate the effects
>>>   of making a better format available in 3.0.1?  Wouldn't this
>>>   be the time to do it?
>> 
>> There was already another proposal for an sqlite-based dbm module, you
>> may
>> want to synchronize with it:
>> http://bugs.python.org/issue3783
>> 
>> As I see it, the problem with introducing it in 3.0.1 is that we would
be
>> rushing in a new piece of code without much review or polish.
> 
> Again
> 
>> Also, there are
>> only two release blockers left for 3.0.1, so we might just finish those
>> and
>> release, then concentrate on 3.1.
>> 
> Seems to me that every deviation from the policy introduced as a result
> for the True/False debacle leads to complications and problems. There's
> no point having a policy instigated for good reasons if we can ignore
> those reasons on a whim.
> 
> So to my mind, ignoring the policy *is* effectively declaring 3.0 to be,
> well, if not a dead parrot then at least a rushed release.
> 
> Most consistently missing from this picture has been effective
> communications (in both directions) with the user base. Consequently
> nobody knows whether specific features are in serious use, and nobody
> knows whether 3.0 is intended to be a stable base for production
> software or not. Ignoring users, and acting as though we know what they
> are doing and what they want, is not going to lead to better acceptance
> of future releases.

My 2 cents as a user...

I wouldn't consider v3.0.n (where n is small) for use in production. v3.1
however implies (to me at least) a level of quality where I would be
disappointed if it wasn't production ready.

Therefore I would suggest the main purpose of any v3.0.1 release is to make
sure that v3.1 is up to scratch.

Phil


[Python-Dev] super_getattro() Behaviour

2005-04-13 Thread Phil Thompson
In PyQt, wrapped types implement lazy access to the type dictionary
through tp_getattro. If the normal attribute lookup fails, then private
tables are searched and the attribute (if found) is created on the fly and
returned. It is also put into the type dictionary so that it is found next
time through the normal lookup. This is done to speed up the import of,
and the memory consumed by, the qt module which contains thousands of
class methods.

This all works fine - except when super is used.

The implementation of super_getattro() doesn't use the normal attribute
lookup (ie. doesn't go via tp_getattro). Instead it walks the MRO
hierarchy itself and searches instance dictionaries explicitly. This means
that attributes that have not yet been referenced (ie. not yet been cached
in the type dictionary) will not be found.
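A rough pure-Python analogue of the situation (all names here are hypothetical; the real PyQt machinery is C). A metaclass `__getattr__` plays the role of the `tp_getattro` fallback, and instance attribute access plays the role of `super_getattro()`, which walks the MRO dictionaries directly and so never triggers the fallback:

```python
class LazyMeta(type):
    # Stand-in for PyQt's private tables of not-yet-created methods.
    _pending = {"greet": lambda self: "hello"}

    def __getattr__(cls, name):
        # Called only when the normal type lookup fails, much like a
        # tp_getattro fallback.  Create the attribute and cache it in
        # the type dictionary so the next lookup finds it directly.
        try:
            value = LazyMeta._pending[name]
        except KeyError:
            raise AttributeError(name)
        setattr(cls, name, value)
        return value

class A(metaclass=LazyMeta):
    pass

# Instance lookup searches only the MRO dictionaries and bypasses the
# metaclass fallback, just as super_getattro() bypasses tp_getattro:
try:
    A().greet
except AttributeError:
    print("not found via instance lookup")

A.greet             # class access triggers the fallback and caches it
print(A().greet())  # now found: prints "hello"
```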

Questions...

1. What is the reason why it doesn't go via tp_getattro? Bug or feature?

2. A possible workaround is to subvert the ma_lookup function of the type
dictionary after creating the type to do something similar to what my
tp_getattro function is doing. Are there any inherent problems with that?

3. Why, when creating a new type and eventually calling type_new() is a
copy of the dictionary passed in made? Why not take a reference to it?
This would allow a dict sub-class to be used as the type dictionary. I
could then implement a lazy-dict sub-class with the behaviour I need.

4. Am I missing a more correct/obvious technique? (There is no need to
support classic classes.)

Many thanks,
Phil



Re: [Python-Dev] super_getattro() Behaviour

2005-04-14 Thread Phil Thompson
> "Phil Thompson" <[EMAIL PROTECTED]> writes:
>
>> In PyQt, wrapped types implement lazy access to the type dictionary
>> through tp_getattro. If the normal attribute lookup fails, then private
>> tables are searched and the attribute (if found) is created on the fly
>> and
>> returned. It is also put into the type dictionary so that it is found
>> next
>> time through the normal lookup. This is done to speed up the import of,
>> and the memory consumed by, the qt module which contains thousands of
>> class methods.
>>
>> This all works fine - except when super is used.
>>
>> The implementation of super_getattro() doesn't use the normal attribute
>> lookup (ie. doesn't go via tp_getattro). Instead it walks the MRO
>> hierarchy itself and searches instance dictionaries explicitly. This
>> means
>> that attributes that have not yet been referenced (ie. not yet been
>> cached
>> in the type dictionary) will not be found.
>>
>> Questions...
>>
>> 1. What is the reason why it doesn't go via tp_getattro?
>
> Because it wouldn't work if it did?  I'm not sure what you're
> suggesting here.

I'm asking for an explanation for the current implementation. Why wouldn't
it work if it got the attribute via tp_getattro?

>> 2. A possible workaround is to subvert the ma_lookup function of the
>> type
>> dictionary after creating the type to do something similar to what my
>> tp_getattro function is doing.
>
> Eek!

Agreed.

>> Are there any inherent problems with that?
>
> Well, I think the layout of dictionaries is fiercely private.  IIRC,
> the only reason it's in a public header is to allow some optimzations
> in ceval.c (though this isn't at all obvious from the headers, so
> maybe I'm mistaken).

Yes, having looked in more detail at the dict implementation I really
don't want to go there.

>> 3. Why, when creating a new type and eventually calling type_new() is a
>> copy of the dictionary passed in made?
>
> I think this is to prevent changes to tp_dict behind the type's back.
> It's important to keep the dict and the slots in sync.
>
>> Why not take a reference to it?  This would allow a dict sub-class
>> to be used as the type dictionary. I could then implement a
>> lazy-dict sub-class with the behaviour I need.
>
> Well, not really, because super_getattro uses PyDict_GetItem, which
> doesn't respect subclasses...

I suppose I was hoping for more C++ like behaviour.

>> 4. Am I missing a more correct/obvious technique? (There is no need to
>> support classic classes.)
>
> Hum, I can't think of one, I'm afraid.
>
> There has been some vague talk of having a tp_lookup slot in
> typeobjects, so
>
> PyDict_GetItem(t->tp_dict, x);
>
> would become
>
> t->tp_lookup(x);
>
> (well, ish, it might make more sense to only do that if the dict
> lookup fails).

That would be perfect. I can't Google any reference to a discussion - can
you point me at something?

> For now, not being lazy seems your only option :-/ (it's what PyObjC
> does).

Not practical I'm afraid. I think I can only document that super doesn't
work in this context.

Thanks,
Phil



Re: [Python-Dev] super_getattro() Behaviour

2005-04-14 Thread Phil Thompson
>>>> 4. Am I missing a more correct/obvious technique? (There is no need to
>>>> support classic classes.)
>>>
>>> Hum, I can't think of one, I'm afraid.
>>>
>>> There has been some vague talk of having a tp_lookup slot in
>>> typeobjects, so
>>>
>>> PyDict_GetItem(t->tp_dict, x);
>>>
>>> would become
>>>
>>> t->tp_lookup(x);
>>>
>>> (well, ish, it might make more sense to only do that if the dict
>>> lookup fails).
>>
>> That would be perfect. I can't Google any reference to a discussion -
>> can
>> you point me at something?
>
> Well, most of the discussion so far has been in my head :)
>
> There was a little talk of it in the thread "can we stop pretending
> _PyType_Lookup is internal" here and possibly on pyobjc-dev around the
> same time.
>
> I'm not that likely to work on it soon -- I have enough moderately
> complex patches to core Python I'm persuading people to think about
> :-/.

Anything I can do to help push it along?

Phil



Re: [Python-Dev] Switch to MS VC++ 2005 ?!

2006-02-27 Thread Phil Thompson
On Monday 27 February 2006 5:51 pm, Alex Martelli wrote:
> On 2/27/06, M.-A. Lemburg <[EMAIL PROTECTED]> wrote:
> > Microsoft has recently released their express version of the Visual C++.
> > Given that this version is free for everyone, wouldn't it make sense
> > to ship Python 2.5 compiled with this version ?!
> >
> > http://msdn.microsoft.com/vstudio/express/default.aspx
> >
> > I suppose this would make compiling extensions easier for people
> > who don't have a standard VC++ .NET installed.
>
> It would sure be nice for people like me with "occasional dabbler in
> Windows" status, so, selfishly, I'd be all in favor.  However...:
>
> What I hear from the rumor mill (not perhaps a reliable source) is a
> bit discouraging about the stability of VS2005 (e.g. internal
> rebellion at MS in which groups which need to ship a lot of code
> pushed back against any attempt to make them use VS2005, and managed
> to win the internal fight and stick with VS2003), but I don't know if
> any such worry applies to something as simple as the mere compilation
> of C code...

...but some extension modules are 500,000 lines of C++.

Phil


Re: [Python-Dev] The "i" string-prefix: I18n'ed strings

2006-04-08 Thread Phil Thompson
On Saturday 08 April 2006 1:05 am, Barry Warsaw wrote:
> On Sat, 2006-04-08 at 00:45 +0200, "Martin v. Löwis" wrote:
> > *Never* try to do i18n that way. Don't combine fragments through
> > concatenation. Instead, always use placeholders.
>
> Martin is of course absolutely right!
>
> > If you have many fragments, the translator gets the challenge of
> > translating "dollars". Now, this might need to be translated differently
> > in different contexts (and perhaps even depending on the value of
> > balance); the translator must always get the complete message
> > as a single piece.
>
> Plus, if you have multiple placeholders, the order may change in some
> translations.

I haven't been following this discussion, so something similar may already 
have been mentioned.

The way Qt handles this is to use %1, %2 etc as placeholders. The numbers 
refer to the arguments (the order of which is obviously fixed by the 
programmer). The translator determines the order in which the placeholders 
appear in the format string.
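An illustrative analogue of Qt's numbered-placeholder scheme using Python's own `{0}`/`{1}` format fields (the translation string below is hypothetical): the programmer fixes the argument order once, while each translation is free to reorder the placeholders in its own format string.

```python
args = (3, "/tmp")

english = "{0} files copied to {1}"
# A hypothetical translation that needs the arguments the other way round:
german = "Nach {1} wurden {0} Dateien kopiert"

print(english.format(*args))  # 3 files copied to /tmp
print(german.format(*args))   # Nach /tmp wurden 3 Dateien kopiert
```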

Phil


[Python-Dev] Inconsistent Use of Buffer Interface in stringobject.c

2005-10-24 Thread Phil Thompson
I'm implementing a string-like object in an extension module and trying to 
make it as interoperable with the standard string object as possible. To do 
this I'm implementing the relevant slots and the buffer interface. For most 
things this is fine, but there are a small number of methods in 
stringobject.c that don't use the buffer interface - and I don't understand 
why.

Specifically...

string_contains() doesn't which means that...

MyString("foo") in "foobar"

...doesn't work.
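The original discussion concerns the 2.x buffer interface, but the left-operand restriction is easy to see from pure Python in modern 3.x, where `in` insists on a real str (class name hypothetical):

```python
class MyString:
    def __init__(self, s):
        self._s = s
    def __str__(self):
        return self._s

# str.__contains__ does not fall back to any str-like protocol:
try:
    MyString("foo") in "foobar"
except TypeError as e:
    print("TypeError:", e)

# Only an explicit conversion works:
print(str(MyString("foo")) in "foobar")  # True
```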

s.join(sequence) only allows sequence to contain string or unicode objects.

s.strip([chars]) only allows chars to be a string or unicode object. Same for 
lstrip() and rstrip().

s.ljust(width[, fillchar]) only allows fillchar to be a string object (not 
even a unicode object). Same for rjust() and center().

Other methods happily allow types that support the buffer interface as well as 
string and unicode objects.

I'm happy to submit a patch - I just wanted to make sure that this behaviour 
wasn't intentional for some reason.

Thanks,
Phil


Re: [Python-Dev] Inconsistent Use of Buffer Interface in stringobject.c

2005-10-24 Thread Phil Thompson
On Monday 24 October 2005 7:39 pm, Guido van Rossum wrote:
> On 10/24/05, M.-A. Lemburg <[EMAIL PROTECTED]> wrote:
> > Guido van Rossum wrote:
> > > A concern I'd have with fixing this is that Unicode objects also
> > > support the buffer API. In any situation where either str or unicode
> > > is accepted I'd be reluctant to guess whether a buffer object was
> > > meant to be str-like or Unicode-like. I think this covers all the
> > > cases you mention here.
> >
> > This situation is a little better than that: the buffer
> > interface has a slot called getcharbuffer which is what
> > the string methods use in case they find that a string
> > argument is not of type str or unicode.
>
> I stand corrected!
>
> > As first step, I'd suggest to implement the getcharbuffer
> > slot. That will already go a long way.
>
> Phil, if anything still doesn't work after doing what Marc-Andre says,
> those would be good candidates for fixes!

I have implemented getcharbuffer - I was highlighting those methods where the 
getcharbuffer implementation was ignored.

I'll put a patch together.

Phil


Re: [Python-Dev] Inconsistent Use of Buffer Interface in stringobject.c

2005-10-25 Thread Phil Thompson
On Monday 24 October 2005 7:39 pm, Guido van Rossum wrote:
> On 10/24/05, M.-A. Lemburg <[EMAIL PROTECTED]> wrote:
> > Guido van Rossum wrote:
> > > A concern I'd have with fixing this is that Unicode objects also
> > > support the buffer API. In any situation where either str or unicode
> > > is accepted I'd be reluctant to guess whether a buffer object was
> > > meant to be str-like or Unicode-like. I think this covers all the
> > > cases you mention here.
> >
> > This situation is a little better than that: the buffer
> > interface has a slot called getcharbuffer which is what
> > the string methods use in case they find that a string
> > argument is not of type str or unicode.
>
> I stand corrected!
>
> > As first step, I'd suggest to implement the getcharbuffer
> > slot. That will already go a long way.
>
> Phil, if anything still doesn't work after doing what Marc-Andre says,
> those would be good candidates for fixes!

The patch is now on SF, #1337876.

Phil


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-26 Thread Phil Thompson via Python-Dev

On 26/12/2020 10:52, Ronald Oussoren via Python-Dev wrote:
On 25 Dec 2020, at 23:03, Nelson, Karl E. via Python-Dev 
 wrote:


I was directed to post this request to the general Python development 
community so hopefully this is on topic.


One of the weaknesses of the PyUnicode implementation is that the type 
is concrete and there is no option for an abstract proxy string to a 
foreign source.  This is an issue for an API like JPype in which 
java.lang.Strings are passed back from Java.   Ideally these would be 
a type derived from the Unicode type str, but that requires 
transferring the memory immediately from Java to Python even when that 
handle is large and will never be accessed from within Python.  For 
certain operations like XML parsing this can be prohibitive, so 
instead of returning a str we return a JString.   (There is a separate 
issue that Java method names and Python method names conflict so 
direct inheritance creates some problems.)


The JString type can of course be transferred to Python space at any 
time as both Python Unicode and Java string objects are immutable.  
However the CPython API which takes strings only accepts the Unicode 
type objects which have a concrete implementation.  It is possible to 
extend strings, but those extensions do not allow for proxying as far 
as I can tell.  Thus there is no option currently to proxy to a string 
representation in another language.  The concept of using the duck-typed 
``__str__`` method is insufficient as this indicates that an object 
can become a string, rather than “this object is effectively a string” 
for the purposes of the CPython API.


One way to address this is to use the currently outdated READY concept to 
extend Unicode objects to other languages.  A class like JString would 
be an unready Unicode object which when READY is called transfers the 
memory from Java, sets up the flags and sets up a pointer to the code 
point representation.  Unfortunately the READY concept is scheduled 
for removal and thus the chance to address the needs for proxying a 
Unicode to another languages representation may be limited. There may 
be other methods to accomplish this without using the concept of 
READY.  So long as access to the code points go through the Unicode 
API and the Unicode object can be extended such that the actual code 
points may be located outside of the Unicode object then a proxy can 
still be achieved if there are hooks in it to decided when a transfer 
should be performed.   Generally the transfer request only needs to 
happen once  but the key issue being that the number of code points 
(nor the kind of points) will not be known until the memory is 
transferred.


Java has much the same problem.   Although they defined an interface 
class “java.lang.CharacterArray” the actually “java.lang.String” class 
is concrete and almost all API methods take a String rather than the 
base interface even when the base interface would have been adequate.  
Thus just like Python has difficulty treating a foreign string class 
as it would a native one, Java cannot treat a Python string as a native 
one either.  So Python strings get represented as the CharacterArray type, 
which effectively limits its use greatly.


Summary:

A String proxy would need the address of the memory in the “wstr” slot 
though the code points may be char[], wchar[] or int[] depending on the 
representation in the proxy.
API calls to interpret the data would need to check to see if the data 
is transferred first, if not it would call the proxy dependent 
transfer method which is responsible for creating a block of code 
points and set up flags (kind, ascii, ready, and compact).
The memory block allocated would need to call the proxy dependent 
destructor to clean up when the string is done.
It is not clear if this would have impact on performance.   Python 
already has the concept of a string which needs actions before it can 
be accessed, but this is scheduled for removal.


Are there any plans currently to address the concept of a proxy string 
in PyUnicode API?


I have a similar problem in PyObjC which proxies Objective-C classes
to Python (and the other way around). For interop with Python code I
proxy Objective-C strings using a subclass of str() that is eagerly
populated even if, as you mention as well, a lot of these proxy objects
are never used in a context where the str() representation is
important.  A complicating factor for me is that Objective-C strings
are, in general, mutable which can lead to interesting behaviour.
Another disadvantage of subclassing str() for foreign string types is
that this removes the proxy class from their logical location in the
class hierarchy (in my case the proxy type is not a subclass of the
proxy type for NSObject, even though all Objective-C classes inherit
from NSObject).

I primarily chose to subclass the str type because that enables using
the NSString proxy type with C functions/methods that expect a string
argument.
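A minimal sketch of the eager str-subclass proxy approach described above (all names hypothetical; a real proxy would wrap an Objective-C or Java string handle rather than bytes):

```python
class ForeignString:
    """Stand-in for a string handle living in another runtime."""
    def __init__(self, data: bytes):
        self.data = data

class StringProxy(str):
    # Eagerly-populated proxy: the foreign contents are copied into the
    # str at construction time, so every API expecting a str keeps
    # working -- at the cost of always paying for the transfer.
    def __new__(cls, foreign: ForeignString):
        self = super().__new__(cls, foreign.data.decode("utf-8"))
        self._foreign = foreign  # keep the foreign handle alive
        return self

p = StringProxy(ForeignString(b"hello"))
print(p == "hello", isinstance(p, str))  # True True
```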

[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Phil Thompson via Python-Dev

On 28/12/2020 02:07, Inada Naoki wrote:

On Sun, Dec 27, 2020 at 8:20 PM Ronald Oussoren via Python-Dev
 wrote:


On 26 Dec 2020, at 18:43, Guido van Rossum  wrote:

On Sat, Dec 26, 2020 at 3:54 AM Phil Thompson via Python-Dev 
 wrote:




That wouldn’t be a solution for code using the PyUnicode_* APIs of 
course, nor Python code explicitly checking for the str type.


In the end a new string “kind” (next to the 1, 2 and 4 byte variants) 
where callbacks are used to provide data might be the most pragmatic.  
That will still break code peaking directly in the the PyUnicodeObject 
struct, but anyone doing that should know that that is not a stable 
API.




I had a similar idea for lazy loading or lazy decoding of Unicode 
objects.

But I have rejected the idea and proposed to deprecate
PyUnicode_READY() because of the balance between merits and
complexity:

* Simplifying the Unicode object may introduce more room for
optimization because Unicode is the essential type for Python. Since
Python is a dynamic language, a huge amount of str comparison happened
in runtime compared with static languages like Java and Rust.
* Third parties may forget to check PyErr_Occurred() after APIs like
PyUnicode_Contains or PyUnicode_Compare when the author knows all
operands are exact Unicode type.

Additionally, if we introduce the customizable lazy str object, it's
very easy to release GIL during basic Unicode operations. Many third
parties may assume PyUnicode_Compare doesn't release GIL if both
operands are Unicode objects. It will produce bugs hard to find and
reproduce.


I would have no problem with the protocol stating that the GIL must not 
be released by "foreign" unicode implementations.



So I'm +1 to make Unicode simple by removing PyUnicode_READY(), and -1
to make Unicode complicated by adding customizable callback for lazy
population.

Anyway, I am OK to un-deprecate PyUnicode_READY() and make it no-op
macro since Python 3.12.
But I don't know how many third-parties use it properly, because
legacy Unicode objects are very rare already.


For me lazy population might not be enough (as I'm not sure precisely 
what you mean by it). I would like my foreign unicode 
thing to be used as the storage.


For example (where text() returns a unicode object with a foreign 
kind)...


some_text = an_editor.text()
more_text = another_editor.text()

if some_text == more_text:
print("The text is the same")

...would not involve any conversions at all. The following would require 
a conversion...


if some_text == "literal text":

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZSPNNLM25FRIEK2KYN5JORIR76PZH22N/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Enhancement request for PyUnicode proxies

2020-12-28 Thread Phil Thompson via Python-Dev

On 28/12/2020 11:27, Inada Naoki wrote:

On Mon, Dec 28, 2020 at 7:22 PM Phil Thompson
 wrote:


> So I'm +1 to make Unicode simple by removing PyUnicode_READY(), and -1
> to make Unicode complicated by adding customizable callback for lazy
> population.
>
> Anyway, I am OK to un-deprecate PyUnicode_READY() and make it no-op
> macro since Python 3.12.
> But I don't know how many third-parties use it properly, because
> legacy Unicode objects are very rare already.

For me lazy population might not be enough (as I'm not sure precisely
what you mean by it). I would like to be able to use my foreign 
unicode

thing to be used as the storage.

For example (where text() returns a unicode object with a foreign
kind)...

some_text = an_editor.text()
more_text = another_editor.text()

if some_text == more_text:
 print("The text is the same")

...would not involve any conversions at all.


So you mean custom internal representation of exact Unicode object?

Then, I am more strong -1, sorry.
I cannot believe that its merits are bigger than the costs of its 
complexity.

If 3rd party wants to use completely different internal
representation, it must not be a unicode object at all.


I would have thought that an object was defined by its behaviour rather 
than by any particular implementation detail. However I completely 
understand the desire to avoid additional complexity of the 
implementation.


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/D4U7TWKNP347HG37H56EPVJHUNRET7QX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Understanding "is not safe" in typeobject.c

2021-02-01 Thread Phil Thompson via Python-Dev

Hi,

I'm trying to understand the purpose of the check in tp_new_wrapper() of 
typeobject.c that results in the "is not safe" exception.


I have the following class hierarchy...

B -> A -> object

...where B and A are implemented in C. Class A has an implementation of 
tp_new which does a few context-specific checks before calling 
PyBaseObject_Type.tp_new() directly to actually create the object. This 
works fine.


However I want to allow class B to be used with a Python mixin. A's 
tp_new() then has to do something similar to super().__new__(). I have 
tried to implement this by locating the type object after A in B's MRO, 
getting its '__new__' attribute and calling it (using PyObject_Call()) 
with B passed as the only argument. However I then get the "is not safe" 
exception, specifically...


TypeError: object.__new__(B) is not safe, use B.__new__()

I take the same approach for __init__() and that works fine.

If I comment out the check in tp_new_wrapper() then everything works 
fine.
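The same guard can be triggered from pure Python with a builtin type, which shows the check in isolation:

```python
# tp_new_wrapper() refuses object.__new__(T) when T's most derived
# non-heap base overrides tp_new -- the check being discussed here.
try:
    object.__new__(dict)
except TypeError as e:
    print(e)  # object.__new__(dict) is not safe, use dict.__new__()
```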


So, am I doing something unsafe? If so, what?

Or, is the check at fault in not allowing the case of a C extension type 
with its own tp_new?


Thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HRGDEMURCJ5DSNEPMQPQR3R7VVDFA4ZX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 01/02/2021 23:50, Greg Ewing wrote:

On 2/02/21 12:13 am, Phil Thompson via Python-Dev wrote:

TypeError: object.__new__(B) is not safe, use B.__new__()


It's not safe because object.__new__ doesn't know about any
C-level initialisation that A or B need.


But A.__new__ is calling object.__new__ and so can take care of its own 
needs after the latter returns.



At the C level, there is always a *single* inheritance hierarchy.


Why?


The right thing is for B's tp_new to directly call A's tp_new,
which calls object's tp_new.


I want my C-implemented class's __new__ to support cooperative 
multi-inheritance so my A class cannot assume that object.__new__ is the 
next in the MRO.


I did try to call the next-in-MRO's tp_new directly (rather than calling 
its __new__ attribute) but that gave me recursion errors.



Don't worry about Python-level multiple inheritance; the
interpreter won't let you create an inheritance structure
that would mess this up.


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GZ2RF7TJ6MXDODPWCJB3PDC2Z3VDSQIQ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 01/02/2021 19:06, Guido van Rossum wrote:

That code is quite old. This comment tries to explain it:
```
/* Check that the use doesn't do something silly and unsafe like
   object.__new__(dict). To do this, we check that the
most derived base that's not a heap type is this type. */
```


I understand what it is checking, but I don't understand why it is 
"silly and unsafe".


I think you may have to special-case this and arrange for B.__new__() 
to be

called, like it or not.


But it's already been called. The check fails when trying to 
subsequently call object.__new__().


(If you want us to change the code, please file a bpo bug report. I 
know

that's no fun, but it's the way to get the right people involved.)


Happy to do that but I first wanted to check if I was doing something 
"silly" - I'm still not sure.


Phil


On Mon, Feb 1, 2021 at 3:27 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:


Hi,

I'm trying to understand the purpose of the check in tp_new_wrapper() 
of

typeobject.c that results in the "is not safe" exception.

I have the following class hierarchy...

B -> A -> object

...where B and A are implemented in C. Class A has an implementation 
of

tp_new which does a few context-specific checks before calling
PyBaseObject_Type.tp_new() directly to actually create the object. 
This

works fine.

However I want to allow class B to be used with a Python mixin. A's
tp_new() then has to do something similar to super().__new__(). I have
tried to implement this by locating the type object after A in B's 
MRO,
getting its '__new__' attribute and calling it (using 
PyObject_Call())
with B passed as the only argument. However I then get the "is not 
safe"

exception, specifically...

TypeError: object.__new__(B) is not safe, use B.__new__()

I take the same approach for __init__() and that works fine.

If I comment out the check in tp_new_wrapper() then everything works
fine.

So, am I doing something unsafe? If so, what?

Or, is the check at fault in not allowing the case of a C extension 
type

with its own tp_new?

Thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/HRGDEMURCJ5DSNEPMQPQR3R7VVDFA4ZX/
Code of Conduct: http://python.org/psf/codeofconduct/


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZNJK6BJLXCMOOZNEDGNZZKT2YG4XUV57/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-02 Thread Phil Thompson via Python-Dev

On 02/02/2021 14:18, Greg Ewing wrote:

On 3/02/21 12:07 am, Phil Thompson wrote:

On 01/02/2021 23:50, Greg Ewing wrote:

At the C level, there is always a *single* inheritance hierarchy.


Why?


Because a C struct can only extend one other C struct.


Yes - I misunderstood what you meant by "at the C level".

I want my C-implemented class's __new__ to support cooperative 
multi-inheritance


I don't think this is possible. Here is what the C API docs have to
say about the matter:

---

Note

If you are creating a co-operative tp_new (one that calls a base
type’s tp_new or __new__()), you must not try to determine what method
to call using method resolution order at runtime. Always statically
determine what type you are going to call, and call its tp_new
directly, or via type->tp_base->tp_new. If you do not do this, Python
subclasses of your type that also inherit from other Python-defined
classes may not work correctly. (Specifically, you may not be able to
create instances of such subclasses without getting a TypeError.)

---

(Source: https://docs.python.org/3.5/extending/newtypes.html)

This doesn't mean that your type can't be used in multiple inheritance,
just that __new__ methods in particular can't be cooperative.


Thanks - that's fairly definitive, although I don't really understand 
why __new__ has this particular requirement.


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FWSIZUAGD4QRZQ2ZDKLE7MP4P76EIMKL/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Understanding "is not safe" in typeobject.c

2021-02-03 Thread Phil Thompson via Python-Dev

On 02/02/2021 23:08, Greg Ewing wrote:

On 3/02/21 4:52 am, Phil Thompson wrote:

Thanks - that's fairly definitive, although I don't really understand 
why __new__ has this particular requirement.


The job of tp_new is to initialise the C struct. To do this,
it first has to initialise the fields of the struct it
inherits from, then initialise any fields of its own that
it adds, in that order.


Understood.


Initialising the inherited fields must be done by calling
the tp_new for the struct that it inherits from. You don't
want to call the tp_new of some other class that might have
got inserted into the MRO, because you have no idea what
kind of C struct it expects to get.


I had assumed that some other magic in typeobject.c (e.g. conflicting 
metaclasses) would have raised an exception before getting to this 
stage if there was a conflict.
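Some of that magic does exist: typeobject.c rejects bases whose instance 
layouts conflict at class-creation time, as this pure-Python sketch 
shows; it just cannot catch every cooperative-__new__ problem.

```python
# int and str both extend object's C struct in incompatible ways,
# so the metatype refuses to combine them when the class is created.
try:
    class Broken(int, str):
        pass
    message = None
except TypeError as exc:
    message = str(exc)

print(message)  # "multiple bases have instance lay-out conflict"
```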



Cooperative calling is a nice idea, but it requires rather
special conditions to make it work. All the methods must
have exactly the same signature, and it mustn't matter what
order they're called in. Those conditions don't apply to
__new__, especially at the C level where everything is much
more strict type-wise.


Thanks for the explanation.
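For contrast, the conditions Greg lists do hold for a conventional 
cooperative __init__ chain at the Python level, while a __new__ for a 
variable-size base such as tuple names its base statically, as the C API 
docs recommend. A sketch (the class names are illustrative):

```python
class Named:
    def __init__(self, *, name, **kwargs):
        super().__init__(**kwargs)  # cooperative: uniform (**kwargs) signature
        self.name = name

class Tagged:
    def __init__(self, *, tag, **kwargs):
        super().__init__(**kwargs)
        self.tag = tag

class Item(Named, Tagged):
    pass

item = Item(name="spam", tag=7)  # each __init__ runs once, in MRO order
assert (item.name, item.tag) == ("spam", 7)

class Point(tuple):
    def __new__(cls, x, y):
        # statically names tuple, rather than walking the MRO at runtime
        return tuple.__new__(cls, (x, y))

assert Point(1, 2) == (1, 2)
```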

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S5KRTD7M73SMBDADMMP5XM5CPT3BLGLD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-09-26 Thread Phil Thompson via Python-Dev

On 26/09/2021 05:21, Steven D'Aprano wrote:

[snip]


As for the C-API... Python is 30 years old. Has it ever had a stable
C-API before now? Hasn't it *always* been the case that C packages have
targetted a single version and need to be rebuilt from source on every
release?


No.


These are not rhetorical questions, I genuinely do not know. I *think*
that there was an attempt to make a stable C API back in 3.2 days:

https://www.python.org/dev/peps/pep-0384/

but I don't know what difference it has made to extension writers in
practice. From your description, it sounds like perhaps not as big a
difference as we would have liked.

Maybe extension writers are not using the stable C API? Is that even
possible? Please excuse my ignorance.


PyQt has used the stable ABI for many years. The main reason for using 
it is to reduce the number of wheels. The PyQt ecosystem currently 
contains 15 PyPI projects across 4 platforms supporting 5 Python 
versions (including v3.10). Without the stable ABI a new release would 
require 300 wheels. With the stable ABI it is a more manageable 60 
wheels.
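The arithmetic behind those counts, using the figures above:

```python
projects, platforms, python_versions = 15, 4, 5

# without the stable ABI: one wheel per (project, platform, Python version)
per_version_wheels = projects * platforms * python_versions
# with the stable ABI: one abi3 wheel per (project, platform)
abi3_wheels = projects * platforms

print(per_version_wheels, abi3_wheels)  # 300 60
```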


However, the stable ABI is still a second-class citizen, as it is still 
not possible (AFAIK) to specify a wheel name that doesn't need to 
explicitly include each supported Python version (rather than a minimum 
stable ABI version). In other words, it doesn't solve the OP's concern 
about unmaintained older packages being able to be installed in newer 
versions of Python (even though those packages had been explicitly 
designed to do so).


Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/RPDUNMG6RS4FBG6GODZDZ4DCB252N4VP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-09-28 Thread Phil Thompson via Python-Dev

On 27/09/2021 21:53, Brett Cannon wrote:

On Sun, Sep 26, 2021 at 3:51 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:


On 26/09/2021 05:21, Steven D'Aprano wrote:

[snip]





> These are not rhetorical questions, I genuinely do not know. I *think*
> that there was an attempt to make a stable C API back in 3.2 days:
>
> https://www.python.org/dev/peps/pep-0384/
>
> but I don't know what difference it has made to extension writers in
> practice. From your description, it sounds like perhaps not as big a
> difference as we would have liked.
>
> Maybe extension writers are not using the stable C API? Is that even
> possible? Please excuse my ignorance.

PyQt has used the stable ABI for many years. The main reason for using
it is to reduce the number of wheels. The PyQt ecosystem currently
contains 15 PyPI projects across 4 platforms supporting 5 Python
versions (including v3.10). Without the stable ABI a new release would
require 300 wheels. With the stable ABI it is a more manageable 60
wheels.

However the stable ABI is still a second class citizen as it is still
not possible (AFAIK) to specify a wheel name that doesn't need to
explicitly include each supported Python version (rather than a minimum
stable ABI version).



Actually you can do this. The list of compatible wheels for a platform
starts at CPython 3.2 when the stable ABI was introduced and goes
forward to the version of Python you are running. So you can build a
wheel file that targets the oldest version of CPython that you are
targeting and its version of the stable ABI and it is considered
forward compatible. See `python -m pip debug --verbose` for the
complete list of wheel tags that are supported for an interpreter.


Logical and it works.

Many thanks,
Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/22MU4QKR46SMFQWQFPWUIWM76JYJMJ3L/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Worried about Python release schedule and lack of stable C-API

2021-10-05 Thread Phil Thompson via Python-Dev

On 05/10/2021 07:59, Nick Coghlan wrote:

On Tue, 28 Sep 2021, 6:55 am Brett Cannon,  wrote:




On Sun, Sep 26, 2021 at 3:51 AM Phil Thompson via Python-Dev <
python-dev@python.org> wrote:



However the stable ABI is still a second class citizen as it is still
not possible (AFAIK) to specify a wheel name that doesn't need to
explicitly include each supported Python version (rather than a minimum
stable ABI version).



Actually you can do this. The list of compatible wheels for a platform
starts at CPython 3.2 when the stable ABI was introduced and goes
forward to the version of Python you are running. So you can build a
wheel file that targets the oldest version of CPython that you are
targeting and its version of the stable ABI and it is considered
forward compatible. See `python -m pip debug --verbose` for the
complete list of wheel tags that are supported for an interpreter.



I think Phil's point is a build side one: as far as I know, the process
for getting one of those more generic file names is still to build a
wheel with an overly precise name for the stable ABI declarations used,
and then rename it.

The correspondence between "I used these stable ABI declarations in my
module build" and "I can use this more broadly accepted wheel name" is
currently obscure enough that I couldn't tell you off the top of my
head how to do it, and I contributed to the design of both sides of the
equation.

Actually improving the build ergonomics would be hard (and outside
CPython's own scope), but offering a table in the stable ABI docs
giving suggested wheel tags for different stable ABI declarations
should be feasible, and would be useful to both folks renaming already
built wheels and anyone working on improving the build automation
tools.


Actually I was able to do what I wanted without renaming wheels...

Specify 'py_limited_api=True' as an argument to Extension() (using 
setuptools v57.0.0).


Specify...

[bdist_wheel]
py_limited_api = cp36

...in setup.cfg (using wheel v0.34.2).

The resulting wheel has a Python tag of 'cp36' and an ABI tag of 'abi3' 
for all platforms, which is interpreted by the current version of pip 
exactly as I want.


I'm not sure if this is documented anywhere.
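The effect on the file name follows the standard wheel naming scheme 
(distribution-version-pythontag-abitag-platformtag.whl); a sketch with a 
hypothetical file name of the kind that configuration produces:

```python
# hypothetical wheel file name of the kind the setup.cfg above produces
wheel = "PyQt5-5.15.6-cp36-abi3-manylinux1_x86_64.whl"

dist, version, py_tag, abi_tag, plat_tag = wheel[:-len(".whl")].split("-")

assert py_tag == "cp36"   # oldest CPython version targeted
assert abi_tag == "abi3"  # stable ABI: forward compatible from cp36 onwards
```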

Phil
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/XVBN3OWN5TAYAKTUYI6MEXATX3I62ZEZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [SPAM] Re: Switching to Discourse

2022-07-15 Thread Phil Thompson via Python-Dev

On 15/07/2022 16:09, Rob Boehne via Python-Dev wrote:

100% agree – dealing with 5 or more platforms for discussion groups is
a nightmare, and I tend not to follow any of them as closely for that
reason.


I agree. I don't mind having to use Discourse if I want to take part in 
a discussion but 99% of the time I just want to keep up to date with 
what is going on. In that case I want the information to come to me - I 
don't want to have to hunt for it. Can there be an RSS feed for 
everything, not just PEPs?


Phil


From: Skip Montanaro 
Date: Friday, July 15, 2022 at 9:26 AM
To: Petr Viktorin 
Cc: python-dev@python.org 
Subject: [SPAM] [Python-Dev] Re: Switching to Discourse
The discuss.python.org experiment has been going on for quite a while,
and while the platform is not without its issues, we consider it a
success. The Core Development category is busier than python-dev.
According to staff, discuss.python.org is much easier to moderate. If
you're following python-dev but not discuss.python.org, you're missing
out.

Personally, I think you are focused too narrowly and aren't seeing the
forest for the trees. Email protocols were long ago standardized. As a
result, people can use any of a large number of applications to read
and organize their email. To my knowledge, there is no standardization
amongst the various forum tools out there. I'm not suggesting discuss
is necessarily better or worse than other (often not open source)
forum tools, but each one implements its own walled garden. I'm
referring more broadly than just Python, or even Python development,
though even within the Python community it's now difficult to
manage/monitor all the various discussion sources (email, discuss,
GitHub, Stack Overflow, ...)

Get off my lawn! ;-)

Skip, kinda glad he's retired now...


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at
https://mail.python.org/archives/list/python-dev@python.org/message/5R376DBMGYMJCJTXCZPNRUBNUPV5OSAJ/
Code of Conduct: http://python.org/psf/codeofconduct/

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/PZ246BKJSWB3AQZSYMWUTX35RMWCPPQ6/
Code of Conduct: http://python.org/psf/codeofconduct/