[Python-Dev] Re: Python 3.8 problem with PySide

2019-12-08 Thread Nick Coghlan
On Fri., 6 Dec. 2019, 3:31 am Christian Tismer wrote:

> Hi guys,
>
> during the last few weeks I have been struggling quite a lot
> to make PySide run with Python 3.8 at all.
>
> The expected problems were refcounting leaks due to the changed
> handling of heap types. But in fact, the runtime behavior was
> much worse, because I always got negative refcounts!
>
> After exhaustively searching through the different 3.8 commits, I was
> able to isolate the three problems with a logarithmic (bisecting) search.
>
> The hard problem was this:
> Whenever PySide creates a new type, it crashes in PyType_Ready.
> The reason is the existence of the Py_TPFLAGS_METHOD_DESCRIPTOR
> flag.
> During the PyType_Ready call, the function mro() is called.
> This mro() call results in a negative refcount, because something
> behaves differently now that this flag is set by default on mro().
>
> When I patched this flag away during the type_new call, everything
> worked OK. I don't understand why this problem affects PySide
> at all. Here is the code that would normally be just the newType line:
>
>
> // PYSIDE-939: This is a temporary patch that circumvents the problem
> // with Py_TPFLAGS_METHOD_DESCRIPTOR until this is finally solved.
> PyObject *ob_PyType_Type = reinterpret_cast<PyObject *>(&PyType_Type);
> PyObject *mro = PyObject_GetAttr(ob_PyType_Type,
> Shiboken::PyName::mro());
> auto hold = Py_TYPE(mro)->tp_flags;
> Py_TYPE(mro)->tp_flags &= ~Py_TPFLAGS_METHOD_DESCRIPTOR;
> auto *newType = reinterpret_cast(type_new(metatype,
> args, kwds));
> Py_TYPE(mro)->tp_flags = hold;
>

Isn't this manipulating the flags in the tuple type, rather than anything
on a custom object? Or is "mro" a custom object rather than an MRO tuple?

If anything, given the combination of factors required to reproduce the
problem, I would guess that there might be a ref counting problem in the
__set_owner__ invocations when called on a new type rather than a regular
instance, and that was somehow affected by the change to increment the type
refcount in PyObject_Init rather than PyType_GenericAlloc.
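
(For context, here is a minimal Python sketch of the refcounting
invariant involved - instances of heap types own a reference to their
type. It only shows the invariant, not the 3.8 change itself, which
moved where that incref happens:)

    import sys

    class HeapType:                    # classes defined in Python are heap types
        pass

    print(sys.getrefcount(HeapType))   # some baseline value
    obj = HeapType()                   # the new instance owns a reference to its type
    print(sys.getrefcount(HeapType))   # one higher than the baseline
    del obj                            # dropping the instance releases that reference
    print(sys.getrefcount(HeapType))   # back to the baseline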

Cheers,
Nick.


Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/K5NJ47FWU4WGJRVD7VNQZGYVU5T7NQTH/


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-08 Thread Nick Coghlan
On Sat., 7 Dec. 2019, 2:08 am Victor Stinner wrote:

> On Fri., 6 Dec. 2019 at 16:00, Guido van Rossum wrote:
> > Let's try to avoid having PEP discussions in the peps tracker, period.
> > That repo's tracker is only meant to handle markup and grammar.
>
> I recall that some PEPs have been discussed at length in GitHub PRs.
> But I'm fine with keeping the discussion on mailing lists.


The line-by-line comment support and the ability to accept PRs from others
are handy sometimes.

For pre-PEPs, the approach I now like is to make a local PR in my *fork* of
the PEPs repo. Then I'll only change it into a PR against the main repo
when it's time to assign a PEP number.

Cheers,
Nick.

> Whatever
> works :-)
>
> Victor
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/LCQ4L73EETAC7QYHHJ5VSAMNVLJFJUNC/


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-08 Thread Paul Moore
On Sun, 8 Dec 2019 at 09:00, Nick Coghlan  wrote:
>
> On Sat., 7 Dec. 2019, 2:08 am Victor Stinner wrote:
>>
>> On Fri., 6 Dec. 2019 at 16:00, Guido van Rossum wrote:
>> > Let's try to avoid having PEP discussions in the peps tracker, period. 
>> > That repo's tracker is only meant to handle markup and grammar.
>>
>> I recall that some PEPs have been discussed at length in GitHub PRs.
>> But I'm fine with keeping the discussion on mailing lists.
>
>
> The line-by-line comment support and the ability to accept PRs from others
> are handy sometimes.

The lack of context in GitHub notification emails for comments means
that I generally delete such notifications unread unless they are
basically trivial, or relate to a PR that I already know I have a strong
interest in.

> For pre-PEPs, the approach I now like is to make a local PR in my *fork* of 
> the PEPs repo. Then I'll only change it into a PR against the main repo when 
> it's time to assign a PEP number.

Using PRs to manage the development of a PEP, whether leading up to
submission or after initial submission, is fine. But I strongly prefer
discussions on the content of a PEP to be handled in the actual
discussion forum (mailing lists or Discourse). Too many people with an
interest in the subject are likely to miss discussions happening on
PRs.

Paul
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/JO75DRVEK325GI27GK4SMBNADA4BBQLC/


[Python-Dev] Handling cross-distro variations when running autoreconf?

2019-12-08 Thread Nick Coghlan
Hi folks,

While reviewing https://github.com/python/cpython/pull/17303/files, I
noticed that the configure script update removed the options for
`--runstatedir`. Those options appear to come from a Debian patch that
other distros don't yet have:
https://sources.debian.org/patches/autoconf/2.69-11/add-runstatedir.patch/

Since I use Fedora, running autoreconf locally on the review branch
didn't add those options back, but did add in various macros related
to Fedora's modular build system (we don't use those macros explicitly
ourselves, but presumably some of the m4 macros we do use include them
in their expansions when run on Fedora systems, so aclocal picked them
up).

Does anyone have any recommendations for dealing with this? My current
plan is to revert back to the configure script from master, run
autoreconf, and then use `git add -p` to only add in the desired
changes, leaving everything else as it was on master.
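
As a rough sketch of that plan (assuming it is run from the root of a
CPython checkout, with git and autoreconf available on PATH):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Restore configure to the state on master, discarding the
    # distro-specific regeneration currently on the review branch.
    run("git", "checkout", "master", "--", "configure")
    # Regenerate the build files with the locally installed autotools.
    run("autoreconf")
    # Interactively stage only the intended hunks, leaving distro-specific
    # noise (Debian's --runstatedir, Fedora's modular build macros) out.
    run("git", "add", "-p", "configure")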

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/L3EBGBFEE5WHSHZOWKUWX2NR4RZOWLMK/


[Python-Dev] Re: Rejecting PEP 606 and 608

2019-12-08 Thread Nick Coghlan
On Sun, 8 Dec 2019 at 20:39, Paul Moore  wrote:
> On Sun, 8 Dec 2019 at 09:00, Nick Coghlan  wrote:
> > For pre-PEPs, the approach I now like is to make a local PR in my *fork* of 
> > the PEPs repo. Then I'll only change it into a PR against the main repo 
> > when it's time to assign a PEP number.
>
> Using PRs to manage the development of a PEP, whether leading up to
> submission or after initial submission, is fine. But I strongly prefer
> discussions on the content of a PEP to be handled in the actual
> discussion forum (mailing lists or Discourse). Too many people with an
> interest in the subject are likely to miss discussions happening on
> PRs.

Aye, definitely - the cases where I've taken this approach have been
ones where I had co-authors, so we were discussing low-level details
from really early on, and the PR served as the venue for coming to an
internal agreement amongst the co-authors before presenting the shared
proposal to everyone else.

It's not a substitute for the broader PEP review and discussion
process, but I do believe it can help reduce some of the noise in that
broader discussion (since low-level clarifications can be kept
separate from the more conceptual reviews).

Cheers,
Nick.



-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/N2V6AFTPLCRG7LE6TCIHGS2SPKYY6FEJ/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-08 Thread Nick Coghlan
On Sat, 7 Dec 2019 at 02:42, Michael  wrote:
> A program has exactly - zero (0) of something, one (1) of something, or
> infinite. The moment it gets set to X, the case for X+1 appears.
>
> Since we are not talking about zero, or one - I guess my comment is make
> sure it can be used to infinity.

I suspect the professor saying this hadn't worked on any industrial
systems where it was critically important to degrade gracefully under
load, or done much in the way of user experience design (which is
often as much about managing the way things fail to help guide users
back towards the successful path as it is about managing how the
system behaves when things go well).

One of the first systems I ever designed involved allocating small
modular audio processing components across a few dozen different
digital signal processors. I designed that system so that the only
limits on each DSP were the total amount of available memory and the
number of audio inputs and outputs. Unfortunately, this turned out to
be a mistake, as it made it next to impossible to design a smart
scheduling engine, since we didn't have enough metadata about how much
memory each component would need, nor enough live information about
how much memory fragmentation each DSP was experiencing. So the
management server resorted to a lot of "just try it and see if it
works" logic, which made the worst case behaviour of the system when
under significant load incredibly hard to predict.

CPython's own recursion limit is a similar case - there's an absolute
limit imposed by the C stack, where if we go over it, we'll get an
unrecoverable failure (a segfault/memory access violation). So instead
of doing that, we impose our own arbitrary, lower limit, at which we
throw a *recoverable* error before we hit the unrecoverable one.
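
As a quick illustrative sketch (plain CPython, nothing beyond the
standard sys module):

    import sys

    def recurse(n=0):
        return recurse(n + 1)

    print(sys.getrecursionlimit())  # the configurable Python-level limit, 1000 by default
    try:
        recurse()
    except RecursionError:
        print("recovered: hit the Python-level limit, not the C stack")

    # Raising the limit far beyond what the C stack can actually hold
    # (e.g. sys.setrecursionlimit(1_000_000) and recursing that deep)
    # can instead crash the interpreter outright.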

So I'm broadly in favour of the general principle of the PEP. However,
I also agree with the folks suggesting that the "one million for all
the limits" approach may be *too* simplified.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/5CH4X3SDCTZ7KMOE5G4IJSI3O2U2STBF/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-08 Thread Jim J. Jewett
There is value in saying "These are things that might be limited by the 
implementation."

There is great value in documenting the limits that CPython in particular 
currently chooses to enforce.  Users may want to see the numbers, and other 
implementations may wish to match or exceed these minimums as part of their 
compatibility efforts.  This is particularly true if it affects bytecode 
validity, since other implementations often try to support bytecode as well as 
source code.

There is value in saying "A conforming implementation will support at least X", 
but X should be much smaller -- I don't want to declare MicroPython 
non-conformant just because it sets limits more reasonable for its use case.

I don't know that there is enough value in using a human-memorable number (like 
a million), or in using the same limit across resources.  For example, whether the 
number of local variables, distinct names, and constants is limited to 
1,000,000 total instead of 1,000,000 each should, I think, be a quality-of-implementation 
issue rather than a language change.

There may well be value in changing the limits supported by CPython (or at 
least CPython in default mode), or its bytecode format, but those should be 
clearly phrased as a CPython implementation PEP (or bytecode PEP) rather than 
a language-change PEP.
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/CN6NSSM2MRXQVVIOTBINP4WI6RPLHB73/


[Python-Dev] Re: Python 3.8 problem with PySide

2019-12-08 Thread Christian Tismer
On 08.12.19 09:49, Nick Coghlan wrote:
> On Fri., 6 Dec. 2019, 3:31 am Christian Tismer wrote:
> 
> Hi guys,
> 
> during the last few weeks I have been struggling quite a lot
> to make PySide run with Python 3.8 at all.
> 
> The expected problems were refcounting leaks due to the changed
> handling of heap types. But in fact, the runtime behavior was
> much worse, because I always got negative refcounts!
> 
> After exhaustively searching through the different 3.8 commits, I was
> able to isolate the three problems with a logarithmic (bisecting) search.
> 
> The hard problem was this:
> Whenever PySide creates a new type, it crashes in PyType_Ready.
> The reason is the existence of the Py_TPFLAGS_METHOD_DESCRIPTOR
> flag.
> During the PyType_Ready call, the function mro() is called.
> This mro() call results in a negative refcount, because something
> behaves differently now that this flag is set by default on mro().
> 
> When I patched this flag away during the type_new call, everything
> worked OK. I don't understand why this problem affects PySide
> at all. Here is the code that would normally be just the newType line:
> 
> 
>     // PYSIDE-939: This is a temporary patch that circumvents the problem
>     // with Py_TPFLAGS_METHOD_DESCRIPTOR until this is finally solved.
>     PyObject *ob_PyType_Type = reinterpret_cast<PyObject *>(&PyType_Type);
>     PyObject *mro = PyObject_GetAttr(ob_PyType_Type,
> Shiboken::PyName::mro());
>     auto hold = Py_TYPE(mro)->tp_flags;
>     Py_TYPE(mro)->tp_flags &= ~Py_TPFLAGS_METHOD_DESCRIPTOR;
>     auto *newType = reinterpret_cast(type_new(metatype,
> args, kwds));
>     Py_TYPE(mro)->tp_flags = hold;
> 
> 
> Isn't this manipulating the flags in the tuple type, rather than
> anything on a custom object? Or is "mro" a custom object rather than an
> MRO tuple?


No, "mro" is the default mro implementation, which is a method descriptor
of the standard PyType_Type object.
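
For reference, a minimal Python sketch of what that attribute lookup
actually returns (the 1 << 17 value is an assumption mirroring
Py_TPFLAGS_METHOD_DESCRIPTOR in CPython 3.8's object.h):

    # What PyObject_GetAttr(&PyType_Type, "mro") hands back: not an MRO
    # tuple, but the unbound method descriptor defined on type itself.
    METHOD_DESCRIPTOR_FLAG = 1 << 17   # assumed Py_TPFLAGS_METHOD_DESCRIPTOR value

    mro = getattr(type, "mro")         # the same lookup the Shiboken code performs
    print(mro)                         # <method 'mro' of 'type' objects>
    print(type(mro).__name__)          # 'method_descriptor', i.e. Py_TYPE(mro) in C
    print(bool(type(mro).__flags__ & METHOD_DESCRIPTOR_FLAG))  # True on 3.8+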

The implementation of PyType_Ready just touches the mro in a
helper function, lookup_maybe_method.

This is so funny: this side effect seems to be totally unrelated to
PySide, but it must be something we are doing wrong.


> If anything, given the combination of factors required to reproduce the
> problem, I would guess that there might be a ref counting problem in the
> __set_owner__ invocations when called on a new type rather than a
> regular instance, and that was somehow affected by the change to
> increment the type refcount in PyObject_Init rather than
> PyType_GenericAlloc.


Thanks a lot! I will try to use that to finally find what's wrong.

Cheers -- Chris


-- 
Christian Tismer :^)   tis...@stackless.com
Software Consulting  : http://www.stackless.com/
Karl-Liebknecht-Str. 121 : https://github.com/PySide
14482 Potsdam: GPG key -> 0xFB7BEE0E
phone +49 173 24 18 776  fax +49 (30) 700143-0023
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/VGHUHPRPUTNEKZL7HT22W5V2VWCL6BOZ/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-08 Thread Richard Damon
On 12/6/19 11:08 AM, Michael wrote:
> On 03/12/2019 17:15, Mark Shannon wrote:
>> Hi Everyone,
>>
>> I am proposing a new PEP, still in draft form, to impose a limit of
>> one million on various aspects of Python programs, such as the lines
>> of code per module.
>>
>> Any thoughts or feedback?
>>
>> The PEP:
>> https://github.com/markshannon/peps/blob/one-million/pep-100.rst
>>
>> Cheers,
>> Mark. 
> Shortened the mail - as I want my comment to be short. There are many
> longish ones, and have not gotten through them all.
>
> One guiding principle I learned from a professor (forgot his name sadly).
>
> A program has exactly - zero (0) of something, one (1) of something, or
> infinite. The moment it gets set to X, the case for X+1 appears.
>
> Since we are not talking about zero, or one - I guess my comment is make
> sure it can be used to infinity.
>
> Regards,
>
> Michael
>
> p.s. If this has already been suggested - my apologies for any noise.
>
The version of this philosophy that I have heard is normally: Zero, One,
Many, or sometimes Zero, One, Two, Many, and occasionally Zero, One,
Two, Three, Many.

The idea is that the handling of Zero of something is obviously a
different case from having some of it.

Having just One of something can often be treated differently than having
multiple of it, and sometimes it makes sense to allow only one of the thing.

Sometimes, having just Two of the things allows for some useful extra
interactions, and can be simpler than an arbitrary number, so sometimes
you can allow just two, but not many.

Similarly, there are some rarer cases where maybe allowing just 3 and
not more can make sense.

In general, for larger values, if you allow M, then there isn't a good
reason to not allow M+1 (until you hit practical resource limits).

I wouldn't extend that to 'infinity', as there is a big categorical
difference between an arbitrary 'many' and 'infinite': computers, being
finite machines, CAN'T actually have an infinite number of something
without special-casing it. (And if you special-case infinity, you might
not make the effort to handle large values of 'many'.)

-- 
Richard Damon
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/EZGVTV6XRS3KFBWWPPJOEHC5LVNWEYQF/