My overall problem with the PEP, and a reason I'd reject it by default, is
that it asks us to pre-emptively impose limits on things, some of which
we believe would cause problems for existing long-running applications
(limiting the total number of types, for example), without having actually
demonstrated practical benefits from such a change.  An implementation
that takes these limits and runs with them to practical effect would
provide the motivation to consider adopting them.

Otherwise it doesn't feel like it solves a real problem, and could cause
some existing users pain.  So what's our real motivation?

Picking on some nits within the PEP 611 text:

> Motivation: "poorly generated code could cause values to exceed 2^32"

Malicious or poorly generated code can always do anything, so this isn't a
strong motivation.  If you want a nice failure when we come near actual
existing physical limits, that could be done today as a PR without a PEP.
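To illustrate, a nice failure needs little more than a bounds check at
the point where the value is produced.  A minimal sketch in C, where the
32-bit field width and the error path are assumptions for illustration
rather than CPython's actual code:

    #include <stdint.h>

    /* Hypothetical guard: refuse to store a value that no longer fits
     * in the 32-bit field meant to hold it, instead of silently
     * wrapping around. */
    static int
    store_lineno(uint64_t lineno, uint32_t *out)
    {
        if (lineno > (uint64_t)UINT32_MAX) {
            return -1;  /* caller raises a nice error, e.g. SyntaxError */
        }
        *out = (uint32_t)lineno;
        return 0;
    }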

If correctness is the motivation, we could be correct without changing the
existing unchecked limits.

> : "there are two ways to use a 32 bit refcount on a 64 bit machine. One
is to limit each sub-interpreter to 32Gb of memory. The other is to use a
saturating reference count, which would be a little bit slower, but allow
unlimited memory allocation."

Please do not arbitrarily cap sub-interpreter memory usage to a small
value.  32GiB is very small today.

Also, at least one existing eternal-refcount implementation I've had
experience with demonstrated a notable hit to interpreter CPU performance,
as it required an additional test+branch within the extremely widely used
Py_INCREF and Py_DECREF macro code.
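For concreteness, a saturating count has to look roughly like the
following.  This is a sketch under my own assumptions (a bare 32-bit
counter, saturation at its maximum value), not any shipping
implementation, but the added compare on every increment and decrement
is the point:

    #include <stdint.h>

    /* Sketch of a saturating 32-bit reference count.  Once the count
     * hits the ceiling it sticks there forever and the object becomes
     * effectively immortal. */
    #define REFCNT_SATURATED UINT32_MAX

    static inline void
    incref(uint32_t *refcnt)
    {
        if (*refcnt != REFCNT_SATURATED) {  /* extra test+branch */
            (*refcnt)++;
        }
    }

    static inline void
    decref(uint32_t *refcnt)
    {
        if (*refcnt != REFCNT_SATURATED) {  /* extra test+branch */
            if (--(*refcnt) == 0) {
                /* deallocation would happen here */
            }
        }
    }

A plain 64-bit Py_INCREF, by contrast, is a single unconditional
increment.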

-gps

On Mon, Dec 9, 2019 at 6:10 AM Mark Shannon <m...@hotpy.org> wrote:

> Hi everyone,
>
> Thanks again for all your comments on PEP 611.
>
> I would like to ask a favour; please be more specific in your comments.
>
> Ideally state which part of the PEP you are disagreeing with and why you
> disagree with the relevant part of the rationale/motivation.
>
> Also, when asking for limits to be raised or removed entirely, could you
> state what you perceive to be the costs and benefits of larger limits.
> What do you believe is an acceptable cost in memory or runtime for
> larger limits?
>
> For example, you might say that the limit of one million lines of code
> per module is too small, and that it is worth a small, say 1%, impact on
> speed to allow a larger limit of 100 million.
>
> If you believe a limit would have no cost, then please give an
> explanation of why that is so.
>
> Merely saying that you would like a larger limit is pointless.
> If there were no cost to arbitrarily large limits, then I wouldn't have
> proposed the PEP in the first place.
>
> Bear in mind that the costs of higher limits are paid by everyone, but
> the benefits are gained by few.
>
> Cheers,
> Mark.