[Python-Dev] Re: Travis CI for backports not working.

2019-12-13 Thread Victor Stinner
I created https://bugs.python.org/issue39035

I like Travis CI. It's very close to what I have on my laptop, so it's
usually trivial for me to reproduce Travis CI failures. It's also
quite fast and reliable.

Azure Pipelines was very unstable a year ago. It's getting better,
but there are still occasional random bugs. They are not really
blocking, so I haven't reported them.

On Travis CI, it's possible to manually restart a single job when
the failure is a CI issue (like a random networking problem). On Azure
Pipelines, there is no way to restart jobs, not even all of them at once.
The only workaround is to close the PR and reopen it. But when you do that
on a backport PR, a bot closes the PR and deletes the branch, so the
backport PR must be recreated, which is annoying.

https://pythondev.readthedocs.io/ci.html#azure-pipelines-pr

For example, I recently saw apt-get fail to download packages in the
Linux jobs on Azure Pipelines.

In short, having multiple CIs is a good thing :-)

Victor

On Fri, 13 Dec 2019 at 02:42, Kyle Stanley  wrote:
>
> Victor Stinner wrote:
> > What is the issue? Can someone please open a bug report at
> https://bugs.python.org/ so I can try to investigate?
>
> From my understanding, it looks to be pyenv related and not something we can 
> fix on our end, at least based on the build logs: 
> https://travis-ci.org/python/cpython/jobs/624160244?utm_medium=notification&utm_source=github_status.
>  This was from a recent backport PR to 3.7 (the backport is also failing for 
> 3.8 with similar issues).
>
> On Thu, Dec 12, 2019 at 8:14 PM Victor Stinner  wrote:
>>
>> What is the issue? Can someone please open a bug report at
>> https://bugs.python.org/ so I can try to investigate?
>>
>> Victor
>>
>> On Fri, 13 Dec 2019 at 02:05, Brett Cannon  wrote:
>> >
>> > This is failing again, so I had to switch off Travis from being a 
>> > requirement (again).
>> >
>> > I'm not going to flip it back on until Travis has been stable for a 
>> > month, as I don't like being the blocker on stuff when I can help it. And 
>> > if Travis isn't stable in a month then we might need to start talking 
>> > about turning it off entirely, as flaky CI is never useful.



-- 
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/UMLNZHA4FTRKQJHR72UD3Y7N37U4PZ5Q/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-13 Thread Steve Dower

On 13Dec2019 0233, Victor Stinner wrote:

> Azure Pipelines were very unstable one year ago. It's getting better,
> but there are still some random bugs sometimes. They are not really
> blocking, so I didn't report them.


The only ones I'm aware of are macOS builds failing (which don't run on 
Travis CI, so the only thing stopping these bugs landing is Azure 
Pipelines) and network related issues. But I'm guessing since you said 
"random" bugs that they don't repro well enough to assign blame properly.



> On Travis CI, it's possible to only restart a single job manually when
> it's a CI issue (like random networking issue). On Azure Pipelines,
> there is no way to restart even all jobs at once. The only workaround
> is to close the PR and reopen it. But when you do that on a backport
> PR, a bot closes the PR and removes the branch. The backport PR must
> be recreated, it's annoying.


The UI is getting better here, but given that GitHub Actions now has 
similar CI functionality, I wouldn't expect Pipelines to focus as much on 
their integration with GitHub (in particular, not being able to authorize 
a GitHub team to log in to our Pipelines instance - as we can with Travis 
- has been preventing people from rerunning individual jobs).


If people are generally happy to move PR builds/checks to GitHub 
Actions, I'm happy to merge https://github.com/zooba/cpython/pull/7 into 
our active branches (probably with Brett's help) and disable Azure 
Pipelines. (I'd like to keep it running for post-merge builds and the 
manually triggered ones I use for Windows releases and dependencies.)



> In short, having multiple CIs is a good thing :-)


Agreed, though it would also be nice to have a way to dismiss a failure 
after investigating and merge anyway; currently only repo administrators 
can do that.


Cheers,
Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/S4BVOIW2CPPZ5TIDFPH6CPPG5P3OXA34/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Travis CI for backports not working.

2019-12-13 Thread Brett Cannon
Steve Dower wrote:
> On 13Dec2019 0233, Victor Stinner wrote:
>> Azure Pipelines were very unstable one year ago. It's getting better,
>> but there are still some random bugs sometimes. They are not really
>> blocking, so I didn't report them.
>
> The only ones I'm aware of are macOS builds failing (which don't run on 
> Travis CI, so the only thing stopping these bugs landing is Azure 
> Pipelines) and network related issues. But I'm guessing since you said 
> "random" bugs that they don't repro well enough to assign blame properly.
>
>> On Travis CI, it's possible to only restart a single job manually when
>> it's a CI issue (like random networking issue). On Azure Pipelines,
>> there is no way to restart even all jobs at once. The only workaround
>> is to close the PR and reopen it. But when you do that on a backport
>> PR, a bot closes the PR and removes the branch. The backport PR must
>> be recreated, it's annoying.
>
> The UI is getting better here, but given GitHub Actions now has similar 
> CI functionality I wouldn't expect Pipelines to focus as much on their 
> integration with GitHub (in particular, not being able to authorize a 
> GitHub team to log in to our Pipelines instance - as we can with Travis 
> - has been preventing people from rerunning individual jobs).
>
> If people are generally happy to move PR builds/checks to GitHub 
> Actions, I'm happy to merge https://github.com/zooba/cpython/pull/7 into 
> our active branches (with probably Brett's help) and disable Azure 
> Pipelines?

I'm personally up for trying this out on master, making sure everything runs 
fine, and then pushing it down into the other active branches.

-Brett

> (I'd like to keep it running for post-merge builds and the 
> manually triggered ones I use for Windows releases and dependencies.)
>
>> In short, having multiple CIs is a good thing :-)
>
> Agreed, though it would also be nice to have a way to dismiss a failure 
> after investigating and merge anyway. Only repo administrators can do that.
>
> Cheers,
> Steve
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/ZZZ3JLGKIEQE2H6D72PFP32INQ6HXH43/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Summary of Python tracker Issues

2019-12-13 Thread Python tracker


ACTIVITY SUMMARY (2019-12-06 - 2019-12-13)
Python tracker at https://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issue counts and deltas:
  open7174 ( -6)
  closed 43617 (+58)
  total  50791 (+52)

Open issues with patches: 2818 


Issues opened (30)
==================

#37404: asyncio sock_recv blocks on ssl sockets.
https://bugs.python.org/issue37404  reopened by christian.heimes

#38989: pip install selects 32 bit wheels for 64 bit python if vcvarsa
https://bugs.python.org/issue38989  opened by Ryan Thornton

#38993: cProfile behaviour issue with decorator and math.factorial() l
https://bugs.python.org/issue38993  opened by AVicennA

#38999: Python launcher on Windows does not detect active venv
https://bugs.python.org/issue38999  opened by Alexandros Karypidis

#39005: test_faulthandler: test_dump_traceback_later_file() fails rand
https://bugs.python.org/issue39005  opened by vstinner

#39010: ProactorEventLoop raises unhandled ConnectionResetError
https://bugs.python.org/issue39010  opened by Jonathan Slenders

#39011: ElementTree attributes replace "\r" with "\n"
https://bugs.python.org/issue39011  opened by mefistotelis

#39014: test_concurrent_futures: test_crash() timed out on AMD64 Fedor
https://bugs.python.org/issue39014  opened by vstinner

#39015: DeprecationWarnings of implicitly truncations by __int__ appea
https://bugs.python.org/issue39015  opened by lukasz.langa

#39017: Infinite loop in the tarfile module
https://bugs.python.org/issue39017  opened by jvoisin

#39019: Missing class getitems in standard library classes
https://bugs.python.org/issue39019  opened by BTaskaya

#39020: [AIX] module _curses fails to build since ESCDELAY has been ad
https://bugs.python.org/issue39020  opened by Michael.Felt

#39021: multiprocessing is_alive() between children processes
https://bugs.python.org/issue39021  opened by matttimms

#39025: Windows Python Launcher does not update PATH to Scripts direct
https://bugs.python.org/issue39025  opened by bluebird

#39026: pystate.h contains non-relative of initconfig.h include causin
https://bugs.python.org/issue39026  opened by gaige

#39027: run_coroutine_threadsafe uses wrong TimeoutError
https://bugs.python.org/issue39027  opened by janust

#39028: ENH: Fix performance issue in keyword extraction
https://bugs.python.org/issue39028  opened by seberg

#39029: TestMaildir.test_clean fails randomly under parallel tests
https://bugs.python.org/issue39029  opened by xtreak

#39030: Ctypes unions with bitfield members that do not share memory
https://bugs.python.org/issue39030  opened by dankreso

#39031: Inconsistency with lineno and col_offset info when parsing eli
https://bugs.python.org/issue39031  opened by lys.nikolaou

#39032: wait_for and Condition.wait still not playing nicely
https://bugs.python.org/issue39032  opened by criches

#39033: zipimport raises NameError: name '_boostrap_external' is not d
https://bugs.python.org/issue39033  opened by misho88

#39034: Documentation: Coroutines
https://bugs.python.org/issue39034  opened by agarus

#39035: Travis CI fail on backports: pyvenv not installed
https://bugs.python.org/issue39035  opened by vstinner

#39036: Add center_char attribute to str type
https://bugs.python.org/issue39036  opened by lovi

#39037: Wrong trial order of __exit__ and __enter__ in the with statem
https://bugs.python.org/issue39037  opened by maggyero

#39038: OverflowError in tarfile.open
https://bugs.python.org/issue39038  opened by jvoisin

#39039: zlib.error with tarfile.open
https://bugs.python.org/issue39039  opened by jvoisin

#39040: Wrong attachement filename when mail mime header was too long
https://bugs.python.org/issue39040  opened by mkaiser

#1021318: PyThreadState_Next not thread safe
https://bugs.python.org/issue1021318  reopened by vstinner



Most recent 15 issues with no replies (15)
==========================================

#39035: Travis CI fail on backports: pyvenv not installed
https://bugs.python.org/issue39035

#39032: wait_for and Condition.wait still not playing nicely
https://bugs.python.org/issue39032

#39030: Ctypes unions with bitfield members that do not share memory
https://bugs.python.org/issue39030

#39021: multiprocessing is_alive() between children processes
https://bugs.python.org/issue39021

#39019: Missing class getitems in standard library classes
https://bugs.python.org/issue39019

#39017: Infinite loop in the tarfile module
https://bugs.python.org/issue39017

#39014: test_concurrent_futures: test_crash() timed out on AMD64 Fedor
https://bugs.python.org/issue39014

#39005: test_faulthandler: test_dump_traceback_later_file() fails rand
https://bugs.python.org/issue39005

#38976: Add support for HTTP Only flag in MozillaCookieJar
https://bugs.python.org/issue38976

#38963: multiprocessing processes seem to "bleed" user information (GI
https://bugs.python.org/issue38963

#38961: Flaky detection of compiler

[Python-Dev] Re: PEP 611: The one million limit.

2019-12-13 Thread Chris Barker via Python-Dev
I am not qualified to comment on much of this, but here's one simple point:

1 million is a nice, round, easy-to-remember number.

But you can fit 2 million into 21 bits, and still fit three such fields
into 64 bits, so why not?
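A quick sanity check of that arithmetic (a sketch of my own, not anything from the PEP; the field names are made-up placeholders):

```python
# 2 million fits in 21 bits, and three 21-bit fields fit in a 64-bit word.
LIMIT = 2_000_000
BITS = 21

assert LIMIT < 2 ** BITS   # 2**21 == 2_097_152
assert 3 * BITS <= 64      # three fields use 63 of 64 bits

MASK = 2 ** BITS - 1

def pack(lines, instances, classes):
    """Pack three 21-bit values into a single 64-bit integer."""
    for value in (lines, instances, classes):
        assert 0 <= value <= MASK
    return lines | (instances << BITS) | (classes << (2 * BITS))

def unpack(word):
    """Recover the three 21-bit fields from the packed word."""
    return word & MASK, (word >> BITS) & MASK, (word >> (2 * BITS)) & MASK

word = pack(1_500_000, 42, 7)
assert word < 2 ** 64
assert unpack(word) == (1_500_000, 42, 7)
```

So a 2-million limit would still leave the packed-representation argument intact.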

I also noticed this:

> Reference Implementation
>
> None, as yet. This will be implemented in CPython, once the PEP has been
> accepted.

As already discussed, we really can't know the benefits without some
benchmarking, so I don't expect the PEP to be accepted without an (at
least partial) reference implementation.

-CHB

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SD7WM5XHGH42HHVMXX53Z4TKGVZEOQRI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Please be more precise when commenting on PEP 611.

2019-12-13 Thread Richard Damon
On 12/13/19 12:27 AM, Steven D'Aprano wrote:
> On Wed, Dec 11, 2019 at 10:30:15PM -0500, Richard Damon wrote:
>
>> I will say that in my many years of
>> programming experience I can't think of any great cases where a language,
>> as part of the language definition, limited the 'size' of a program to
>> good effect,
> Good thing that's not what the PEP proposes then :-)
>
> The PEP talks about CPython implementation, not hard limits on all 
> Python compilers/interpreters.

Not the way I read the PEP.

It talks about changing the language.

Under 'Enforcement' it admits it can't 'force' other implementations, but
says other implementations should generate the same errors unless doing
so hurts performance. So it is a language change, not just a set of limits
for a given implementation.

As I said originally, I have no problem with the idea of creating a
variant implementation of CPython with this sort of documented limits to
demonstrate that it does provide a real benefit. We would then be in a
good place to determine the real costs and benefits, and one of the
branches might just wither and die because it isn't useful enough anymore.

>
>> and generally such limits relegate a language into being
>> seen as a 'toy language'. 
> The designers of C recognised that, in practice, compilers will have 
> limits. Rather than demanding "NO ARBITRARY LIMITS!!!" they specified 
> *minimum* levels for compliant compilers. Often quite low limits.
>
> Actual compilers often impose their own limits:
>
> https://gcc.gnu.org/onlinedocs/cpp/Implementation-limits.html
>
> https://www.cs.auckland.ac.nz/references/unix/digital/AQTLTBTE/DOCU_012.HTM
Yes, as I said above, there is a big difference between an
implementation of a language documenting some of its limits and the
language definition itself limiting what the language can do.
>
> So if Mark's proposal relegates Python to a "toy language", we'll be in 
> good company with other "toys" that have implementation, or even 
> language, limits:
>
>
> https://stackoverflow.com/questions/5689798/why-does-java-limit-the-size-of-a-method-to-65535-byte

There is a big difference between limiting the size of a single method
and limiting the total number of classes allowed in a program. The first
can largely be worked around by refactoring the method into multiple
methods; the latter can't be.

> https://web.archive.org/web/20160304023522/http://programmers.stackexchange.com/questions/108699/why-does-javas-collection-size-return-an-int
Yes, Java made a decision early in its life cycle to lock itself into
fixed-size types.
> https://www.sqlite.org/limits.html
These are a very different type of limits. These are defines that the
programmer can change to establish what the various limits are. They can
be increased or decreased as desired by the programmer (with natural
upper limits based on the size of certain fundamental types).
>
>
> (If you read only one of those five links, please read the last.)
>
>
>> The biggest issue is that computers are
>> growing more powerful every day, and programs follow in getting bigger,
>> so any limit that we think of as more than sufficient soon becomes too
>> small (No one will need more than 640k of RAM).
> The beauty of this proposal is that since it's an implementation limit, 
> not a law of nature, if and when computers get more powerful and 
> machines routinely have multiple zettabyte memories *wink* we can always 
> update the implementation.
>
> I'm not entirely being facetious here. There's a serious point. Unlike 
> languages like C and Java, where changes have to go through a long, 
> slow, difficult process, we have a much more agile process. If the PEP 
> is accepted, that doesn't mean we're locked into that decision for life. 
> Relaxing limits in the future doesn't break backwards compatibility.
>
> "What if computers get more powerful? Our limits will be obsolete!!!" 
> Naturally. Do you still expect to be using Python 3.9 in ten years time 
> with your fancy new uber-hyper-quantum ten thousand gigabyte computer? 
> Probably not. As hardware grows, and our needs grow, so can the 
> hypothetical limits.
>
> What is valuable are *concrete*, actual (not theoretical) examples of 
> where Mark's proposed limits are too low, so that we can get a realistic 
> view of where the potential tradeoffs lie:
>
> * lose this much functionality (code that did run, or might have run, 
>   but that won't run under the PEP)
>
> * in order to gain this much in safety, maintainability, efficiency.
>
>
> And before people jump down my throat again, I've already said -- on 
> multiple occasions -- that the onus is on Mark to demonstrate the 
> plausibility of any such gains.
>
>
> Thank you for reading :-)
>
As I said, the proposal as listed on python.org is a language change,
not a proposal for an implementation that has a set of limits.

Doing the former really needs much more evidence to make it a
reasonable course of action; the latter, if rea